RedAlert: Determinacy Inference for Prolog
JAEL KRIENER and ANDY KING
School of Computing, University of Kent, CT2 7NF, UK.
submitted 1 January 2003; revised 1 January 2003; accepted 1 January 2003
Abstract
This paper revisits the problem of determinacy inference, addressing the question of how to uniformly handle cut. To this end a new semantics is introduced for cut, which is abstracted to systematically derive a backward analysis that derives conditions sufficient for a goal to succeed at most once. The method is conceptually simpler and easier to implement than existing techniques, whilst improving the latter’s handling of cut. Formal arguments substantiate correctness, and experimental work with a tool called ‘RedAlert’ demonstrates the method’s generality and applicability.
KEYWORDS: abstract interpretation, backwards analysis, Boolean formulae, constraints, cut, determinacy inference, Prolog
1 Introduction
The question of determinacy is constantly on the mind of a good Prolog programmer. It is almost as important to know that a goal will not compute an answer multiply, as it is to know that it will compute the right answer. To this effect, Prolog programmers often use the cut to literally cut off all choice points that may lead to additional answers, once a goal has succeeded. A cut that is used to (brutally) enforce determinacy in this way is termed a “red cut” (O’Keefe, 1990). O’Keefe also distinguishes further uses of cut, namely “green cuts” and “blue cuts”, which are used to avoid repeating tests in clause selection and to avoid exploring clauses which would ultimately fail. Such classifications have been introduced to facilitate reasoning about the determinising effects of cut in different contexts. Since these issues are subtle, they motivate developing semantically justified tools which aid the programmer in reasoning about determinacy in the presence of cut.
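To make the classification concrete, here is a small illustration of our own (not taken from the paper); the predicate names max_red/3 and max_green/3 are ours:

```prolog
% A red cut: removing it changes the answers.  Without the cut,
% max_red(3, 2, Z) would also (wrongly) bind Z = 2 on backtracking,
% so the cut is load-bearing for determinacy.
max_red(X, Y, X) :- X >= Y, !.
max_red(_, Y, Y).

% A green cut: the guards of the two clauses are mutually exclusive,
% so the cut does not change the set of answers; it merely avoids
% re-testing the second guard after the first has succeeded.
max_green(X, Y, X) :- X >= Y, !.
max_green(X, Y, Y) :- X < Y.
```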
In light of this close connection between determinacy and cut, it is clear that cut ought to play a prominent role in determinacy analysis. This was recognised by Sahlin (1991), twenty years ago, who proposed an analysis which checks whether a goal can succeed more than once. The analysis abstracts away from the instantiation of arguments within a call which weakens its applicability. Mogensen (1996) recognised the need to ground the work of Sahlin on a formal semantics, yet his work illustrates the difficulty of constructing and then abstracting a semantics for cut. Very recently Schneider-Kamp et al. (2010) have shown how a semantics, carefully crafted to facilitate abstraction, can be applied to check termination of logic
programs with cut on classes of calls. This raises the question of whether a semantics can be distilled which is amenable to inferring determinacy conditions. A good answer to this question will provide the basis for a tool that supports the software development process by providing determinacy conditions in the presence of cut.
1.1 Existing methods for determinacy inference
The issue of inferring determinacy in logic programs has been considered before (Lu and King, 2005; King et al., 2006), though neither of the works adequately addressed the cut. King et al. (2006) for example present a method for inferring determinacy conditions initially for cut-free Prolog programs by using suspension analysis in a constraint-based framework. Their motivation is to overcome a limitation of the method presented by Lu and King (2005) that arises from the way in which the order of the literals in the clause influences the strength of the determinacy conditions inferred. To demonstrate this problem, consider the following example:
```
diag([],[],_).
diag([(X,Y)|Xs],[(Y,X)|Ys],[_|Ds]) :- diag(Xs,Ys,Ds).

vert([],[],_).
vert([(X,Y)|Xs],[(X1,Y)|Ys],[_|Ds]) :- {X1 = -X}, vert(Xs,Ys,Ds).

rot(Xs,Ys) :- diag(Xs,Zs,Ys), vert(Zs,Ys,Xs).
```
(The constraint notation in the second clause of \text{vert} is needed to render the predicate multi-modal.) The method presented by Lu and King (2005) infers the groundness of \textit{Xs} as a sufficient condition for the determinacy of \textit{rot}(\textit{Xs},\textit{Ys}). It does not detect that the groundness of \textit{Ys}, too, is sufficient for determinacy. This is because the method only considers the left-to-right flow of information from one goal to the next. For instance, if \textit{rot}(\textit{Xs},\textit{Ys}) is called with \textit{Ys} ground, then when the call \textit{diag}(\textit{Xs},\textit{Zs},\textit{Ys}) is encountered, neither \textit{Xs} nor \textit{Zs} are ground, hence the call is possibly non-deterministic and therefore the method concludes that only groundness of \textit{Xs} is sufficient for determinacy of \textit{rot}(\textit{Xs},\textit{Ys}).
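To see that both modes really are deterministic, here is a runnable variant of our own in which the constraint {X1 = -X} is rendered with library(clpfd); the paper leaves the constraint notation abstract:

```prolog
:- use_module(library(clpfd)).

diag([], [], _).
diag([(X,Y)|Xs], [(Y,X)|Ys], [_|Ds]) :- diag(Xs, Ys, Ds).

vert([], [], _).
vert([(X,Y)|Xs], [(X1,Y)|Ys], [_|Ds]) :- X1 #= -X, vert(Xs, Ys, Ds).

rot(Xs, Ys) :- diag(Xs, Zs, Ys), vert(Zs, Ys, Xs).

% ?- rot([(1,2),(3,4)], Ys).      % Xs ground: exactly one answer
% Ys = [(-2,1),(-4,3)].
% ?- rot(Xs, [(-2,1),(-4,3)]).    % Ys ground: also exactly one answer
% Xs = [(1,2),(3,4)].
```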
In response, King et al. (2006) propose a framework in which the order of the literals in a clause does not force the implicit assumption that the determinacy of a goal is unaffected by the bindings subsequently made by a later goal. To demonstrate, notice that if \textit{Ys} is ground then the execution of \textit{vert}(\textit{Zs},\textit{Ys},\textit{Xs}) grounds \textit{Zs}, which is sufficient for the earlier goal \textit{diag}(\textit{Xs},\textit{Zs},\textit{Ys}) to be deterministic as well. They achieve this by delaying execution of a goal until a mutual exclusion condition between its clauses is fulfilled and then using suspension inference (Genaim and King, 2008) to infer a determinacy condition for the goals that constitute the body of a clause. This allows them to infer the determinacy condition \textit{Xs} \lor \textit{Ys} for the goal \textit{rot}(\textit{Xs},\textit{Ys}). Notice, however, the irony in solving a problem that arises from the failure to abstract away from the temporal order of execution by adding temporal complexity into the program.
1.2 Limitations of existing methods
However, the limitations of (King et al., 2006) become sharply apparent when considering the way that the framework is extended to cut: Their method is extended by strengthening the determinacy condition for a predicate to ensure that calls before a cut are invoked with ground arguments only. While this treatment is sufficient to handle green and blue cuts, it means that a cut will invariably strengthen the determinacy conditions derived. This is unsatisfactory when considering red cuts, given that they are used to ensure determinacy. In that case, the presence of cut ought to have a weakening effect on determinacy conditions. To demonstrate, consider the following pair of predicates:
```
memberchk(X,L) :- member(X,L), !.

member(X,[X|_]).
member(X,[_|L]) :- member(X,L).
```
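The contrast is easy to observe at the toplevel; the little driver below is our own illustration (compare_answers/2 is not from the paper) and relies only on the two definitions above:

```prolog
% Contrast member/2 with memberchk/2 on the same goal: member/2
% enumerates every occurrence of the element, while the red cut in
% memberchk/2 commits to the first solution found.
compare_answers(Item, List) :-
    findall(Item, member(Item, List), All),        % every answer
    findall(Item, memberchk(Item, List), AtMost1), % at most one answer
    format("member: ~w~nmemberchk: ~w~n", [All, AtMost1]).

% ?- compare_answers(X, [a,b,a]).
% member: [a,b,a]
% memberchk: [a]
```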
In the framework of King et al. (2006), \text{memberchk} inherits its determinacy conditions from \text{member} and (if necessary) strengthens them to ensure that the arguments in the call to \text{member} are ground. In this situation, the determinacy condition derived for \text{member} is false, which cannot be strengthened within the domain of boolean constraints. Therefore the determinacy condition derived for \text{memberchk} is false as well. However, it should be obvious that the effect of the red cut in this situation is to make \text{memberchk} deterministic independently of the determinacy of \text{member}. This example demonstrates that in the presence of cut, determinacy conditions on predicates cannot be derived by a straightforward compositional method where parent predicates inherit their conditions from their sub-predicates. Rather, the method needs to allow for weakening and disregarding of determinacy information in the transition from parent to sub-predicates. Aiming to develop a uniform technique for handling cut along these lines, this paper makes the following contributions:
- it presents a concise semantics for Prolog with cut, based on a cut-normal form, that constitutes the basis for a correctness argument (and as far as we are aware the sequence ordering underpinning the semantics is itself novel);
- it presents and proves correct a method for inferring determinacy conditions on Prolog predicates which abstracts over the order of their execution and is both conceptually simpler and easier to implement than previous techniques;
- it reports experimental work that demonstrates precision improvements over existing methods; correctness proofs are given in (Kriener and King, 2011).
2 Preliminaries
2.1 Computational domains
The basic domain underlying the semantics presented in the next section is the set of constraints, \(\text{Con}\), containing diagonalization constraints of the form \(\vec{x} = \vec{y}\), expressing constraints on and bindings to program variables. \(\text{Con}\) is pre-ordered by the entailment relation, \(\models\), and closed under disjunction and conjunction. We assume the existence of an extensive projection of \(\theta\) onto \(\vec{x}\), denoted by \(\exists_{\vec{x}}(\theta)\).
Our concrete domain is the set of downward-closed (with respect to \(\models\)), non-empty sets of constraints, \(Con^{\downarrow}\), which represent program states by capturing all possible bindings to the program variables consistent with a specific set of constraints on the same. The elements of \(Con^{\downarrow}\) are constructed thus: for any set of constraints \(\Theta\), \(\downarrow\Theta = \{\phi \mid \exists \theta \in \Theta\,.\,\phi \models \theta\}\), i.e. the set of all constraints that entail some constraint in \(\Theta\).
(Observe that \(\downarrow\{false\} = \{false\}\).) In this construction, unification is straightforwardly modelled by intersection: the result of unifying variable \(A\) with constant \(c\) at state \(\downarrow\Phi\) is simply \(\downarrow\{A = c\} \cap \downarrow\Phi\). \(Con^{\downarrow}\) is partially ordered by \(\subseteq\) and \(\langle Con^{\downarrow}, \subseteq, \{false\}, \downarrow\{true\}, \cup, \cap \rangle\) is a complete lattice. (Notice that \(\emptyset \notin Con^{\downarrow}\).)
Two projections, one an over-, the other an under-approximation, are defined on \(Con^{\downarrow}\) as follows: \(\exists_{\vec{x}}(\Theta) = \downarrow\{\exists_{\vec{x}}(\theta) \mid \theta \in \Theta\}\) and \(\forall_{\vec{x}}(\Theta) = \{\psi \in \Theta \mid \exists_{\vec{x}}(\psi) = \psi\}\). Notice that both projections on \(Con^{\downarrow}\) are defined in terms of an arbitrary existential projection on the elements of \(Con\). Each of these two is required later on to ensure soundness: the denotational and success set semantics (Sects. 3.1 and 3.2) need to be over-approximations to be correct. Intuitively, they need to capture all possible solutions, even at the cost of letting a few impossible ones slip in. The determinacy semantics (Sect. 3.3) needs to be an under-approximation, which in that context has the effect of strengthening the determinacy condition. Weakening would lead to a loss of soundness there. A renaming operator \(\rho_{\vec{x},\vec{y}}\) is defined on \(Con^{\downarrow}\) thus: \(\rho_{\vec{x},\vec{y}}(\Theta) = \exists_{\vec{y}}(\exists_{\vec{x}}(\Theta) \cap \downarrow\{\vec{x} = \vec{y}\})\). (Notice here that \(\rho_{\vec{x},\vec{y}}(\Theta) = \rho_{\vec{x},\vec{y}}(\exists_{\vec{x}}(\Theta))\).)
For a single constraint \(\theta\), \(\text{vars}(\theta)\) is the set of all variables occurring in \(\theta\).
Similar to the notion of definiteness defined by Baker and Søndergaard (1993), a constraint \(\theta\) fixes those variables, in respect to which it cannot be strengthened: \(\text{fix}(\theta) = \{y \mid \forall \psi \cdot ((\psi \models \theta \land \psi \neq \text{false}) \rightarrow \exists_\xi(\theta) \models \exists_\xi(\psi))\}\).
Put simply, \(\text{fix}(\theta)\) is the set of variables that are fixed or grounded by \(\theta\).
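For instance (a worked example of our own, with Herbrand equations), a constraint can fix one variable while leaving another merely structured:
\[
\text{fix}(A = c \wedge B = f(C)) = \{A\}
\]
since every consistent strengthening \(\psi\) of the constraint satisfies \(\exists_{A}(\theta) \models \exists_{A}(\psi)\), whereas the strengthening \(\psi = (A = c \wedge B = f(d) \wedge C = d)\) yields \(\exists_{B}(\psi) = (B = f(d))\), which is strictly stronger than \(\exists_{B}(\theta)\), so \(B \notin \text{fix}(\theta)\).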
In addition to these fairly standard constructions, we define two binary operators on \(Con^{\downarrow}\) to express more complex relations between its elements. Given \(\Theta_1, \Theta_2 \in Con^{\downarrow}\), their mutual exclusion (\(\text{mux}\)) is the set of all those \(\phi \in Con\) which fix a set of variables on which \(\Theta_1\) and \(\Theta_2\) are inconsistent:
\[
\text{mux}(\Theta_1, \Theta_2) = \{\phi \mid \exists Y \subseteq \text{fix}(\phi)\,.\,(\exists_Y(\Theta_1) \cap \exists_Y(\Theta_2) = \{\text{false}\})\}
\]
For example, given two sets \(\Theta_1 = \downarrow\{A = c \wedge B = d\}\) and \(\Theta_2 = \downarrow\{A = e \wedge B = d\}\), their mutual exclusion contains exactly those constraints which fix the variable \(A\): \(\text{mux}(\Theta_1, \Theta_2) = \{\phi \mid A \in \text{fix}(\phi)\}\), so it contains \(A = f\) for any constant \(f\). Notice that, since \(\Theta_1\) and \(\Theta_2\) do not disagree on \(B\), fixing \(B\) will not distinguish between them and \(B\) is therefore not constrained in \(\text{mux}(\Theta_1, \Theta_2)\). Observe that for \(\Theta_1, \Theta_2 \in Con^{\downarrow}\), \(\text{mux}(\Theta_1, \Theta_2) \in Con^{\downarrow}\), i.e. the \(\text{mux}\) of two closed sets is itself closed, and that \(\text{mux}(\Theta_1, \Theta_2) = \downarrow\{\text{true}\}\) if \(\Theta_1\) or \(\Theta_2\) is \(\{\text{false}\}\).
Given \(\Theta_1, \Theta_2 \in Con^{\downarrow}\), their implication is defined as the union of all those elements of \(Con^{\downarrow}\) which, when intersected with \(\Theta_1\), form subsets of \(\Theta_2\):
\[
\Theta_1 \rightarrow \Theta_2 = \bigcup\{\Phi \mid \Phi \cap \Theta_1 \subseteq \Theta_2\}
\]
For example, given two sets \(\Theta_1 = \downarrow\{B = d\}\) and \(\Theta_2 = \downarrow\{A = c \wedge B = d\}\), \(\Theta_1 \rightarrow \Theta_2 = \downarrow\{A = c\}\). Notice that this construction mirrors material implication on boolean formulae in that the following statements are true for any \(\Theta\): \(\downarrow\{\text{true}\} \rightarrow \Theta = \Theta\), \(\Theta \rightarrow \downarrow\{\text{true}\} = \downarrow\{\text{true}\}\), \(\{\text{false}\} \rightarrow \Theta = \downarrow\{\text{true}\}\), \(\Theta \rightarrow \{\text{false}\} = \{\text{false}\}\). Notice also that it is possible to recover \(\Theta_2\) from \(\Theta_1 \rightarrow \Theta_2\) by simply intersecting the latter with \(\Theta_1\): \(\Theta_1 \rightarrow \Theta_2\) is, in a sense, a systematic weakening of \(\Theta_2\) by \(\Theta_1\).
2.1.2 $Con^{\downarrow}_{seq}$
To model the indeterministic behaviour of Prolog semantically, we extend $Con^{\downarrow}$ to finite sequences of its elements which do not contain the set $\{false\}$; such sequences are denoted by $\vec{\Theta}$. Concatenation is denoted by $:$, e.g. $\Theta_1 : [\Theta_2, \Theta_3] = [\Theta_1, \Theta_2, \Theta_3]$. To obtain a top element we add a single infinite sequence $\omega = [\downarrow\{true\}, \downarrow\{true\}, \ldots]$ and define $Con^{\downarrow}_{seq} = \{(Con^{\downarrow} - \{\{false\}\})^{n} \mid n \geq 0\} \cup \{\omega\}$. $Sub_{\ell}(\vec{\Theta})$ denotes the set of all subsequences of $\vec{\Theta}$ of length $\ell$, e.g. $Sub_2([\Theta_1, \Theta_2, \Theta_3]) = \{[\Theta_1, \Theta_2], [\Theta_2, \Theta_3], [\Theta_1, \Theta_3]\}$. Given a (possibly $\{false\}$-containing) sequence $\Theta^{*}$ of elements of $Con^{\downarrow}$, $trim(\Theta^{*})$ is the result of removing all instances of $\{false\}$ from $\Theta^{*}$.
$Con^{\downarrow}_{seq}$ can be partially ordered by a prefix-ordering (as is done by Debray and Mishra (1988)). However, under that ordering, the presence of cut poses problems in defining suitable monotonic semantic operators. Therefore, we define a partial order $\sqsubseteq$ on $Con^{\downarrow}_{seq}$ thus: $\forall \vec{\Theta}_1, \vec{\Theta}_2 \in Con^{\downarrow}_{seq}\,.\,(\vec{\Theta}_1 \sqsubseteq \vec{\Theta}_2)$ iff $\exists \vec{\Phi} \in Sub_{m}(\vec{\Theta}_2)\,.\,(\vec{\Theta}_1 \subseteq_{pw} \vec{\Phi})$ where $|\vec{\Theta}_1| = m$ and $\subseteq_{pw}$ is point-wise comparison on sequences of equal length. The lattice $Con^{\downarrow}_{seq}$ is complete (see Appendix), with $\sqcap$ and $\sqcup$ defined as follows (note that $\sqcap$ is needed only to define the fixpoints):
$$\vec{\Theta}_1 \sqcap \vec{\Theta}_2 = \begin{cases} \vec{\Theta}_2 & \text{if } \vec{\Theta}_1 = \omega \\ \vec{\Theta}_1 & \text{if } \vec{\Theta}_2 = \omega \\ trim(\bigcup_{pw}\{\vec{\Theta}_1 \cap_{pw} \vec{\Phi} \mid \vec{\Phi} \in Sub_{m}(\vec{\Theta}_2)\}) & \text{otherwise} \end{cases}$$
where $|\vec{\Theta}_1| = m$, $|\vec{\Theta}_2| = n$ and $\cup_{pw}$ and $\cap_{pw}$ are point-wise union and intersection, which require their operands to be of equal length. $\sqcap$ is lifted to sets in the natural way, from which we can define $\sqcup S = \sqcap\{\vec{\Theta} \mid \forall \vec{\Phi} \in S\,.\,\vec{\Phi} \sqsubseteq \vec{\Theta}\}$ in the normal way. The operators $:$, $\exists_{\vec{x}}$, $\forall_{\vec{x}}$ and $\rho_{\vec{x},\vec{y}}$ are all lifted straightforwardly to the elements of $Con^{\downarrow}_{seq}$ as the results of applying the same operation to each member of a given $\vec{\Theta}$, e.g. $\exists_{\vec{x}}([\Theta_1, \Theta_2]) = [\exists_{\vec{x}}(\Theta_1), \exists_{\vec{x}}(\Theta_2)]$. $\bigcup\vec{\Theta}$ denotes the union of all the elements of $\vec{\Theta}$, which is itself an element of $Con^{\downarrow}$. Finally, to save some space in the presentation of the definition of $\mathcal{F}_G$ in Section 3.1, a mixed $\cap$ is defined thus: $(\Phi : \vec{\Phi}) \cap \Theta = (\Phi \cap \Theta) : (\vec{\Phi} \cap \Theta)$.
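For instance (an illustration of our own), a shorter sequence of more constrained answers lies below a longer sequence of weaker ones:
\[
[\downarrow\{A = c \wedge B = d\}] \;\sqsubseteq\; [\downarrow\{true\}, \downarrow\{A = c\}]
\]
because the length-one subsequence $[\downarrow\{A = c\}]$ of the right-hand side satisfies $\downarrow\{A = c \wedge B = d\} \subseteq \downarrow\{A = c\}$. This is precisely the kind of relationship a cut induces between the answer sequences of a pruned and an unpruned computation, which is why the sub-sequence ordering, unlike a prefix-ordering, accommodates it.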
2.2 Cut normal form
To simplify the presentation of the semantics, we require each predicate in the analysed program to be defined in a single definition of the form $p(\vec{x}) \leftarrow G_1 \,;\, G_2, !, G_3 \,;\, G_4$. For example, the memberchk and member predicates can be transformed to:
```
memberchk(X, L) :- false; (member(X, L), !, true); false.
member(X, L) :- L = [X| _]; (false, !, true); (L = [_| L_1], member(X, L_1)).
```
where true and false abbreviate post(true) and post(false) respectively. This does not introduce a loss of generality. (For details on this transformation see Appendix.)
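To make the shape of the transformation concrete, here is a hand transformation of our own of the max_red/3 predicate shown in the introduction (the rewritten clause body is ours; RedAlert's actual transformation may introduce fresh auxiliary predicates instead):

```prolog
% Original:
%   max_red(X, Y, X) :- X >= Y, !.
%   max_red(_, Y, Y).
%
% In cut normal form p(x) <- G1 ; G2, !, G3 ; G4, with the head
% unification of the cut clause made explicit in G2:
max_red_cnf(X, Y, Z) :-
    false                        % G1: no clauses precede the cut clause
    ;
    ( Z = X, X >= Y ), !, true   % G2, !, G3
    ;
    Z = Y.                       % G4: the clause after the cut clause
```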
2.3 Syntax and stratification
Given this normal form, the syntax of our programs is defined as follows:
\[
\begin{align*}
\text{Head} & ::= p(\vec{x}) \quad \text{(where } \vec{x} \text{ is a vector of distinct variables)} \\
\text{Goal} & ::= \text{post}(\theta) \mid \text{Head} \mid \text{Goal}, \text{Goal} \\
\text{Predicate} & ::= \text{Head} \leftarrow \text{Goal} \,;\, \text{Goal}, !, \text{Goal} \,;\, \text{Goal} \\
\text{Program} & ::= \epsilon \mid \text{Predicate} \cdot \text{Program}
\end{align*}
\]
where \( \text{post}(\phi) \) indicates that \( \phi \) is added to the current constraint store. Again, \( \text{vars}(G) \) is the set of variables in a goal \( G \). Further, \( \text{heads}(P) \) contains the heads of the predicates defined in \( P \).
One would expect that an off-the-shelf denotational semantics could be taken and abstracted to distill a form of determinacy inference. However, the non-monotonic nature of cut poses a problem for the definition of such a semantics. In particular, cut can be used to define inconsistent predicates, e.g.: \( p \leftarrow false \,;\, p, !, false \,;\, true \).
To construct a denotational semantics, we have to address the problem posed by predicates like \( p \), which cannot be assigned a consistent semantics.
Apt et al. (1988) address a parallel problem in the context of negation by banning the use of such viciously circular definitions. To this end, they introduce the notion of stratification with respect to negation. In their view, negation is used ‘safely’, if all predicates falling under the scope of a negation are defined independently of the predicate in which that negation occurs. Given the similarity between \( \text{cut} \) and \( \text{not} \), it is natural to adopt a similar approach towards our analogous problem.
We define stratification with respect to cut analogously, taking cut to be used safely if only predicates that are defined independently of the context of a cut can decide whether it is reached or not: A program \( P \) is cut-stratified if there exists a partition \( P = P_1 \cup \ldots \cup P_n \) such that the following two conditions are met for all \( 1 \leq i \leq n \):
1. For all \( p(\vec{x}) \leftarrow G_1 \,;\, G_2, !, G_3 \,;\, G_4 \) in \( P_i \), all calls in \( G_2 \) are to predicates in \( \bigcup_{j<i} P_j \).
2. For all \( p(\vec{x}) \leftarrow G_1 \,;\, G_2, !, G_3 \,;\, G_4 \) in \( P_i \), all calls in \( G_1 \), \( G_3 \) and \( G_4 \) are to predicates in \( \bigcup_{j\leq i} P_j \).
Henceforth, we shall simply write ‘stratified’ to mean ‘\( \text{cut}\)-stratified’.
Notice that this restriction is almost purely theoretical. In the worst case, a cut after a recursive call produces a situation like that of the predicate \( p \) above, which has no stable semantics and in practice introduces an infinite loop. In the best case, such a cut is simply redundant. Either way, we have not been able to find such a cut in an actual Prolog program, nor have we been able to come up with an example in which such a cut is put to good use.
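A contrived instance of our own of the problematic pattern, a cut reachable only after a recursive call to the predicate being defined, shows both effects at once:

```prolog
% Declaratively this definition is inconsistent (p succeeds iff p
% fails), and operationally the query ?- p. loops on the recursive
% call before the cut is ever reached.
p :- p, !, fail.
p.
```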
3 Semantics
Given these preliminaries, we can now define a denotational semantics for Prolog with cut (Section 3.1), over $Con^{\downarrow}_{seq}$, which is expressive enough to capture multiple answers, and a determinacy semantics (Section 3.3), over $Con^{\downarrow}$, suitable for abstraction to boolean conditions. The success set semantics presented in between these two (Section 3.2) provides a link between them.
### 3.1 Denotational semantics
To establish a basis for arguing the determinacy semantics presented in the following sections correct, we define a denotational semantics for Prolog with cut. The driving intuition here is that the semantics of a program $P$ is a mapping from goals called in the context of $P$ to sequences of possible answer substitutions. The context is provided by an environment ($\mu$), henceforth called a success environment to distinguish it from other types of environments, which is a mapping from predicate heads and $Con^{\downarrow}_{seq}$ to $Con^{\downarrow}_{seq}$: $Env := \text{Head} \rightarrow Con^{\downarrow}_{seq} \rightarrow Con^{\downarrow}_{seq}$. The notation $\mu[p(\vec{y}) \mapsto f]$ denotes the result of updating $\mu$ with a new assignment to $p(\vec{y})$. For a given program $P$, the set $E_P$ of success environments is partially ordered point-wise by: $\mu_1 \sqsubseteq \mu_2$ iff $\forall p(\vec{y}), \vec{\Theta}\,.\,(\mu_1(p(\vec{y}))(\vec{\Theta}) \sqsubseteq \mu_2(p(\vec{y}))(\vec{\Theta}))$. For any program $P$ the lattice $\langle E_P, \sqsubseteq, \mu_\bot, \mu_\top, \sqcup, \sqcap \rangle$ is complete, where:
- $\mu_\bot = \lambda p(\vec{y}).\lambda\vec{\Theta}.\,[\,]$
- $\mu_\top = \lambda p(\vec{y}).\lambda\vec{\Theta}.\,\omega$
- $\mu_1 \sqcup \mu_2 = \mu_3$ s.t. $\forall \vec{\Theta}, p(\vec{y}) \in \text{heads}(P)\,.\,(\mu_3(p(\vec{y}))(\vec{\Theta}) = \mu_1(p(\vec{y}))(\vec{\Theta}) \sqcup \mu_2(p(\vec{y}))(\vec{\Theta}))$
- $\mu_1 \sqcap \mu_2 = \mu_3$ s.t. $\forall \vec{\Theta}, p(\vec{y}) \in \text{heads}(P)\,.\,(\mu_3(p(\vec{y}))(\vec{\Theta}) = \mu_1(p(\vec{y}))(\vec{\Theta}) \sqcap \mu_2(p(\vec{y}))(\vec{\Theta}))$
And $\sqcup$ and $\sqcap$ are lifted to sets of environments in the normal way.
**Definition 1**
For a given stratified program $P$, its semantics - $\mu_P$ - is defined as a fixpoint of $\mathcal{F}_P$:
- $\mathcal{F}_P[\epsilon]\,\mu = \mu$
- $\mathcal{F}_P[P \cdot Ps]\,\mu = \mathcal{F}_P[Ps]\,(\mu[p(\vec{y}) \mapsto (\mathcal{F}_H[P]\,\mu)(p(\vec{y}))])$
  where $P = p(\vec{y}) \leftarrow B$
- $\mathcal{F}_H[p(\vec{y}) \leftarrow B]\,\mu = \mu[p(\vec{y}) \mapsto \lambda\Theta\,.\,(\mathcal{F}_G[G_1]\,\mu\,[\Theta]) : \vec{\Psi}]$
  where $\vec{\Psi} = \mathcal{F}_G[G_3]\,\mu\,[\Phi]$ if $\mathcal{F}_G[G_2]\,\mu\,[\Theta] = \Phi : \vec{\Phi}$, and $\vec{\Psi} = \mathcal{F}_G[G_4]\,\mu\,[\Theta]$ otherwise,
  and $B = G_1 \,;\, G_2, !, G_3 \,;\, G_4$
- $\mathcal{F}_G[G]\,\mu\,[\,] = [\,]$
- $\mathcal{F}_G[\text{post}(\phi)]\,\mu\,(\Theta : \vec{\Theta}) = trim((\downarrow\{\phi\} \cap \Theta) : \mathcal{F}_G[\text{post}(\phi)]\,\mu\,\vec{\Theta})$
- $\mathcal{F}_G[p(\vec{x})]\,\mu\,(\Theta : \vec{\Theta}) = (\rho_{\vec{y},\vec{x}}(\mu(p(\vec{y}))(\rho_{\vec{x},\vec{y}}([\Theta]))) \cap \Theta) : \mathcal{F}_G[p(\vec{x})]\,\mu\,\vec{\Theta}$
  where $p(\vec{y}) \in \text{dom}(\mu)$ and $\text{vars}(\vec{x}) \cap \text{vars}(\vec{y}) = \emptyset$
- $\mathcal{F}_G[G_1, G_2]\,\mu\,(\Theta : \vec{\Theta}) = \mathcal{F}_G[G_2]\,\mu\,(\mathcal{F}_G[G_1]\,\mu\,(\Theta : \vec{\Theta}))$
Observe that given a stratified program $P = P_1 \cup \ldots \cup P_n$, $\mathcal{F}_P$ is monotonic, under our sub-sequence order, within each stratum $P_i$. By Tarski’s theorem, $\mathcal{F}_P[P_i]$ therefore has a least fixed point; $\mu_P$ can thus be defined as the result of evaluating all strata in order from lowest to highest, starting with $\mu_\bot$ and taking the least fixed point of each stratum as the input to the evaluation of the next.
The crucial part is in $\mathcal{F}_H$, which updates the assignments in the success environment and reflects the possible indeterminacy in a predicate by splitting the
resulting sequence up into the possibility resulting from executing $G_1$ and that resulting from either executing $G_3$ or $G_4$, depending on the success of $G_2$. Given a call to a predicate, $F_G$ imposes onto each open possibility (i.e. each member of $\vec{\Theta}$) the constraints associated with that predicate in the given $\mu$. The constraints are determined by the application of $\mu$ to that predicate, after first applying projection and renaming operations required to match formal and actual parameters. Information about other variables, which is lost in that process, is recovered by intersecting the result of the predicate call with the previous state of computation. The effect of this is that constraints on the variables that the predicate is called on are strengthened in accordance with its definition, while those on all other variables are preserved. Given a goal of the form ‘post($\phi$)’ or ‘$G_1$, $G_2$’, $F_G$ does what you would expect: In the former case, it imposes $\phi$ onto each open possibility in the current state of computation, filtering out those possibilities which fail as a result. In the latter case, it successively evaluates $G_1$ and $G_2$. Notice further that given an empty sequence (i.e. a failed state of computation), $F_G$ simply returns an empty sequence, regardless of its other parameters.
**Example 1**
To illustrate, suppose $\text{member}(A,S)$ and $\text{memberchk}(A,S)$ are called at a point in a program where there is only one possible set of bindings, $\Theta = \downarrow\{A = 3 \wedge S = [3, 2, 3]\}$:
\[
\mathcal{F}_G[\text{member}(A,S)]\,\mu\,[\Theta] = [\Theta \cap \downarrow\{S = [A|\_]\},\; \Theta] \qquad
\mathcal{F}_G[\text{memberchk}(A,S)]\,\mu\,[\Theta] = [\Theta \cap \downarrow\{S = [A|\_]\}]
\]
### 3.2 Success set semantics
For the purposes of the determinacy inference, a coarser representation of the constraints under which a goal can succeed is given by the following pair of functions.
**Definition 2**
For a given program $P$, $S_G : \text{Goal} \rightarrow Con^{\downarrow}$ and $S_H : \text{Head} \rightarrow Con^{\downarrow}$ are defined as the least maps such that:
\[
\begin{align*}
S_G[\text{post}(\phi)] & = \downarrow\{\phi\} \\
S_G[p(\vec{x})] & = \rho_{\vec{y},\vec{x}}(S_H[p(\vec{y})]) \\
& \quad\text{where } p(\vec{y}) \leftarrow B \in P \text{ and } \text{vars}(\vec{x}) \cap \text{vars}(\vec{y}) = \emptyset \\
S_G[G_1, G_2] & = S_G[G_1] \cap S_G[G_2] \\
S_H[p(\vec{y})] & = \exists_{\vec{y}}(S_G[G_1] \cup S_G[G_2, G_3] \cup S_G[G_4]) \\
& \quad\text{where } p(\vec{y}) \leftarrow B \in P \text{ and } B = G_1 \,;\, G_2, !, G_3 \,;\, G_4
\end{align*}
\]
**Example 2**
To illustrate, consider again $\text{member}$ and $\text{memberchk}$: $S_G[\text{memberchk}(A,S)] = S_G[\text{member}(A,S)] = \downarrow\{S = [A|\_]\} \cup \downarrow\{S = [\_, A|\_]\} \cup \downarrow\{S = [\_, \_, A|\_]\} \cup \ldots$
Theorem 1 states that $S$ is a sound over-approximation of $\mathcal{F}$:
**Theorem 1**
\[
\bigcup \mathcal{F}_G[G]\,\mu_P\,\vec{\Theta} \subseteq (\bigcup \vec{\Theta}) \cap S_G[G]
\]
Proof: See Appendix.
3.3 Determinacy semantics
With these in place, we can construct and prove correct a group of functions to derive a set of constraints which guarantee the determinacy of a goal in the context of a program $P$, its determinacy condition, henceforth abbreviated to ‘dc’. As before, the context is provided as an environment: a determinacy environment ($\delta$) is a mapping from predicate heads to $Con^{\downarrow}$: $DEnv := \text{Head} \rightarrow Con^{\downarrow}$. Again, $\delta[p(\vec{y}) \mapsto \Theta]$ is an update operation. As above, the set $E_D$ of determinacy environments for a program $P$ is partially ordered point-wise by: $\delta_1 \sqsubseteq \delta_2$ iff $\forall p(\vec{y})\,.\,(\delta_1(p(\vec{y})) \subseteq \delta_2(p(\vec{y})))$.
The lattice $\langle E_D, \sqsubseteq, \delta_\bot, \delta_\top, \sqcup, \sqcap \rangle$ is complete, with:
- $\delta_\bot = \lambda p(\vec{y}).\,\{false\}$
- $\delta_\top = \lambda p(\vec{y}).\,\downarrow\{true\}$
- $\delta_1 \sqcup \delta_2 = \delta_3$ such that $\forall p(\vec{y}) \in \text{heads}(P)\,.\,(\delta_3(p(\vec{y})) = \delta_1(p(\vec{y})) \cup \delta_2(p(\vec{y})))$
- $\delta_1 \sqcap \delta_2 = \delta_3$ such that $\forall p(\vec{y}) \in \text{heads}(P)\,.\,(\delta_3(p(\vec{y})) = \delta_1(p(\vec{y})) \cap \delta_2(p(\vec{y})))$
And again, $\sqcup$ and $\sqcap$ are lifted to sets in the normal way.
Definition 3
The determinacy semantics - $\delta_P$ - of a program $P$ is the greatest fixpoint of $D_P[P]$:
- $D_P :: \text{Program} \rightarrow DEnv \rightarrow DEnv$
- $D_P[\epsilon]\,\delta = \delta$
- $D_P[P \cdot Ps]\,\delta = D_P[Ps]\,(\delta[p(\vec{y}) \mapsto (D_H[P]\,\delta)(p(\vec{y}))])$
  where $P = p(\vec{y}) \leftarrow B$
- $D_H :: \text{Predicate} \rightarrow DEnv \rightarrow DEnv$
- $D_H[p(\vec{y}) \leftarrow B]\,\delta = \delta[p(\vec{y}) \mapsto D_G[G_1]\,\delta \,\cap\, (S_G[G_2] \rightarrow D_G[G_3]\,\delta) \,\cap\, D_G[G_4]\,\delta \,\cap\, \Theta_1 \,\cap\, \Theta_2]$
  where $\Theta_1 = \text{mux}(S_G[G_1], S_G[G_4])$
  and $\Theta_2 = \text{mux}(S_G[G_1], S_G[G_2, G_3])$
  and $B = G_1 \,;\, G_2, !, G_3 \,;\, G_4$
- $D_G :: \text{Goal} \rightarrow DEnv \rightarrow Con^{\downarrow}$
- $D_G[\text{post}(\phi)]\,\delta = \downarrow\{true\}$
- $D_G[p(\vec{x})]\,\delta = \rho_{\vec{y},\vec{x}}(\forall_{\vec{y}}(\delta(p(\vec{y}))))$
  where $p(\vec{y}) \in \text{dom}(\delta)$
- $D_G[G_1, G_2]\,\delta = (S_G[G_2] \rightarrow D_G[G_1]\,\delta) \,\cap\, (S_G[G_1] \rightarrow D_G[G_2]\,\delta)$
Given a goal of the form ‘$\text{post}(\phi)$’, $D_G$ returns $\downarrow\{true\}$ since the goal cannot introduce indeterminacy in the computation. As before, given a predicate call, $D_G$ applies the projection and renaming necessary to match parameters before calling $D_H$. Notice that the projection used here is $\forall_{\vec{y}}$, since an under-approximation is required to derive a sufficient condition. $D_H$ maps predicates defined in cut normal form to a condition that entails: (a) the dc for $G_1$, (b) the dc for $G_3$ weakened by the success set of $G_2$ - the intuition here being that the dc for $G_3$ will only be relevant if $G_2$ can succeed and therefore its dc can be weakened by the success set of $G_2$ - (c) the dc for $G_4$, and finally mutual exclusion conditions for the two possibilities arising from the structure of the predicate definition. (The case that needs to be excluded is that of \( G_1 \) succeeding and subsequently \( G_2 \) and \( G_3 \) succeeding or subsequently \( G_2 \) failing and \( G_4 \) succeeding.) Finally, when given a compound goal ‘\( G_1, G_2 \)’, \( D_G \) returns a condition that entails both the dc for \( G_2 \) weakened by the success set of \( G_1 \) and the dc for \( G_1 \) weakened by the success set of \( G_2 \). The intuition here is that the temporal order of execution is irrelevant. Weakening the dc for \( G_2 \) by the success set of \( G_1 \) is intuitive, since one can safely assume that \( G_1 \) will have succeeded at the point when determinacy of \( G_2 \) needs to be enforced. But similarly, when enforcing determinacy on \( G_1 \), one can safely assume that \( G_2 \) will succeed, since both \( G_1 \) and \( G_2 \) need to succeed for the compound goal to succeed.
**Example 3**
Consider again \( \text{member} \) and \( \text{memberchk} \). Observe that \( D_G[\text{member}(A,S)]\delta = \{\text{false}\} \) since \( \text{mux}(S_G[G_1],S_G[G_4]) = \{\text{false}\} \) is a component of \( D_H[\text{member}(X,L)]\delta \), where \( G_1 = (L = [X|\_]) \) and \( G_4 = (L = [\_|L_1], \text{member}(X, L_1)) \). \( \text{member} \) is therefore inferred to be non-deterministic for exactly the right reason: there is no groundness condition on its parameters such that only one of its clauses can succeed.
\[
\begin{align*}
\mathcal{D}_G[\text{memberchk}(A,S)]\delta &= \rho_{(X,L),(A,S)}(\forall_{(X,L)}(\downarrow\{\text{true}\} \cap (S_G[\text{member}(X,L)] \rightarrow \downarrow\{\text{true}\}) \cap \downarrow\{\text{true}\} \\
&\qquad\qquad \cap\; \text{mux}(\{\text{false}\}, \{\text{false}\}) \cap \text{mux}(\{\text{false}\}, S_G[\text{member}(X,L), \text{true}]))) \\
&= \downarrow\{\text{true}\}
\end{align*}
\]
The crucial observation here is that \( \mathcal{D}_G[\text{member}(A,S)]\delta \) is not required in this construction at all; \( \text{memberchk} \) does not simply inherit its condition from \( \text{member} \).
Theorem 2 states that, in the context of a stratified program \( P \), the condition given by \( \mathcal{D}_G[G]\delta_P \) is indeed sufficient to guarantee the determinacy of a call to \( G \):
**Theorem 2**
If \( \Theta \subseteq \mathcal{D}_G[G]\delta_P \) then \( |\mathcal{F}_G[G]\,\mu_P\,[\Theta]| \leq 1 \) for stratified \( P \) (i.e. \( P = P_1 \cup \ldots \cup P_n \)).
Proof: See Appendix
### 4 Abstraction
In order to synthesize a determinacy inference from the above determinacy semantics, we systematically under-approximate sets of constraints with boolean formulae, drawn from \( \text{Pos} \), that express groundness conditions. \( \text{Pos} \), however, is augmented with a constant for falsity, so as to express unsatisfiable requirements. The abstract domain \( \langle \text{Pos}_\perp, \models, \text{true}, \text{false}, \land, \lor \rangle \) is a complete lattice (Armstrong et al., 1998) and to define the abstraction of a single atomic constraint we introduce:
\[
\alpha_{\vec{x}}(\theta) = \left(\bigwedge(vars(\vec{x}) \cap \text{fix}(\theta)) \wedge \neg\bigvee(vars(\vec{x}) \setminus \text{fix}(\theta))\right) \vee \bigwedge vars(\vec{x})
\]
For example, if \( \theta = A = c \) then \( \alpha_{(A)}(\theta) = A \), while \( \alpha_{(A,B,C)}(\theta) = (A \land \neg B \land \neg C) \lor (A \land B \land C) \). Notice that finiteness is achieved by limiting the scope to a finite vector of variables \( \bar{x} \). A Galois connection can then be established thus:
\[
\begin{align*}
\alpha_{\vec{x}} &: Con^{\downarrow} \rightarrow \text{Pos}_\perp & \gamma_{\vec{x}} &: \text{Pos}_\perp \rightarrow Con^{\downarrow} \\
\alpha_{\vec{x}}(\Theta) &= \bigvee\{\alpha_{\vec{x}}(\theta) \mid \theta \in \Theta \wedge \theta \neq \text{false}\} & \gamma_{\vec{x}}(f) &= \bigcup\{\Theta \in Con^{\downarrow} \mid \alpha_{\vec{x}}(\Theta) \models f\}
\end{align*}
\]
For instance, if \( \Theta = \downarrow\{A = c \wedge B = d\} \) then \( \alpha_{(A,B)}(\Theta) = A \land B \).
The following two propositions and two axioms establish relations between the concrete notions of implication, mutual exclusion and the projections and their abstract counterparts. (Notice that abstract implication is simply boolean implication.)
**Abstract Implication** Proposition 1 establishes the link between concrete (\(\rightarrow\)) and abstract (\(\Rightarrow\)) implication as follows:
**Proposition 1** If \(\Theta_1 \subseteq \gamma_{\vec{x}}(f_1)\) and \(\gamma_{\vec{x}}(f_2) \subseteq \Theta_2\) then \(\gamma_{\vec{x}}(f_1 \Rightarrow f_2) \subseteq \Theta_1 \rightarrow \Theta_2\). Proof: See Appendix.
**Abstract Mutual Exclusion** In order to construct an abstract mutual exclusion operator we need to approximate elements of \(Con\). We do so with depth-\(k\) abstractions, which are finite sets \(\Theta^{DK} \subseteq Con\) such that each atomic constraint \(\theta\) of the form \(x = t\) occurring in \(\Theta^{DK}\) has a term \(t\) whose depth does not exceed \(k\). From these we synthesize boolean requirements sufficient for mutual exclusion thus:
\[
\text{mux}^{\alpha}_{\vec{x}}(\Theta_1^{DK}, \Theta_2^{DK}) = \bigvee\left\{\bigwedge Y \;\middle|\; Y \subseteq vars(\vec{x}) \wedge \forall \theta_1 \in \Theta_1^{DK}, \theta_2 \in \Theta_2^{DK}\,.\,(\exists_Y(\theta_1) \wedge \exists_Y(\theta_2) = \text{false})\right\}
\]
Notice, again, that \(\text{mux}^{\alpha}_{\vec{x}}(\Theta_1^{DK}, \Theta_2^{DK}) = \text{true}\) if either of \(\Theta_1^{DK}\) or \(\Theta_2^{DK}\) is \(\{\text{false}\}\).
**Example 4** Consider \(\text{mux}^{\alpha}_{(X, L)}(\{L = []\}, S_G[G_4]^{DK})\) where \(G_4 = (L = [\_|L_1], \text{member}(X, L_1))\). If the depth \(k=3\), then \(S_G[G_4]^{DK} = \{\theta_1, \theta_2\}\) where \(\theta_1 = (L = [\_|L_1] \wedge L_1 = [X|\_])\) and \(\theta_2 = (L = [\_|L_1] \wedge L_1 = [\_|\_])\). In this situation \(\text{mux}^{\alpha}_{(X, L)}(\{L = []\}, S_G[G_4]^{DK})\) is \(L \lor (L \land X) = L\).
**Proposition 2** states how this abstract construction and the concrete one are related:
**Proposition 2** \(\gamma_{\vec{x}}(\text{mux}^{\alpha}_{\vec{x}}(\Theta_1^{DK}, \Theta_2^{DK})) \subseteq \text{mux}(\Theta_1, \Theta_2)\). Proof: See Appendix.
**Abstract Projections** Had we defined a specific concrete projection on single constraints, we could synthesize abstract ones in the standard way (Cousot and Cousot, 1979). However, since both concrete projection operators on \(Con^{\downarrow}\) are defined in terms of an arbitrary projection on single constraints, we follow Giacobazzi (1993, Sect. 7.1.1) in simply requiring the following to hold for any such projection:
\[
\exists_{\vec{x}}(\gamma(f)) \subseteq \gamma(\exists_{\vec{x}}(f)) \qquad \gamma(\forall_{\vec{x}}(f)) \subseteq \forall_{\vec{x}}(\gamma(f))
\]
In addition to the above two axioms, a requirement on the relation between concrete and abstract renaming functions in the context of universal projection is stipulated:
\[
\gamma_{vars(\vec{x})}(\rho^{\alpha}_{\vec{y},\vec{x}}(\forall_{\vec{y}}(f))) \subseteq \rho_{\vec{y},\vec{x}}(\forall_{\vec{y}}(\gamma_{vars(\vec{y})}(f)))
\]
4.1 Abstract success semantics
The last construction that needs to be abstracted in order to mechanise the determinacy semantics presented above is the success set construction \(S\).
Definition 4
The abstract success semantics is defined as the least maps $S^{\alpha}_G$, $S^{\alpha}_H$ such that:
\[
\begin{align*}
S^{\alpha}_G[\text{post}(\phi)] & = \alpha_{vars(\phi)}(\phi) \\
S^{\alpha}_G[p(\vec{x})] & = \rho^{\alpha}_{\vec{y},\vec{x}}(\exists_{\vec{y}}(S^{\alpha}_H[p(\vec{y})])) \\
& \quad\text{where } p(\vec{y}) \leftarrow B \in P \\
S^{\alpha}_G[G_1, G_2] & = S^{\alpha}_G[G_1] \wedge S^{\alpha}_G[G_2] \\
S^{\alpha}_H[p(\vec{y})] & = \exists_{\vec{y}}(S^{\alpha}_G[G_1] \vee S^{\alpha}_G[G_2, G_3] \vee S^{\alpha}_G[G_4]) \\
& \quad\text{where } p(\vec{y}) \leftarrow B \in P \text{ and } B = G_1 \,;\, G_2, !, G_3 \,;\, G_4
\end{align*}
\]
Proposition 3 formalises the connection between $S^{\alpha}$ and its concrete counterpart:
**Proposition 3**
$S_G[G] \subseteq \gamma_{vars(G)}(S^{\alpha}_G[G])$
Proof: standard.
Depth-$k$ abstractions can be derived analogously to groundness dependencies and therefore we omit these details.
### 4.2 Determinacy inference
Finally, an abstract determinacy environment ($\delta^{\alpha}$) is a mapping from predicate heads to Boolean formulae representing groundness conditions on the arguments of the predicate sufficient to guarantee determinacy of a call to that predicate: $ADEnv := \text{Head} \rightarrow \text{Pos}_\perp$. As in the case of determinacy environments, the set of abstract determinacy environments for a given program ($E^{\alpha}_D$) is partially ordered point-wise by $\delta^{\alpha}_1 \sqsubseteq \delta^{\alpha}_2$ iff $\forall p(\vec{y})\,.\,(\delta^{\alpha}_1(p(\vec{y})) \models \delta^{\alpha}_2(p(\vec{y})))$. The lattice $\langle E^{\alpha}_D, \sqsubseteq, \delta^{\alpha}_\bot, \delta^{\alpha}_\top, \sqcup, \sqcap \rangle$ is complete, where $\delta^{\alpha}_\bot = \lambda p(\vec{y}).\,\text{false}$, $\delta^{\alpha}_\top = \lambda p(\vec{y}).\,\text{true}$ and $\sqcup$ and $\sqcap$ are constructed analogously to the case of concrete environments. For a given program $P$, its abstract determinacy semantics $\delta^{\alpha}_P$ is defined as the greatest fixed point of $D^{\alpha}_P[P]$, where $D^{\alpha}_P$ is given by the following construction which, unsurprisingly, is very similar in structure to the definition of $D_P$. (We write $(S_G[G])^{DK}$ as $S^{DK}_G[G]$.)
**Definition 5**
\[
\begin{align*}
D^{\alpha}_P & :: \text{Program} \rightarrow ADEnv \rightarrow ADEnv \\
D^{\alpha}_P[\epsilon]\,\delta^{\alpha} & = \delta^{\alpha} \\
D^{\alpha}_P[P \cdot Ps]\,\delta^{\alpha} & = D^{\alpha}_P[Ps]\,(\delta^{\alpha}[p(\vec{y}) \mapsto (D^{\alpha}_H[P]\,\delta^{\alpha})(p(\vec{y}))]) \\
& \quad\text{where } P = p(\vec{y}) \leftarrow B \\[4pt]
D^{\alpha}_H & :: \text{Predicate} \rightarrow ADEnv \rightarrow ADEnv \\
D^{\alpha}_H[p(\vec{y}) \leftarrow B]\,\delta^{\alpha} & = \delta^{\alpha}[p(\vec{y}) \mapsto D^{\alpha}_G[G_1]\,\delta^{\alpha} \wedge (S^{\alpha}_G[G_2] \Rightarrow D^{\alpha}_G[G_3]\,\delta^{\alpha}) \wedge D^{\alpha}_G[G_4]\,\delta^{\alpha} \wedge f_1 \wedge f_2] \\
& \quad\text{where } f_1 = \text{mux}^{\alpha}_{vars(\vec{y})}(S^{DK}_G[G_1], S^{DK}_G[G_4]) \\
& \quad\text{and } f_2 = \text{mux}^{\alpha}_{vars(\vec{y})}(S^{DK}_G[G_1], S^{DK}_G[G_2, G_3]) \\
& \quad\text{and } B = G_1 \,;\, G_2, !, G_3 \,;\, G_4 \\[4pt]
D^{\alpha}_G & :: \text{Goal} \rightarrow ADEnv \rightarrow \text{Pos}_\perp \\
D^{\alpha}_G[\text{post}(\phi)]\,\delta^{\alpha} & = \text{true} \\
D^{\alpha}_G[p(\vec{x})]\,\delta^{\alpha} & = \rho^{\alpha}_{\vec{y},\vec{x}}(\forall_{\vec{y}}(\delta^{\alpha}(p(\vec{y})))) \\
& \quad\text{where } p(\vec{y}) \in \text{dom}(\delta^{\alpha}) \\
D^{\alpha}_G[G_1, G_2]\,\delta^{\alpha} & = (S^{\alpha}_G[G_2] \Rightarrow D^{\alpha}_G[G_1]\,\delta^{\alpha}) \wedge (S^{\alpha}_G[G_1] \Rightarrow D^{\alpha}_G[G_2]\,\delta^{\alpha})
\end{align*}
\]
Theorem 3 states that each parallel application of \( D_P \) and \( D_P^\alpha \) preserves the correspondence between the dc and its abstract counterpart and Corollary 1 states a direct consequence of this, namely that the same correspondence holds between the greatest fixpoints of these constructions.
**Theorem 3**
\[ \forall i \in \mathbb{N}\,.\,\gamma_{vars(G)}(D^{\alpha}_G[G]\,\delta^{\alpha}_i) \subseteq D_G[G]\,\delta_i \] where $\delta^{\alpha}_i$ (resp. $\delta_i$) is the result of $i$ applications of $D^{\alpha}_P[P]$ (resp. $D_P[P]$) to $\delta^{\alpha}_\top$ (resp. $\delta_\top$). **Proof:** See Appendix.
**Corollary 1**
\[ \gamma_{vars(G)}(D^{\alpha}_G[G]\,\delta^{\alpha}_P) \subseteq D_G[G]\,\delta_P \] **Proof:** Straightforward.
These two statements establish, in effect, that \( \delta_P^\alpha \) is correct with respect to (i.e. is a sound under-approximation of) \( \delta_P \). The significance of this is that the correctness of \( D_G[G]\,\delta_P \) as a determinacy condition for \( G \), which was proved in the last section, carries over to \( D_G^\alpha[G]\,\delta_P^\alpha \). Since the latter is finite and can be mechanised, an implementation is therefore proven to give a correct (if possibly overly strong) determinacy condition for a goal \( G \) in the context of a stratified program \( P \).
## 5 Implementation
The determinacy inference specified in the previous section is realised as a tool called ‘RedAlert’, using a simple bottom-up fixpoint engine in the style of those discussed by Codish and Søndergaard (2002). Boolean formulae are represented in CNF as lists of lists of non-ground variables. In this way, renaming is straightforward and conjunction is reduced to list concatenation (Howe and King, 2001). However, disjunction, implication and existential quantifier elimination are performed by enumerating prime implicants (Brauer et al., 2011), which reduces these operations to incremental SAT. The solver is called through a foreign language interface following Codish et al. (2008). It is interesting to note that we have not found any of the benchmarks to be non-stratified, though even if this were the case, a problematic cut could be discarded, albeit at the cost of precision.
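The representation lends itself to a very small sketch (ours, not RedAlert's actual code; the predicate name conjoin/3 is hypothetical), in which a formula is a list of clauses, each clause a list of Prolog variables standing for propositional variables:

```prolog
% Conjunction of two CNF formulae is just concatenation of their
% clause lists; renaming amounts to unifying the Prolog variables
% used in the representation.
conjoin(F1, F2, F) :- append(F1, F2, F).

% Example: with shared variables X and Y,
%   [[X]]      stands for  x
%   [[X],[Y]]  stands for  x /\ y
% ?- conjoin([[X]], [[Y]], F).
% F = [[X], [Y]].
```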
In the case of the memberchk predicate mentioned in the introduction, the implementation does indeed infer true as its determinacy condition, as desired. To discuss a more interesting case, consider the partition predicate of quicksort.
```
pt([],_,[],[]).
pt([X|Xs],M,[X|L],G) :- X =< M, !, pt(Xs,M,L,G).
pt([X|Xs],M,L,[X|G]) :- pt(Xs,M,L,G).
```
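As a quick illustration (ours; pt_demo/0 is a hypothetical helper, not part of the benchmark), partitioning a ground list around a ground pivot yields exactly one answer:

```prolog
% Collect every answer of a sample ground query; the cut in the
% second clause ensures the list of answers is a singleton.
pt_demo :-
    findall(L-G, pt([3,1,4], 2, L, G), Answers),
    format("answers: ~w~n", [Answers]).   % prints: answers: [[1]-[3,4]]
```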
The method presented in King et al. (2006) handles this cut by enforcing monotonicity on the predicate. To this end, the negation of the constraint before the cut \((X > M)\) is conceptually added to the last clause and the cut then disregarded. The
groundness requirement inferred in this way for $pt(w, x, y, z)$ is $(w \land x) \lor (x \land y \land z)$. The determinacy condition inferred for the same predicate by the method presented in this paper is $w \land (y \lor z)$, which is clearly an improvement, though still sufficient. Improvements similar to this can be observed when analysing a number of benchmark programs. Table 1 summarises the results of this comparison on 22 benchmarks (which are available at http://www.cs.kent.ac.uk/people/staff/amk/cut-normal-form-benchmarks.zip). Under ‘org’ is the number of predicate definitions in the original program. To give a measure of the impact of the cut normal form transformation, under ‘new’ is the number of new predicates introduced by it. Under ‘impr’ is the number of predicates in the original benchmark (excluding any newly introduced ones) on which the determinacy inference is improved by our method over King et al. (2006). Under ‘mean’ is the mean size of improvement (i.e. the mean number of variables which occur in the previous determinacy condition but not in the new one). The results show a uniform improvement. Note that nandc, dialog, neural and boyer give precision improvements but no determinacy conditions are inferred which involve strictly fewer variables. The runtimes for the groundness analysis, the depth-$k$ analysis and the backwards analysis, which propagates determinacy requirements against the control flow, are all under a second for all benchmarks (even though SCCs are not considered in the bottom-up fixpoint calculations). However, the overall runtime is up to an order of magnitude greater, due to the time required to calculate the mutual exclusion conditions. This is because the definition of the abstract mutual exclusion in section 4 is inherently exponential in the arity of a predicate. This is currently the bottleneck.
| benchmark | org | new | impr | mean | benchmark | org | new | impr | mean |
|---|---|---|---|---|---|---|---|---|---|
| asm | 44 | 157 | 5 | 0.6 | peval | 108 | 14 | 2 | 1 |
| crypt_wamcc | 11 | 12 | 2 | 2 | nandc | 12 | 5 | 2 | 0 |
| semi | 22 | 19 | 0 | 0 | life | 10 | 11 | 7 | 1.85 |
| qsort | 3 | 1 | 1 | 1 | ronp | 16 | 5 | 4 | 1 |
| browse | 15 | 7 | 1 | 2 | tsp | 23 | 2 | 10 | 1.4 |
| ga | 58 | 102 | 2 | 1.5 | flatten | 27 | 25 | 6 | 1.5 |
| dialog | 30 | 11 | 3 | 0 | neural | 34 | 23 | 3 | 0 |
| unify | 26 | 33 | 3 | 1.33 | nbody | 48 | 34 | 11 | 2 |
| peep | 20 | 189 | 0 | 0 | boyer | 26 | 95 | 4 | 0 |
| read | 42 | 89 | 0 | 0 | qplan | 65 | 41 | 7 | 2.57 |
| reducer | 31 | 57 | 9 | 2 | simple Analyzer | 60 | 50 | 9 | 2.22 |
Table 1. Comparison
6 Related Work
**Determinacy inference and analysis** As mentioned above, Lu and King (2005) and King et al. (2006) address the problem of inferring determinacy conditions on a predicate. Since their limitations have been discussed above, we will not repeat them here. Dawson et al. (1993) present a method for inferring determinacy information from a program by adding constraints to the clauses of a predicate which allow the inference of mutual exclusion conditions between these clauses rather than
determinacy conditions for a whole predicate. Sahlin (1991) presents a method for determinacy analysis, based on a partial evaluation technique for full Prolog which detects whether there are none, one or more than one ways a goal can succeed. This approach has been developed by Mogensen (1996) (see below). Le Charlier et al. (1994) present a top-down framework for abstract interpretation of Prolog which is based on sequences of substitutions and can be instantiated to derive an analysis equivalent to that of Sahlin (1991).
**Denotational semantics for Prolog with cut** Mogensen (1996) constructs a denotational semantics for Prolog with cut based on streams of substitutions as the basis for a formal correctness argument for the determinacy analysis. The problem of constructing a denotational semantics for Prolog with cut has been addressed before by Billaud (1990), Debray and Mishra (1988) and de Vink (1989) a good 20 years ago, around the same time that Apt et al. (1988) first published their theory of non-monotonic reasoning, introducing the idea of stratification. Billaud (1990) constructs an elegant denotational semantics based on streams of states of computation and proves it correct with respect to an operational semantics. Debray and Mishra (1988) construct a more complex semantics over a domain of sequences of substitutions, comparable to our $\text{Con}^\downarrow_{\text{seq}}$, which is partially ordered, in contrast to $\text{Con}^\downarrow_{\text{seq}}$, by a prefix-ordering, rather than a sub-sequence-ordering. Both proceed by first defining a semantics for cut-free Prolog and then extending it to cut. In both cases, they argue monotonicity for the former of these constructions and appear to assume that it carries over to the latter. Finally de Vink (1989), too, presents a denotational semantics of Prolog with cut. His approach is probably closest to ours, using environments to represent the context provided by a program in a similar fashion. However, as in the case of Debray and Mishra (1988), no argument is provided for the monotonicity of their semantic operators, which casts some doubt on whether the semantics is well-defined. Common to all these approaches is the view of cut as essentially an independent piece of syntax. This view requires cut to be treated on a par with success and failure, having an evaluation by itself, which creates the need for complex constructions involving the introduction and later elimination of cut-flags into the streams or sequences, to semantically simulate the effect that cut has on a computation. In contrast, we view cut as essentially relational. In our view, a cut has no semantics of its own, but only affects the evaluation of the goals in the context where it occurs. This relieves us of the need for systematically introducing and eliminating cut-flags.
7 Conclusions
This paper has presented a determinacy inference for Prolog with cut, which treats cut in a uniform way, while being more elegant and powerful than previously existing methods. The inference has been proved correct with respect to a novel denotational semantics for Prolog with cut. We have demonstrated the viability of the method by reporting on the performance of an implementation thereof and evaluating it against a comparable existing method.
Acknowledgements This work was inspired by the cuts that are ravaging the UK, but funded by an ACM-W scholarship and a DTA bursary. We thank Lunjin Lu and Samir Genaim for discussions that provided the backdrop for this work. We thank Michel Billaud for sending us copies of his early work and for his comments on the wider literature. We also thank an anonymous reviewer for invaluable help with the proofs in the appendix.
References
HeyTAP: Bridging the Gaps Between Users’ Needs and Technology in IF-THEN Rules via Conversation
Fulvio Corno
Politecnico di Torino
Torino, Italy
fulvio.corno@polito.it
Luigi De Russis
Politecnico di Torino
Torino, Italy
luigi.derussis@polito.it
Alberto Monge Roffarello
Politecnico di Torino
Torino, Italy
alberto.monge@polito.it
ABSTRACT
In the Internet of Things era, users are willing to personalize the joint behavior of their connected entities, i.e., smart devices and online services, by means of IF-THEN rules. Unfortunately, how to make such a personalization effective and appreciated is still largely unknown. On the one hand, contemporary platforms to compose IF-THEN rules adopt representation models that strongly depend on the exploited technologies, thus making end-user personalization a complex task. On the other hand, the usage of technology-independent rules envisioned by recent studies opens up new questions, and the identification of available connected entities able to execute abstract users’ needs becomes crucial. To this end, we present HeyTAP, a conversational and semantic-powered trigger-action programming platform able to map abstract users’ needs to executable IF-THEN rules. By interacting with a conversational agent, the user communicates her personalization intentions and preferences. The user’s inputs, along with contextual and semantic information related to the available connected entities, are then used to recommend a set of IF-THEN rules that satisfies the user’s needs. An exploratory study on 8 end users preliminarily confirms the effectiveness and the appreciation of the approach, and shows that HeyTAP can successfully guide users from their needs to specific rules.
CCS CONCEPTS
• Human-centered computing → Natural language interfaces; Ubiquitous and mobile devices; User studies; • Computing methodologies → Natural language processing; Knowledge representation and reasoning; • Information systems → Recommender systems.
KEYWORDS
Trigger-Action Programming, Abstraction, Conversational Agent, Recommender System, Semantic Web, Internet of Things
1 INTRODUCTION
In the contemporary Internet of Things (IoT) era, people can interact with a multitude of smart devices, always connected to the Internet, in the majority of today’s environments [6]. Smart lamps, thermostats, and many other Internet-enabled appliances are becoming popular in homes and workplaces. Furthermore, by using PCs and smartphones, users can access a variety of online services, ranging from social networks to news and messaging apps. In this complex scenario, the End-User Development (EUD) vision aims at putting personalization mechanisms in the hands of end users, i.e., the subjects who are most familiar with the actual needs to be met [13]. Through visual trigger-action programming platforms such as IFTTT [3] and Zapier [4], users can “program” the joint behaviors of their own connected entities, i.e., smart devices and online services, by defining trigger-action (IF-THEN) rules such as “if I publish a photo on Facebook, then upload it to my Google Drive”, or “if the security camera detects a movement, then blink the kitchen lamp.”
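To make the trigger-action model concrete, the following minimal sketch shows one possible in-memory representation of such a rule; the class and field names are our own illustration and are not taken from IFTTT, Zapier, or HeyTAP.

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    service: str  # the entity exposing the event, e.g. a camera
    event: str    # the event to monitor

@dataclass
class Action:
    service: str  # the entity executing the command, e.g. a lamp
    command: str  # the command to run

@dataclass
class Rule:
    trigger: Trigger
    action: Action

# "if the security camera detects a movement, then blink the kitchen lamp"
rule = Rule(
    trigger=Trigger(service="SecurityCamera", event="motion_detected"),
    action=Action(service="KitchenLamp", command="blink"),
)
```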
Despite apparent simplicity, previous studies [8, 15, 19, 20] highlighted many interoperability, scalability, and understandability challenges suffered by contemporary trigger-action programming platforms. In such environments, smart devices and online services are typically modeled on the basis of the underlying brand or manufacturer [8]: as the number of supported technologies grows, so does the design space, i.e., the combinations between different triggers (ifs) and actions (thens), and users often experience difficulties in discovering rules and related functionality [20]. As a result, trigger-action programming becomes a complex task for people without any previous programming experience [16]. Some previous works, e.g., [8, 13], tackled the identified issues by proposing to move towards a new breed of trigger-action programming platforms supporting a higher level of abstraction, with abstract and technology-independent rules that can be adapted to different contextual situations. With triggers such as “when user is sleeping” and actions such as “illuminate the room”, users can personalize their connected entities while saving time and reducing errors, without the need of explicitly programming every single involved technology. While this vision seems promising, however, it is yet unclear how to effectively move from abstract users’ needs to the real devices and services needed for implementing them. How can a system decide how to “illuminate a room”? Is turning the lights on the right choice for the user? Does the user prefer to open the blinds, e.g., because she is interested in saving energy?
Figure 1: HeyTAP is a conversational and semantic-powered platform for personalizing the behavior of connected entities. First, it allows users to communicate their personalization intentions and preferences (a). Then, it analyzes users’ inputs, along with contextual and semantic information related to the available connected entities, to recommend a set of IF-THEN rules able to map the abstract users’ needs to real connected entities (b).
In this paper, we present HeyTAP, a conversational and semantic-powered platform able to map abstract users’ needs to executable IF-THEN rules. By exploiting a multimodal interface, the user can interact with a conversational agent, either by typing or by voice, to communicate her personalization intentions for different contexts, e.g., to personalize her room’s temperature when she is near home (Figure 1a). By interacting with the agent, the user can also specify her preferences on how to reach the goal of her personalization intention, e.g., convenience and preserving security in Figure 1a.
To model such concepts, we extended the EUPont model [7], a semantic representation for End-User Development in the IoT. We exploited the OWL classes and individuals of EUPont to categorize triggers and actions offered by the user’s connected entities in terms of provided functionality, and to model contextual information, e.g., the devices and services owned by the user and the relative position. Furthermore, we added classes and restrictions to automatically characterize triggers and actions on the basis of the user’s preferences, e.g., to discriminate between energy demanding and privacy invasive behaviors. All this semantic information is used to suggest a set of IF-THEN rules that satisfies the user’s needs, i.e., intentions and preferences. The user can finally inspect the recommended rules in the multimodal interface and select one or more of them to personalize her connected entities (Figure 1b).
To understand to what extent HeyTAP is able to successfully guide participants from abstract needs to actual IF-THEN rules, we ran an exploratory experiment with 8 users. In the study, we challenged participants in freely personalizing a set of connected entities in different contexts. Results confirm the effectiveness of the approach, and show that HeyTAP can successfully “translate” abstract users’ needs into IF-THEN rules that can be instantiated and executed by contemporary trigger-action programming platforms. Although participants expressed their personalization intentions with different levels of abstraction, the tool was able to address 90.63% of the collected needs, providing IF-THEN recommendations that satisfied the participants. The collected participants’ feedback also highlights possible improvements that could inform future works that aim at assisting users in personalizing their smart devices and online services.
2 RELATED WORKS
2.1 Trigger-Action Programming: Opportunities and Issues
One of the most popular paradigms to empower end users in directly programming their connected entities is trigger-action programming [11, 19]. By defining trigger-action (IF-THEN) rules, users can connect a pair of devices or online services in such a way that, when an event (the trigger) is detected on one of them, an action is automatically executed on the other. Trigger-action programming offers a very simple and easy to learn solution for creating end-user applications [5], and trigger-action programming platforms such as IFTTT and Zapier are becoming popular [10, 15].
https://www.w3.org/OWL/, last visited on January 18, 2020
Recently, researchers started to investigate different aspects of these solutions, e.g., through empirical characterization of usage performances [18] and large-scale analysis of publicly shared rules [20]. Despite apparent simplicity, indeed, the process of composing IF-THEN rules in trigger-action programming platforms has been found to be a complex task for non programmers [16], and the expressiveness and understandability of solutions like IFTTT have been criticized since they are rather limited [15, 19, 20].
Barricelli and Valtolina [5] analyzed the most popular end-user tools for personalizing connected entities, including IFTTT, and found that some of them “offers a too complex solution for supporting end users in expressing their preferences.” By evaluating thousands of trigger-action rules publicly shared on IFTTT, Ur et al. [19] found that the trigger-action approach can be both useful and usable for end-user development in IoT settings like smart homes, but they also found that the level of abstraction end users employ to express triggers needs to be better explored: many users, indeed, express triggers one level of abstraction higher, e.g., “when I am in the room” instead of “when motion is detected by the motion sensor.” In another study, Ur et al. [20] found that a large number of users are using IFTTT to create a diverse set of IF-THEN rules, which represents a very broad array of connections for filling gaps in devices and services functionality. According to the authors, however, the continuous growth of supported entities and connections highlights the need to provide users with more support for discovering functionality and managing collections of IF-THEN rules. The analysis also emphasizes the future need of making “IFTTT rules more expressive.” Similarly, Huang and Cakmak [15] conducted two user studies to systematically study how different types of triggers and actions, e.g., states vs. events, influence the understandability of trigger-action artifacts. They found users’ inconsistencies in interpreting the behavior of IF-THEN rules and some errors in creating programs with a desired behavior.
2.2 Towards a Higher Level of Abstraction
The aforementioned issues are strictly related to the “low-level” of abstraction of the adopted representations. Contemporary trigger-action programming platforms, indeed, mainly model smart devices and online services on the basis of the underlying brand or manufacturer, thus opening the way to interoperability, scalability, and understandability issues [8]: to program their IoT ecosystems, users need to know all the involved technologies, and they have to define many different rules even if they perform the same logical operations.
To overcome the drawbacks of low-level representations, different previous works [8, 13, 19] envisioned a new breed of trigger-action programming platforms supporting a higher level of abstraction. In the context of smart homes, for example, Funk et al. [12] asserted that we need “a new approach aimed at first capturing end-users’ intentions and potential usage scenarios, then providing this information to a control system that learns to resolve intentions and scenarios for available devices in the context.” Following this need, Ghiani et al. [13] proposed a novel trigger-action programming platform to let end users personalize the contextual behavior of their IoT applications through trigger-action rules. By exploiting an authoring tool, in particular, users can specify trigger-action rules that indicate the desired specific application behavior for the target contexts of use, e.g., “when user is sleeping, do turn-off bedroom television.” Corno et al. [8], instead, developed EUrOnt, a high-level representation for IoT personalization that allows users to model abstract trigger-action rules like “if I enter a closed space, then illuminate it.” Such rules can be adapted to different contextual situations, independently of manufacturers, brands, and other technical details. Besides describing the model, the authors presented its integration in the architecture of a trigger-action programming platform, and they explored the advantages of using the model in the definition of trigger-action rules thanks to a user study. They found that the usage of a higher level of abstraction allows users to define IF-THEN rules with fewer errors and in less time with respect to existing solutions.
While a higher level of abstraction in IF-THEN rules is a promising direction, the identification of the real devices and services to be used to satisfy users’ needs becomes crucial. In this paper, we aim at presenting a conversational and semantic-powered platform able to map abstract users’ needs to IF-THEN rules that can be executed by available connected entities.
2.3 Programming the IoT via Conversation and Recommendations
By using popular conversational agents such as Amazon Alexa [1] and Google Assistant [2] it is now possible to interact with a variety of different smart devices and online services via conversation. To the best of our knowledge, however, the only example of a conversational system that allows users to personalize connected entities through the definition of IF-THEN rules is InstructableCrowd, a research prototype developed by Huang et al. [16]. InstructableCrowd is a crowd-sourcing system that enables users to create IF-THEN rules based on their needs. By exploiting a custom user interface on their smartphones, users can converse with crowd workers to describe some problems they are encountering, e.g., being late for a meeting. Crowd workers can therefore exploit a tailored interface to combine triggers and actions in appropriate IF-THEN rules that are then sent back to the users’ phones.
In our work, we focus on a similar goal by trying to automatically map abstract users’ needs to actual IF-THEN rules, i.e., without the help of other users such as crowd workers. The idea is to adopt a semantic-based approach to analyze users’ inputs and contextual information to recommend a set of appropriate IF-THEN rules from which a user can choose. Recommendations, indeed, could be useful to help end users use trigger-action programming systems, and advances in EUD have expanded the opportunities for offering recommendations [14]. In this context, in particular, some recent works investigated how to provide users with recommendations. Yao et al. [21], for example, developed a probabilistic framework to suggest relevant smart “things” to be personalized based on user interests. Corno et al. [9], instead, proposed RecRules, a semantic recommendation system that suggests trigger-action rules on the basis of content-based and collaborative information. None of these works, however, explores how to calculate recommendations by extracting users’ needs via conversation.
3 HEYTAP: ARCHITECTURE AND IMPLEMENTATION
In this section, we first describe the architecture of HeyTAP, and we highlight the choices we made in implementing a first prototype of the system.
Figure 2 shows the architecture of HeyTAP, our conversational and semantic-powered platform for personalizing the behavior of connected entities. The web-based multimodal User Interface (UI) allows the interaction between the user and the conversational agent, and it is responsible for visualizing suggested IF-THEN rules. The UI is implemented through the Angular framework, a TypeScript-based open-source web application framework. The conversational agent, instead, exploits DialogFlow as the conversational engine. The HeyTAP Server stores all the data related to users, connected entities, and rules, and it interacts with DialogFlow to get users’ inputs and provide responses. Furthermore, it is responsible for calculating recommendations on the basis of the collected users’ inputs.
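HeyTAP’s server-side integration is not detailed in this section; purely as an illustration, the sketch below shows how a server could receive the parsed user input from a DialogFlow-style fulfillment webhook. The endpoint path and the payload fields are assumptions based on common Dialogflow ES conventions, not HeyTAP’s actual implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/dialogflow/webhook", methods=["POST"])  # hypothetical endpoint
def webhook():
    body = request.get_json(force=True)
    # Dialogflow ES-style payload (assumed): matched intent and extracted parameters
    query_result = body.get("queryResult", {})
    intent_name = query_result.get("intent", {}).get("displayName", "")
    parameters = query_result.get("parameters", {})

    # A real server would store the partial intention and decide what to ask next.
    reply = f"Got it: {intent_name} with {parameters}. When should this happen?"
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```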
HeyTAP supports users in moving from their abstract needs to IF-THEN rules that involve real smart devices and online services in 2 main steps, namely conversation and recommendation.
3.1 Conversation
By interacting with the conversational agent (Figure 2a), either by typing or by voice, the user first expresses her personalization intentions. In this phase, she can use different levels of abstraction, and she can refer to different contexts, e.g., she can generically communicate her intention of programming the temperature of a room, or she can refer to a specific lamp in the kitchen to be turned on. Then, the user can communicate her preferences, i.e., specify how to reach the goal of her personalization intention. To model such concepts, we developed EUPont-conversational (Figure 2b), an extension of the EUPont [7] model. We exploited, in particular, the instantiation of EUPont for IFTTT. Such an ontology abstracts details such as brands and manufacturers by categorizing IFTTT’s low-level triggers and actions under a hierarchy of OWL classes that model the provided functionality. Furthermore, the ontology models all the supported connected entities on the basis of their capabilities, and can store contextual information such as entities’ position and ownership.
3.1.1 Intentions Elicitation. As exemplified in Figure 1a, personalization intentions are extracted in 2 subsequent phases representing the action and the trigger of an IF-THEN rule, respectively. First, the conversational agent asks the user what she would like to be done, e.g., “when” or “if”, to split the intention into the corresponding action and trigger. For the sake of simplicity, we focus on detecting simple intentions, i.e., intentions that can be mapped onto rules with a single trigger and a single action. This choice is also enforced by the format of the IFTTT rules modeled by EUPont, which do not model trigger conditions and actions involving multiple connected entities. Both action and trigger intentions, in particular, are defined as a set of the following elements, that are automatically extracted by the DialogFlow conversational engine from the user’s text:
- a functionality, i.e., how users would like to act on their physical and virtual environments. Examples include “increase,” “turn off,” and “get.” To model functionalities, we exploited the OWL classes of the semantic model that classifies all the available IFTTT triggers and actions according to their final goal;
- a category, i.e., on which category of connected entities users would like to act. Examples include “temperature,” “lighting,” and “communication.” To model categories, we added new OWL classes to characterize all the available IFTTT triggers and actions;
- an entity, i.e., a generic indication of a connected entity type. Examples include “door,” “camera,” and “social network.” To model entities, we exploited the OWL classes that provide a hierarchy of connected entities ranging from physical to virtual objects;
- a technology, i.e., a specific indication of a particular technology. Examples include “Philips Hue,” “Nest,” and “Facebook.” To model technologies, we exploited the OWL individuals representing the full set of contemporary IFTTT “services”;
- a where, i.e., a location in which executing an action or monitoring a trigger. Examples include “kitchen,” “home,” and “office.” To model locations, we specialized the OWL Location class into a series of sub-classes modeling homes, rooms, and workplaces;
- a when, i.e., a time condition. Examples include “in the evening,” “at 10 PM,” and “on 27 May.” To model time, we used the OWL individuals representing the triggers offered by the Date & Time IFTTT service.
Users can express their intentions with different levels of abstraction, by specifying all the described elements, or a subset of them. Table 1 reports some examples of trigger and action intentions. While “Increase the air quality” is a very generic action intention that includes only a functionality and a category, the trigger intention “when the temperature on my home Nest thermostat drops below 20 degrees” is far more specific, since it includes a functionality (increase), a category (temperature), an entity (the thermostat), a technology (Nest), and a where (home). If HeyTAP is not able to map an action or a trigger intention onto the modeled elements, it asks the user to reformulate her message. Moreover, the tool explicitly warns the user in case of personalizations that are modeled but not available, e.g., when the available connected entities do not provide a specific functionality.
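To illustrate the element structure just described, the sketch below models an intention as a record with optional slots and instantiates the most abstract and the most specific examples of Table 1; the class and field names are ours, not part of EUPont or HeyTAP.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intention:
    kind: str                          # "ACTION" or "TRIGGER"
    functionality: Optional[str] = None
    category: Optional[str] = None
    entity: Optional[str] = None
    technology: Optional[str] = None
    where: Optional[str] = None
    when: Optional[str] = None

# "Increase the air quality." -> very abstract: only a functionality and a category
abstract_action = Intention(kind="ACTION", functionality="Increase", category="Air quality")

# "When the temperature on my home Nest thermostat drops below 20 degrees."
specific_trigger = Intention(
    kind="TRIGGER", functionality="Increase", category="Temperature",
    entity="Thermostat", technology="Nest", where="Home",
)
```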
3.1.2 Preferences Elicitation. Preferences represent a filter on how to implement the user’s personalization intentions. We derived the available preferences from the work of Funk et al. [12], who analyzed the temporal, preferential, technical, and social complexity of mapping high-level end-user intents to rules in the smart home environment.
1 https://angular.io, last visited on January 21, 2020
2 https://dialogflow.com/, last visited on January 21, 2020
3 http://elite.polito.it/ontologies/eupont-ifttt.owl, last visited on January 21, 2020
4 https://ifttt.com/services, last visited on January 21, 2020
Figure 2: The architecture of HeyTAP. The user interacts with a conversational agent (a) to communicate to the system her personalization intentions for different contexts, along with her preferences, with different levels of abstraction. By exploiting a semantic model, i.e., EUPont-conversational (b), the server analyzes the user’s input, along with contextual information related to the available connected entities (c), to infer a set of IF-THEN rules that satisfies the user’s needs (d). The recommended rules are visualized in the multimodal user interface to the user, who can decide (e) which rules have to be instantiated and executed on real smart devices and online services (f).
Table 1: Some examples of action and trigger intentions that can be extracted by DialogFlow. Users can adopt different levels of abstraction by specifying one or more elements characterizing an intention.
<table>
<thead>
<tr>
<th>Example</th>
<th>Type</th>
<th>Functionality</th>
<th>Category</th>
<th>Entity</th>
<th>Technology</th>
<th>Where</th>
<th>When</th>
</tr>
</thead>
<tbody>
<tr>
<td>Increase the air quality.</td>
<td>ACTION</td>
<td>Increase</td>
<td>Air quality</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Open the window in the kitchen.</td>
<td>ACTION</td>
<td>Open</td>
<td></td>
<td>Window</td>
<td></td>
<td>Kitchen</td>
<td></td>
</tr>
<tr>
<td>Get a Telegram notification on my smartphone.</td>
<td>ACTION</td>
<td>Get</td>
<td>Communication</td>
<td>Smartphone</td>
<td>Telegram</td>
<td></td>
<td></td>
</tr>
<tr>
<td>When I’m on my way home by car.</td>
<td>TRIGGER</td>
<td>Arrive</td>
<td></td>
<td>Car</td>
<td></td>
<td>Home</td>
<td></td>
</tr>
<tr>
<td>In the evening.</td>
<td>TRIGGER</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Evening</td>
</tr>
<tr>
<td>When the temperature on my home Nest thermostat drops below 20 degrees.</td>
<td>TRIGGER</td>
<td>Increase</td>
<td>Temperature</td>
<td>Thermostat</td>
<td>Nest</td>
<td>Home</td>
<td></td>
</tr>
</tbody>
</table>
As exemplified in Figure 1a, the conversational agent asks the user whether she is interested in:
- convenience, i.e., using all the available connected entities in an unrestricted manner;
- sustainability, i.e., acting on the available connected entities by trying to save energy;
- security, i.e., defining IF-THEN rules that preserve the user’s security; and,
- privacy, i.e., defining IF-THEN rules that preserve the user’s privacy.
The user can also ignore the request, e.g., by saying “don’t mind.” While different methods could be used to model the reported preferences, in our first implementation of HeyTAP we adopted a simple approach based on semantic filters. To filter intentions according to the described preferences, in particular, we introduced a set of OWL classes and restrictions in EUPont-conversational to automatically infer the “behavior” of the triggers and actions offered by the supported connected entities. For the sake of simplicity (see the sketch after this list):
- an energy-demanding behavior is defined as a trigger or an action that involves a Turn on functionality, i.e., a behavior that results in a smart device that remains permanently turned on unless someone (or some other rule) turns it off;
- a privacy-invasive behavior is defined as a trigger or an action that involves smart devices analyzing personal images, e.g., Cameras, or “public” online services such as Social Networks;
- a security-critical behavior is defined as a trigger or an action that involves smart devices controlling the access to physical buildings, e.g., Doors and Windows.
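The actual classification is expressed as OWL classes and restrictions in EUPont-conversational; the following Python sketch only illustrates the same three heuristics over a simplified trigger/action record (the class and field names are ours, not the ontology’s).

```python
from dataclasses import dataclass

@dataclass
class Capability:
    functionality: str  # e.g. "TurnOn", "TakePicture", "OpenDoor"
    entity_type: str    # e.g. "Lamp", "Camera", "Door", "SocialNetwork"

def is_energy_demanding(c: Capability) -> bool:
    # behaviors that leave a device permanently on unless something turns it off
    return c.functionality == "TurnOn"

def is_privacy_invasive(c: Capability) -> bool:
    # devices analyzing personal images, or "public" online services
    return c.entity_type in {"Camera", "SocialNetwork"}

def is_security_critical(c: Capability) -> bool:
    # devices controlling access to physical buildings
    return c.entity_type in {"Door", "Window"}
```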
3.2 Recommendation
The HeyTAP Server uses the user’s input, i.e., her intentions and preferences, along with the contextual information related to the available connected entities (Figure 2c), to infer a set of IF-THEN rules that include available and real connected entities, i.e., with IFTTT triggers and actions (Figure 2d).
By adopting a reasoning process, the server starts by analyzing the user’s action and trigger intentions and extracts a set of appropriate IFTTT actions and triggers, respectively. In this phase, it first extracts all the available actions, and it filters them according to the available intention elements, i.e., functionality, category, entity, technology, where, and when. The same steps are then used to extract a set of triggers, which are combined with the retrieved actions to generate a first set of IF-THEN rules. Such a set of IF-THEN rules is finally filtered by considering the user’s preferences. If the user is interested in convenience, for example, such a filter has no effect. If the user is interested in preserving her privacy, instead, all the rules involving privacy-invasive behaviors are excluded. As shown in Figure 1b, the final set of recommended rules is then visualized to the user (Figure 2e). The user can select (and complete with any additional details) one or more recommended rules involving real smart devices and online services (Figure 2f).
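A minimal, purely illustrative sketch of this reasoning step is given below: candidate actions and triggers are filtered against the elements of the user’s intentions, combined into rules, and then filtered again by preference. All names are ours, and the real system performs these steps with semantic (OWL) reasoning rather than the string matching used here.

```python
from itertools import product

def matches(capability: dict, intention: dict) -> bool:
    # keep a capability only if it agrees with every element the user specified
    return all(capability.get(k) == v for k, v in intention.items() if v is not None)

def recommend(actions, triggers, action_intent, trigger_intent, preference=None):
    candidate_actions = [a for a in actions if matches(a, action_intent)]
    candidate_triggers = [t for t in triggers if matches(t, trigger_intent)]
    rules = list(product(candidate_triggers, candidate_actions))
    if preference == "privacy":
        rules = [(t, a) for t, a in rules
                 if not (t.get("privacy_invasive") or a.get("privacy_invasive"))]
    elif preference == "sustainability":
        rules = [(t, a) for t, a in rules if not a.get("energy_demanding")]
    # "convenience" (or no preference) leaves the candidate set unchanged
    return rules
```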
4 USER STUDY
To understand to what extent HeyTAP is able to successfully guide users from abstract needs to actual IF-THEN rules we performed an exploratory study with 8 participants. We were guided by the following research questions:
RQ1. How would users interact with HeyTAP?
RQ2. Is HeyTAP able to map abstract users’ needs to executable IF-THEN rules?
RQ3. What is the users’ satisfaction in using HeyTAP?
4.1 Participants
We recruited participants by sending emails to students enrolled in different university courses and private messages to our social circles. In the end, we involved 8 students (3 females and 5 males) with a mean age of 26 years ($SD = 1.73$, range: 24–30).
All the participants had a computer science background. On a Likert-scale from 1 (Very Low) to 5 (Very High), they stated their familiarity with the trigger-action programming approach ($M = 3.90$, $SD = 1.22$). Seven participants had never used any trigger-action programming platform, while only one of them had used IFTTT a few times, sporadically.
4.2 Procedure
We devised a controlled experiment during which participants were requested to personalize a scenario by impersonating a fictional user owning a set of 24 connected entities in different contexts. The fictional user was subscribed to different online services (like Facebook and Gmail) and owned 2 smartphones. Furthermore, her home and her office were equipped with smart devices and systems, including smart doors, lights, and air conditioning systems. At the beginning of the study, we introduced participants to trigger-action programming for personalizing connected entities, and we gave them a sheet of paper with the full list of connected entities available in the scenario, including the entity’s type (e.g., lights), brand (e.g., Philips Hue), and position (e.g., ubiquitous, kitchen, office, ...). In a 15-minute session, participants were then free to interact with HeyTAP to communicate their personalization intentions and preferences. Whenever HeyTAP provided recommendations, we asked participants to evaluate whether the suggestions fitted their needs. At the end of the study, we performed a semi-structured debriefing session with each participant.
4.3 Measures
Since the final goal of HeyTAP is to suggest IF-THEN rules based on users’ intentions and preferences, we defined the metrics to be collected by taking inspiration from the work of Knijnenburg et al. [17], i.e., a framework to evaluate recommender systems with a user-centric approach. According to the framework, it is important to distinguish the following aspects:
• Objective System Aspects (OSA), e.g., the proposed suggestions;
• Subjective System Aspects (SSA), i.e., the users’ perception of the objective system aspects;
• User Experience (EXP), i.e., users’ evaluation of their interaction with the system; and,
• Interaction (INT), i.e., users’ behaviors.
Table 2 describes the measures we collected during the study, with the indication of the related aspects, and the modality with which they have been collected. We used logs to evaluate the effectiveness of HeyTAP in addressing users’ needs by means of IF-THEN recommendations (OSA), and to record the interaction (INT) between participants and HeyTAP. We measured, in particular, the number of exchanged messages, the number of expressed needs and whether or not they resulted in some recommendations, and the level of abstraction adopted by the participants in expressing their intentions, i.e., which of the elements described in Section 3.1.1 they specified.
Whenever HeyTAP provided recommendations for an expressed need, we asked participants to answer a Likert-scale question from 1 (absolutely no) to 5 (absolutely yes) to evaluate whether the provided suggestions fitted their need, i.e., the Perceived Recommendation Quality (PRQ). Furthermore, in the debriefing session, we used a Likert-scale question from 1 (absolutely no) to 5 (absolutely yes) to evaluate the Perceived Effectiveness and Fun (PEF) in using HeyTAP, and we asked some open questions about the perceived advantages and disadvantages of the HeyTAP approach.
5 RESULTS AND DISCUSSION
Results are organized across our 3 research questions. First, we report on how participants interacted with HeyTAP (RQ1), i.e., which level of abstraction they adopted in their personalization intentions and which preferences they expressed. Then, we investigate the ability of HeyTAP in addressing users’ needs (RQ2), and we analyze the participants’ satisfaction in using our conversational agent (RQ3).
5.1 Interacting with HeyTAP
5.1.1 Intentions. The heat-map of Figure 3 provides an overview of the level of abstraction adopted by the participants to express their personalization intentions (RQ1).

Participants rarely included technologies, e.g., “Philips Hue” or “Gmail,” to express action (4.44%) and trigger (2.22%) intentions, thus confirming the limitations of platforms like IFTTT and Zapier [8, 20]. In line with previous works [20], in particular, trigger intentions were generally expressed in a more abstract way than action intentions. Indeed, while 86.67% of action intentions specified an entity such as a door or a window, only 17.78% of trigger intentions referred to a type of device or online service. On the contrary, trigger intentions were more likely to refer to a generic category, e.g., “temperature” or “communication,” with respect to action intentions (20.00% vs. 6.67%, respectively), while action intentions included a specific functionality, e.g., “turn on” or “send,” more often than trigger intentions (80.00% vs. 51.11%, respectively). Not surprisingly, a similar number of action and trigger intentions included a where (35.56% vs. 31.11%, respectively), while the when element was more common for trigger intentions than action intentions (35.56% vs. 2.22%, respectively).
5.1.2 Preferences. Table 3 reports the distribution of the preferences expressed by the participants during the study (RQ1).
<table>
<thead>
<tr>
<th>Preference</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sustainability</td>
<td>37.93%</td>
</tr>
<tr>
<td>Convenience</td>
<td>24.14%</td>
</tr>
<tr>
<td>Don’t mind</td>
<td>20.69%</td>
</tr>
<tr>
<td>Security</td>
<td>10.34%</td>
</tr>
<tr>
<td>Privacy</td>
<td>6.90%</td>
</tr>
</tbody>
</table>
In the majority of cases, participants expressed their preference towards sustainability (37.93%) and convenience (24.14%). In 20.69% of cases, instead, participants did not declare any particular preference, while security and privacy were mentioned in a limited number of cases (10.34% and 6.90%, respectively).
5.2 Mapping Users’ Needs
To investigate whether HeyTAP is able to map abstract users’ needs to executable IF-THEN rules (RQ2), we analyzed the number of messages exchanged between participants and the tool, the number of expressed needs, and the number of needs resulting in some IF-THEN recommendations (Table 4). Each participant took advantage of her 15-minute session with HeyTAP to express 4.00 needs (SD = 0.93) on average, exchanging 28.12 messages (SD = 14.60). In 3.63 cases (SD = 0.74), HeyTAP was able to address the participant’s need by providing some IF-THEN recommendations.
Table 2: The measures we collected during our user study. Through different logs, we recorded the interaction (INT) between participants and HeyTAP, and the effectiveness of the underlying recommender algorithm (OSA). Likert-scale questions and a final debriefing session were instead used to measure Subjective System Aspects (SSA) and the User Experience (EXP) with HeyTAP.
<table>
<thead>
<tr>
<th>Measure</th>
<th>Description</th>
<th>Collection Type</th>
<th>Aspect</th>
</tr>
</thead>
<tbody>
<tr>
<td>Level of abstraction</td>
<td>How the user expressed her personalization intentions</td>
<td>Logs</td>
<td>INT</td>
</tr>
<tr>
<td>Messages #</td>
<td>Number of messages between HeyTAP and the user</td>
<td>Logs</td>
<td>INT</td>
</tr>
<tr>
<td>Needs #</td>
<td>Number of needs, i.e., intention and preferences, expressed by the user</td>
<td>Logs</td>
<td>INT</td>
</tr>
<tr>
<td>Addressed #</td>
<td>Number of needs addressed by HeyTAP, i.e., resulting in some recommendations</td>
<td>Logs</td>
<td>OSA</td>
</tr>
<tr>
<td>PRQ</td>
<td>The Perceived Recommendation Quality of the proposed suggestions</td>
<td>Likert-scale question</td>
<td>SSA</td>
</tr>
<tr>
<td>PEF</td>
<td>The Perceived Effectiveness and Fun in using HeyTAP</td>
<td>Likert-scale question</td>
<td>EXP</td>
</tr>
<tr>
<td>Advantages & Disadvantages</td>
<td>The perceived advantages and disadvantages of HeyTAP</td>
<td>Open questions</td>
<td>EXP</td>
</tr>
</tbody>
</table>
Table 4: Average results on how HeyTAP mapped users’ needs to executable IF-THEN rules.
<table>
<thead>
<tr>
<th></th>
<th>M</th>
<th>SD</th>
</tr>
</thead>
<tbody>
<tr>
<td>Messages #</td>
<td>28.12</td>
<td>14.60</td>
</tr>
<tr>
<td>Needs #</td>
<td>4.00</td>
<td>0.93</td>
</tr>
<tr>
<td>Addressed #</td>
<td>3.63</td>
<td>0.74</td>
</tr>
</tbody>
</table>
Overall, the total number of exchanged messages between participants and HeyTAP was 225, corresponding to 32 distinct needs and 7.03 messages on average per need ($SD = 4.61$). Of the 32 needs, HeyTAP successfully addressed 29 of them (90.63%). In only 3 cases, participants were not able to get any recommendations by interacting with the tool. To understand why HeyTAP was not able to address these 3 needs, we analyzed the collected logs. In one case, the tool was not able to map the participant’s messages to the modeled elements, e.g., functionality and categories. The other 2 cases, instead, highlight an important interaction that is currently missing in HeyTAP. At the beginning of their usage sessions, in particular, two participants expressed their discomfort at not knowing what their connected entities could do, and used HeyTAP to get some recommendations:
"Hi! Suggest me some actions!" (P8)
"Which services can I use?" (P6)
As suggested by the analysis of the debriefing session (Section 5.3), knowing in advance what can be done could facilitate users in expressing their personalization intentions.
5.3 Participants’ Satisfaction
To explore the participants’ satisfaction in using HeyTAP (RQ3), we first analyzed the Perceived Recommendation Quality (PRQ) and the Perceived Effectiveness and Fun (PEF) metrics collected through the related 5-point Likert-scale questions. As reported in Table 5, participants were satisfied with the IF-THEN rules recommended by HeyTAP ($M = 3.93$, $SD = 1.20$). Furthermore, participants enjoyed using HeyTAP and perceived it as effective ($M = 3.75$, $SD = 0.83$).
Table 5: Average results on how users evaluated the Perceived Recommendation Quality (PRQ) and the Perceived Effectiveness and Fun (PEF) of HeyTAP.
<table>
<thead>
<tr>
<th></th>
<th>M</th>
<th>SD</th>
</tr>
</thead>
<tbody>
<tr>
<td>PRQ</td>
<td>3.93</td>
<td>1.20</td>
</tr>
<tr>
<td>PEF</td>
<td>3.75</td>
<td>0.83</td>
</tr>
</tbody>
</table>
We also analyzed what participants stated during the debriefing session about the perceived advantages and disadvantages of HeyTAP. All the participants talked about HeyTAP as a useful tool to automate users’ routines. According to them, in particular, HeyTAP is convenient because “it simplifies the processes needed to define automation rules” (P7), and “it allows the discovery of new rules from textual inputs” (P7), especially “for non-expert users” (P8 and P5). Furthermore, the usage of HeyTAP could help users save time and avoid possible errors, according to P4.
The majority of the perceived disadvantages were instead related to the ability of HeyTAP in recognizing users’ sentences. P1 stated that sometimes HeyTAP did not immediately understand her messages, thus forcing her to rephrase some of her requests, while P5 and P8 highlighted that the interaction with the agent was difficult when their requests were very specific. Furthermore, consistently with the results reported in Section 5.2, P6 and P8 highlighted that HeyTAP was able to describe neither which devices were available nor their capabilities. Without this information, they experienced difficulties in evaluating how HeyTAP was addressing their needs. In one case, in particular, P8 said:
"I was expecting some rules involving the alarm clock of my smartphone, but I do not know if this is supported." (P8)
5.4 Limitations
The main limitation of the study is that it was exploratory in nature. In addition, this study targeted only a limited number of users, all with a computer science background. A more ecologically-valid study would be to deploy HeyTAP in-the-wild, by testing it with different types of users. Nevertheless, our results clearly highlight the potential of the approach, and could inform follow-up studies and future development.
6 CONCLUSIONS AND FUTURE WORKS
On the one hand, contemporary trigger-action programming platforms exploit representation models that are highly technology-dependent, thus making end-user personalization of connected entities a complex task. On the other hand, the usage of a higher level of abstraction requires an effective way of selecting the real entities, triggers, and actions with which to satisfy the abstract needs of the user. In this paper, we presented HeyTAP, a conversational and semantic-powered platform able to map abstract users’ needs to executable IF-THEN rules. By exploiting a multimodal interface, users first interact with a conversational agent to communicate their personalization intentions, e.g., to program the temperature of a room, and preferences, e.g., to preserve their privacy. Users’ inputs, along with contextual and semantic information related to the available connected entities, are then used to extract a set of recommended IF-THEN rules, which are finally visualized to the user.
Results of an exploratory study on 8 end users preliminarily confirm the effectiveness of the approach, and show that HeyTAP can successfully “translate” abstract users’ needs into IF-THEN rules that can be instantiated and executed by contemporary trigger-action programming platforms. In future works, we will extend HeyTAP with the suggestions we extracted from the participants of the exploratory study, e.g., by adding the possibility of asking the conversational agent which connected entities can be personalized and which capabilities they offer. Furthermore, we are also investigating how to include in HeyTAP more complex rules, e.g., by supporting multiple actions, and we are planning a more ecologically-valid study that involves the in-the-wild deployment of the tool.
REFERENCES
Towards Conceptual Foundations for Service-oriented Requirements Engineering:
Bridging Requirements and Services Ontologies*
Bertrand Verlaine, Yves Dubois, Ivan J. Jureta, Stéphane Faulkner
PReCISE Research Center
University of Namur
Rempart de la Vierge, 8
B-5000 Namur, Belgium
{bverlain, ydubois, ijureta, sfaulkne}@fundp.ac.be
February 21, 2011
Abstract
The engineering of a service-oriented system requires the specification of functions that Web Services (WS) should provide, before WS are built or selected. Written in a service description language, the service specification instantiates concepts different than those used for Requirements Engineering (RE): the former speaks in terms of operations, metrics and bindings, while the latter manipulates goals, evaluations and domain assumptions. It is, however, clear that functions expected of WS to select or build will be relevant to the stakeholders if they satisfy the stakeholders’ requirements. As a result, there is a gap between the two specifications which must be bridged in order to ensure that the WS system is adequate w.r.t. requirements. This paper proposes mappings between the concepts of a requirements ontology and those of the service taxonomy induced by the WSDL and the WSLA languages. A working prototype is presented that implements the mappings and is used to translate the instances of RE concepts into instances of WSDL and WSLA concepts. The mappings and the prototype facilitate the engineering of WS systems, as fragments of WS descriptions can be generated from requirements as a first specification of a service request.
*An initial version of this work was presented at the First International Workshop on the Web and Requirements Engineering (WERE 2010, Sydney, Australia) [1].
**Keywords:** Requirements Engineering for Service-oriented Systems, Ontology Mapping
1 Introduction
Engineering and managing the operation of increasingly complex information systems (IS) is a key challenge in computing (e.g., [2, 3]). It is now widely acknowledged that degrees of automation needed in response cannot be achieved without distributed, interoperable, and modular systems. Among the various, often overlapping approaches to building such systems, service-orientation stands out in terms of its reliance on the World Wide Web infrastructure, availability of standards for describing and enabling interaction between services, attention to interoperability and uptake in industry.
A service, the central concept in Service-Oriented Computing (SOC), is a self-describing and self-contained modular application designed to execute a well-delimited task, and that can be described, published, located, and invoked over a network [4, 3]. A Web Service (WS) is a service that relies on standards such as SOAP [5], WSDL [6] or UDDI [7] to enable its use, and that can be invoked over the World Wide Web. A WS is thus the technical implementation of the service concept. WSs are offered by service providers that ensure service implementations, advertise service descriptions, and provide related technical and business support. Service consumers have to find the appropriate WS among the WSs available to satisfy their requirements.
The engineering of service-oriented systems involves many issues treated in the literature – among them, infrastructure for services (e.g., [5, 7, 8]), descriptions of services’ interfaces, capabilities, behaviours, and qualities (e.g., [6, 9, 10, 11]), service discovery (e.g., [12]), service composition (e.g., [13, 14, 15, 16]), and ontologies and ontology languages (e.g., [11, 17, 18, 19, 20, 21]). A considerable part of the research focuses on service provision problems, i.e., “the current SOA [service-oriented architecture] is producer centric” [22]. In contrast, this paper focuses on the service consumer side.
**Problem statement.** A service-oriented system will be satisfactory only if it satisfies the requirements of the system’s stakeholders. The Requirements Engineering (RE) for such systems is a promising area of inquiry that already attacked some of the key issues. RE is usually defined as the process by which the stakeholders of a system-to-be are identified, their requirements elicited in order to model the specifications of the system-to-be, which should then be implemented [23, 24, 25]. One pressing concern, which has received less attention and is the focus of this paper, is: *How to bridge the gap between a specification of requirements and WS descriptions?* A description of a WS specifies the functions that the WS can provide. It is based on such a specification that WSs are developed, or sought among available ones. Specialized languages have been designed for the description of WSs using concepts of, e.g., operation and binding, tailored to the WS description. On the other hand, requirements that these services ought to satisfy are classified according to ontologies tailored to RE, which rely on concepts such as goal, task, and domain assumption. While clearly the functions expected of WSs will be relevant to the system if and only if they satisfy the stakeholders’ requirements, the differences in the conceptualizations that underlie WS descriptions and RE specifications make it unclear how exactly to translate the content of specific requirements into WS descriptions, hence the gap.
**Contributions.** This paper is a first step towards addressing the gap between RE specifications and WS descriptions by mapping the concepts of the Core Ontology for REquirements (CORE) [26] to the concepts of the Web Service Description Language (WSDL) [6], proposed by the World Wide Web Consortium (W3C), and of IBM's Web Service Level Agreement (WSLA) formalism [27]. Two
contributions are made. Firstly, the mappings between the two representations of requirements are presented both informally and in the Distributed Description Logic formalism, and the rationale for the mappings is discussed. Once the mappings are available and a specification of requirements is given, it is possible to facilitate the writing of WS descriptions in WSLA/WSDL by translating the specification of the requirements captured by propositions into fragments of WSLA/WSDL descriptions. The second contribution is the working prototype tool that implements the mappings, allowing thereby the translation of the instances of RE concepts into instances of WSLA/WSDL concepts. The mappings and the prototype facilitate the engineering of WS systems, as fragments of service descriptions can be generated from requirements.
**Organization.** The remaining parts of this paper are structured as follows. First, we discuss our technological choices and briefly present the selected ontologies and technologies on which our mappings are built (§2). Then, the formalization of the two conceptualizations is presented (§3), followed by the mapping between them (§4). This mapping allows us to build a tool which should help requirements engineers to specify the service consumers' requirements and translate them into initial WS descriptions (§5). Finally, we briefly relate comparable research efforts (§6) before drawing conclusions and summarizing relevant directions for future work (§7).
2 Baseline
To bridge the gap between the requirements expressed by the service consumer and the specifications of service requests, we use a requirements ontology (§2.2) and we build a service taxonomy (§2.3). Below, we discuss our choices of ontologies (§2.1), namely CORE as the RE ontology, while we work with the WSDL and WSLA languages at the service level.
2.1 Choices of Ontologies
An ontology is a set of concepts and relations, where a concept defines the properties that every member of its class should have, and a relation defines joint properties of a set of members, each of which participates in the same or a different class. An ontology is thus an explicit specification of a particular conceptualization shared by a community [28]. We ought to distinguish top-level ontologies, which “describe very general concepts [regardless of] a particular problem or domain” [29]. They are shared by large communities of users. At the second level, there are the domain ontologies and the task ontologies, respectively used for the vocabulary description of a generic domain and the description of a generic task/activity. A domain ontology or a task ontology specializes the terms of the top-level ontologies in, respectively, a domain-centric or a task-centric way. At the lower level, there are application ontologies, which describe “concepts depending on both a particular domain and task” [29]. Thus, these low-level ontologies specialize both a domain ontology and a task ontology.
On the RE side, our choice is CORE (cf. §2.1.1). This ontology specifies the domain of requirements and the relations among them that stakeholders may express concerning a system-to-be.
On the service side, we decide to build our own service taxonomy (cf. §2.1.2). A taxonomy is a structured description of objects into classes related to a specific domain. In the scope of our work, two relevant differences between an ontology and a taxonomy must be underlined. First, an ontology must be founded upon some kind of formalism. In contrast, this is not required to build a taxonomy, in which definitions in natural language can be used. Secondly, while arbitrary relations between the concepts can be specified in an ontology, only two relations can be used in taxonomies: the subsumption relation (is-a) – the class A has the relation is-a with the class B means that A is a subclass of B – and the membership relation (is-of) – the object c has the relation is-of with the class C means that c can be classified in C.
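These two relations translate directly into familiar programming notions, as the following minimal Java sketch shows (the class names are ours, purely illustrative):

```java
// is-a (subsumption): every GeocodingService is a WebService.
class WebService {}
class GeocodingService extends WebService {}

public class TaxonomyRelations {
    public static void main(String[] args) {
        // is-of (membership): the object g can be classified in the class
        // GeocodingService and, by subsumption, also in WebService.
        GeocodingService g = new GeocodingService();
        System.out.println(g instanceof WebService); // prints: true
    }
}
```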
2.1.1 A conceptualization for requirements
The concept of requirement as well as some of its subconcepts, i.a., the notions of goal, softgoal or assumption, have been discussed at length in the research on RE (e.g., [23, 25, 30, 31, 32, 33, 34]). CORE offers a simple set of essential concepts for RE, by covering the main notions that were previously identified and used, and by defining them within a single ontology.
2.1.2 A conceptualization for service
There are two significant views on the service notion in SOC: a syntactical view and a semantic view. This distinction comes mainly as a response to the service interoperability problem [35], which is one of the most significant issues in SOC. The first view is mainly supported by WS technologies and Web technologies such as WSDL [6], Universal Description Discovery and Integration (UDDI) [7], Hypertext Transfer Protocol (HTTP) [36], SOAP [5], WSLA [37] or Web Service Agreement (WS-Agreement) [38]. Most of the WS technologies are based on the Extensible Markup Language (XML) [39], which structures the information and describes it to allow an informal interpretation. The second view on a service conceptualization is based on technologies using logic languages and domain/task ontologies to describe the service capabilities, e.g., the Web Service Modeling Language (WSML) [40], QoSOnt [41], OWL-S (previously named DAML-S) [42], WSDL-S [43] or SAWSDL [44]. Their common objective is to make the informational content amenable to processing by a computer.
In this work, we choose the syntactic view on the service-oriented paradigm\(^1\). Therefore, all conceptualizations built within the semantic view are excluded. Seeing that all service ontologies or taxonomies, e.g., WSMO, OWL-S and the Semantic Web Services Ontology (SWSO) [47], fit into the semantic view on the service-oriented paradigm, we build our own WS taxonomy. This taxonomy has to be wide enough to cover both the functional and the non-functional characteristics of WSs. Given that no syntactic technology satisfactorily covers all those characteristics, we need at least two technologies: one for the functional features and one for the non-functional features of WSs. There is one attempt – the Web Service Offerings Language (WSOL) [48] – to encompass all WS characteristics. However, WSOL proves insufficient for this targeted objective: it still needs WSDL to work, and it only allows some of the non-functional characteristics to be specified, compared with, e.g., WS-Agreement or WSLA.

\(^1\)We have discussed elsewhere [45, 46] the mapping based on a semantic view on the service-oriented paradigm.
With regard to the functional characteristics, an Interface Definition Language (IDL) is needed [49]. An IDL gives a framework to specify a machine-readable interface for computational components, such as WSs, independently of the coding languages and underlying technologies used. The WSDL language, which has the status of a recommendation by the World Wide Web Consortium (W3C), is an appropriate IDL. The WS community uses and/or recommends this language for the engineering of Service-Oriented Architectures (SOA) [35, 3, 50, 51, 52, 53]. This technology is also applied in the computing industry (e.g., [54, 55, 56, 57, 58]).
In relation to the non-functional characteristics, and thereby Quality of Service (QoS), the main technologies proposed in the literature are WSLA [37], WS-Agreement [38], SLAng [59, 60] and the Universal Service Description Language (USDL) [61]. SLAng, which can describe the two involved parties and their responsibilities during the WS use, divides Service Level Agreements (SLAs) into horizontal contracts (e.g., between two equal parties) and vertical contracts (e.g., between entities in different layers). This language focuses on WS-based Internet services such as Application Service Provision, Internet Service Provision and Storage Service Provision. Moreover, SLAng does not allow the financial terms associated with the SLA to be specified. USDL can be used to specify SLAs for services – it is thus not only focused on WSs – but must be associated with another language specific to the service-oriented paradigm; its authors chose WS-Agreement. Clearly, SLAng and USDL do not answer our needs. Concerning WS-Agreement, this technology has one drawback in comparison with WSLA: it does not allow the obligations of the parties to be described, as WSLA does. An obligation is an explicit duty that a party has to fulfil with regard to the service level objectives (SLOs) specified in the SLA document. Furthermore, WSLA is expressly built to complement WSDL, our first choice on which we base our service taxonomy.
2.2 Overview of the Core Ontology for REquirements
The root concept of the CORE ontology is Communicated information\(^2\), specialized as follows [62]:
1. Goal, specialized into Functional goal, Quality constraint and Softgoal;
2. Plan;
3. Domain assumption, specialized into Functional domain assumption, Quality domain assumption and Soft domain assumption;
4. Evaluation, specialized into Individual evaluation and Comparative evaluation.
A basic idea in CORE is that requirements are communicated by the stakeholders to the requirements engineer, so that the latter classifies requirements based on what was communicated and how it was communicated. The Communicated information concept is a catchall one; its instances are propositions communicated by the stakeholders. Once an instance of that concept is available, the question to ask is what mode that proposition was communicated in. The notion of mode – or modus in linguistics – reflects the idea that we can distinguish between the content of a communication and the intentional state it was communicated in, whereby different kinds of mode correspond to different intentional states of the stakeholder. If the stakeholder tells the engineer that she believes that some condition holds in the operating environment of the system-to-be, then the proposition stating the condition is an instance of the Domain assumption concept. If she instead desires that the condition be made to hold by the system-to-be, then the proposition is an instance of the Goal concept. In case an intention to perform particular actions is conveyed, which may then be delegated to the system-to-be, the engineer classifies the propositions describing these actions as instances of the Plan concept. Since stakeholders can also indicate that they prefer some goals to be satisfied rather than others, or that some of them must be satisfied while others are optional, CORE includes the concept of Evaluation. Propositions belonging to this concept convey evaluations arising out of emotions of the stakeholders.

\(^2\)A CORE concept is written with an initial uppercase letter, e.g., Concept, while an instance thereof is written in lowercase, e.g., instance.
CORE distinguishes three kinds of goals. The Functional goal concept refers to a desired condition whose satisfaction is verifiable, i.e., the comparison scale is shared among the stakeholders and the requirements engineer(s), and binary, i.e., the functional goal is either satisfied or not. A quality constraint defines the desired value of a non-binary measurable property of the system-to-be (e.g., how many seconds it takes to answer a query). As functional goals and quality constraints are not necessarily known at the very start of the RE process, the Softgoal concept is instantiated to capture requirements which refer to vague properties of the system-to-be (e.g., a “fast” answer to the queries). The same specialization applies to the Domain assumption concept, which has its functional variant – a functional domain assumption refers to binary properties of the system-to-be and/or its environment –, its quality variant, Quality domain assumption, and its soft variant, Soft domain assumption. Finally, Evaluation can qualify individual requirements through the Individual evaluation concept, or compare goals, domain assumptions, and/or plans through the Comparative evaluation concept.
2.3 Overview of the Web Service taxonomy
IBM's WSLA technology [27] is intended for specifying contracts, called SLAs, that state constraints on QoS properties of WSs. While WSLA focuses on the QoS levels of WSs, WSDL [6], the second formalism chosen, allows the functional characteristics of WSs to be specified.

Note that WSDL allows some possible failures of use to be managed through the specification of fault conditions and repair actions, which certainly is relevant given that WS-oriented systems are often distributed and that Web servers may break down. We leave this aspect of WSDL for future work (cf. §7.1).
The WSLA concepts are Party\(^4\), Service definition, Metric and Obligations. The WSDL concepts are Operation, Binding and Service. We retain the following four of these seven concepts:
1. **Metric** identifies an observable QoS property of a WS, and indicates its measurement directive(s), i.e., it specifies how that QoS property can be accessed and/or computed [37, 27].
2. The **Obligations** concept defines the guaranteed QoS level of the WS identified in the service definition as well as constraints imposed on the metrics and triggered actions [27, 37]. The two subconcepts of the Obligations are:
(a) **Service level objective**, which defines the different QoS levels regarding the observable characteristics – described in a metric – of the WS, and
(b) **Action guarantee** which groups promises of the signatory parties and/or of third parties concerning the achievement of an action when a determined precondition occurs\(^5\).
3. **Operation** defines the interaction between the service provider and the other parties involved in the interaction, as a sequence of input and output messages [63, 6].
4. **Binding** specifies concrete message format and transmission protocol details concerning the WS use [63].
**Party**, **Service definition**, and **Service** are not retained as concepts of our WSLA/WSDL taxonomy for the following reasons:
- Instances of **Party** identify the WS provider, the WS consumer and possible third parties, which may be stakeholders expressing requirements w.r.t. the service they would like to use. As the definition of the requirements problem abstracts from these identifiers, we do not carry at the service level the information on which stakeholder gave which requirement.
\(^4\)A WSLA or a WSDL concept is written as Concept and an instance of one of those concepts is depicted as instance.
\(^5\)Note the precondition can simply be always.
- A **Service definition** instance is not directly evaluated by the WS consumer. Its purpose is to link a WSLA specification of a WS to a document which describes the functional characteristics of that WS. As we use WSDL, the WS consumer – i.e., the stakeholder – can directly evaluate the functional characteristics through the WSDL document.
- **Service** is not relevant in the present discussion, as the actual Web location of the WS is unimportant. Only its presence or absence is crucial. The possible unresponsiveness of the WS can be evaluated through other selected concepts, e.g., an obligations instance.
3 Formalization of CORE and WSLA/WSDL
In order to formalize the bridging of CORE with the WSLA/WSDL taxonomy, we use the description logic $\mathcal{SIN}$ [64] to rewrite each conceptualization. This rewriting allows us to connect WSDL to WSLA (to get what we refer to as the WSLA/WSDL taxonomy), and then CORE to WSLA/WSDL (see §4.3).
3.1 The CORE ontology in description logic
Table 1 is based on the definitions and axioms of the CORE ontology given in §2.2. Line 1 defines the root concept of CORE. Requirements expressed during the RE process are classified into the four main classes of CORE, i.e., Goal, Plan, Domain assumption and Evaluation, and finally into the leaves of CORE, i.e., Quality constraint, Soft domain assumption, Comparative evaluation, and so on (see Lines 6, 11 and 14). Detailed informal definitions of the CORE concepts are not repeated here. Unchanged softgoals and soft domain assumptions cannot be propagated to the level of service descriptions: given their imprecise nature, they need to be replaced by more precise requirements. Just as, say, imprecise goals are refined, so are softgoals and soft domain assumptions approximated [26, 62], whereby their approximation involves the identification of quality constraints and quality domain assumptions, while comparative evaluations may indicate how alternative quality constraints or quality domain assumptions may be rated in terms of relative desirability. Lines 10 and 13 reflect this in the formalized ontology.
Table 1: The CORE ontology written in description logic $\mathcal{SIN}$

1: Communicated information ≡ Goal ⊔ Plan ⊔ Domain assumption ⊔ Evaluation
2: ⊥ ⊑ Goal ⊔ Plan ⊔ Domain assumption ⊔ Evaluation
3: refine ≡ refined-by⁻
4: refined-by ≡ refine⁻
5: ⊤ ⊑ ∀ refine.Communicated information
6: ∀ refine.Goal ≡ Functional goal ⊔ Quality constraint ⊔ Softgoal
7: ⊥ ⊑ Functional goal ⊔ Quality constraint ⊔ Softgoal
8: approximate ≡ approximated-by⁻
9: approximated-by ≡ approximate⁻
10: Softgoal ⊑ ∃ approximate.Quality constraint
11: ∀ refine.Domain assumption ≡ Functional domain assumption ⊔ Quality domain assumption ⊔ Soft domain assumption
12: ⊥ ⊑ Functional domain assumption ⊔ Quality domain assumption ⊔ Soft domain assumption
13: Soft domain assumption ⊑ ∃ approximate.Quality domain assumption
14: ∀ refine.Evaluation ≡ Comparative evaluation ⊔ Individual evaluation
15: ⊥ ⊑ Comparative evaluation ⊔ Individual evaluation
3.2 The WSLA/WSDL taxonomy in description logic
Table 2 is based on publications about the WSLA formalism [27, 37] and on the W3C recommendations concerning WSDL 2.0 [6, 63, 65]. In Tables 2 and 4, the prefixes “WSLA:” and “WSDL:” indicate that the concept belongs to WSLA or to WSDL, respectively. In Table 4, the prefix “CORE:” indicates that the concept belongs to the CORE ontology. Line 17 (WSLA) states the use of the WSLA specification as a proposal or an agreement. The latter is the primary purpose of WSLA. A proposal can be suggested either by a WS consumer or a WS provider. Requirements concerning non-functional WS properties are specified via WSLA.
“Commitment”, used in Lines 18, 21 and 41, refers to a promise to achieve (conditionally or not) a predetermined task. “SLA Parameter” denotes an observable characteristic used to evaluate the QoS of the WS, together with its measurement process (Lines 34 and 37). Line 36 uses Distributed Description Logic (DDL) [66] in order to bridge WSLA with WSDL: in this context, the sign $\sqsubseteq^\rightarrow$ means that the “WSLA:Operation” concept is subsumed by the “WSDL:Operation” concept.
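For readers unfamiliar with DDL, a standard reading of such a bridge rule, following the semantics of [66] (our notation: $r_{12}$ is the usual DDL domain relation connecting the two interpretation domains $\mathcal{I}_1$ and $\mathcal{I}_2$), is:

$$1\!:\!C \;\sqsubseteq^{\rightarrow}\; 2\!:\!D \quad\text{holds iff}\quad r_{12}\!\left(C^{\mathcal{I}_1}\right) \subseteq D^{\mathcal{I}_2},$$

i.e., every element classified under $C$ in the first conceptualization is related only to elements classified under $D$ in the second.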
Line 44 has the same purpose as Line 17, but for the WSDL-oriented part of the formalized taxonomy. Line 54 covers the Operation concept: by ordering the messages exchanged between the WS provider and the WS consumer, it organizes the data flow. Through this data exchange flow, the actual service provided by the WS is structured, which makes it possible to know what functionality the service provides.
4 Mapping of CORE with WSLA/WSDL
Below, we first introduce a simple but comprehensive case study (§4.1). It will be used to illustrate the mappings developed later (§4.2–§4.3) to relate the requirements, expressed as natural language statements, to the corresponding instances of the service taxonomy concepts specified in the WSDL and WSLA formalisms.
4.1 A scenario: the trucking company
An entrepreneur owns an express transport company and would like to optimize the routes taken by his trucks. Orders and clients data are centralized in his existing IS, where the routes of each truck are calculated depending on urgent/deleted orders, truck breakdowns, delays, traffic jams, and so on. He has equipped all his trucks with a navigation system based on both the GPS and the UMTS technologies. The GPS device allows the truck to be located and helps the driver find the appropriate route, while the UMTS technology allows his IS to exchange data with the system embedded in the trucks, which includes the GPS device. The company owner would like the IS to send the needed data in real time to the trucks when the previous job is ending. To avoid wasting time, the device can directly find the way with the coordinates (longitude and latitude) of the client. However, his current IS only stores the postal addresses of the delivery locations given by the clients when they order a transport of goods. For this reason, the software engineer in charge of this improvement would like to use a service available on the Web, i.e., a WS.
Table 2: The WSLA/WSDL taxonomy written in description logic $\mathcal{SIN}$

Taxonomy for WSLA:

16: WSLA document ≡ Party ⊓ Service definition ⊓ Metric ⊓ Obligations
17: WSLA document ≡ WSLA Proposal ⊔ WSLA Agreement
18: WSLA Proposal ≡ proposed-by.(QoS Level ⊓ Commitment)
19: propose ≡ proposed-by⁻
20: proposed-by ≡ propose⁻
21: WSLA Agreement ≡ QoS Level ⊓ Commitment ⊓ ∀ agreed-by.WS Consumer ⊓ ∀ agreed-by.WS Provider
22: … ≡ WSLA Proposal
23: … ≡ ∀ proposed-by.Signatory party ⊔ ∀ agreed-by.Signatory party
24: agree ≡ agreed-by⁻; agreed-by ≡ agree⁻
25: Party ≡ Signatory party ⊔ Third party
26: Party ⊑ ∀ involved-in.WS Use
27: involve ≡ involved-in⁻
28: involved-in ≡ involve⁻
29: Signatory party ≡ WS Consumer ⊔ WS Provider
30: Third party ≡ ¬Signatory party ⊓ ∀ provide.Metric
31: provide ≡ provided-by⁻
32: provided-by ≡ provide⁻
33: Service definition ≡ Service object ⊓ Operation
34: Service object ≡ SLA Parameter ⊓ Metric
35: Operation ⊑ Service object
36: WSLA:Operation $\sqsubseteq^\rightarrow$ WSDL:Operation
37: Metric ≡ ∀ measure.SLA Parameter
38: measure ≡ measured-by⁻
39: measured-by ≡ measure⁻
40: Obligations ≡ Service level objective ⊔ Action guarantee
41: Service level objective ≡ Commitment
42: Action guarantee ⊑ Promise ⊔ Action

Taxonomy for WSDL:

43: Description ≡ Message types ⊓ Interface ⊓ Binding ⊓ Service
44: Description ≡ WSDL Proposal ⊔ WSDL Agreement
45–58: definitions of WSDL Proposal, WSDL Agreement, the corresponding propose/agree role pairs and their restriction ∀ proposed-by.WS Actor ⊔ ∀ agreed-by.WS Actor, WS Actor, Interface, Operation, the order/ordered-by role pair, Binding and Service (the right-hand sides of these axioms did not survive extraction of the original table)
The main functionality of this WS is to provide the coordinates, i.e., the longitude and the latitude, when it receives a postal address.
Requirements related to this case study are refined and specified throughout the next sections (§4.2–§4.3).
4.2 Bridging the service concepts with the four main CORE classes
The first step is to classify the WSLA/WSDL concepts into one of the four main classes of CORE, i.e., into Goal, Plan, Domain assumption and/or Evaluation. Depending on how the consumer expressed the requirements, we categorize them in the relevant CORE concept. Then, we verify whether the WSLA or the WSDL specification allows the representation of what the requirement conveys; otherwise, some requirements could be lost during the mapping (cf. Requirement 9).
Table 3, based on the definitions of the CORE concepts and of the WSLA/WSDL concepts, illustrates this classification; explanations and illustrative requirements based on the case study are given afterwards.\(^6\)
Table 3: Classification of WSLA and WSDL concepts into the first four CORE concepts. The sign \(\checkmark\) means that the WSLA or WSDL concept is mapped with the corresponding CORE concept. Otherwise, the sign \(\times\) is used.
| CORE concept | WSLA: Metric | WSLA: Obligations | WSDL: Operation | WSDL: Binding |
|---|---|---|---|---|
| Goal | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) |
| Plan | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) |
| Domain assumption | \(\checkmark\) | \(\times\) | \(\times\) | \(\times\) |
| Evaluation | \(\times\) | \(\checkmark\) | \(\times\) | \(\times\) |
A goal captures conditions not yet satisfied that the service consumer desires to see become true in the future [62]. Requirements 1, 2 and 3 are examples of goals based on the developed scenario.
**Requirement 1. goal:** The owner wants the average availability of the WS to be measured.

\(^6\)Complete examples showing the mapping of requirements to one concept of the service taxonomy are given later (§4.3), where a complete mapping is developed.
**Requirement 2. goal:** The availability of the service must be high.
**Requirement 3. goal:** The service has to translate a postal address into coordinates.
Goal is mapped to the four WSLA/WSDL concepts. The consumer can express her desire about the presence or absence of a particular observable property, i.e., a metric, which can be included in the future electronic agreement (e.g., Requirement 1). The WS consumer can also express her desire (i) to set the value of a service level objective to a specific number (e.g., Requirement 2 once approximated), and/or (ii) that a party involved in the future agreement achieve a particular action specified via an action guarantee. Those two kinds of desires can be specified in a WSLA proposal as obligations. Concerning the Operation and Binding concepts, the service consumer can respectively indicate her desire about a precise pattern of exchanged messages with particular input and output (e.g., Requirement 3), and/or her desire about a particular message format and a specific transmission protocol. These two requirements can be specified inside an operation – where the important pieces of information for the WS consumer are the input message in which she sends her core data, and the output message in which she receives the relevant data for her business activity – or a binding.
A plan captures actions that the service consumer intends to perform. Requirement 4 is an example of a plan.
**Requirement 4. plan:** The IS will communicate based on the SOAP-over-HTTP middleware.
The Plan concept is also mapped to all WSLA/WSDL concepts. The WS consumer can express her intention to perform the measurements of QoS properties via a metric and then deliver the results to other parties. The WS consumer can aim at performing an action guarantee, an instance of Obligations. The WS consumer can also promise to send predetermined messages, which are specified inside an operation, or to use particular message formats and/or communication protocols, which can be specified through a binding (e.g., Requirement 4).
A domain assumption indicates that its content is believed true by the service consumer, or that its content is made true by the service consumer’s speech act as illustrated by Requirement 5.
**Requirement 5. domain assumption:** The truck company owner intends to compute the average response time of the service use.
The Domain assumption concept is only mapped to Metric: a WS consumer can suggest a description of an observable parameter that she believes true regardless of the actual state of affairs. She also has the capacity to structure and organize herself the measurements of some observable parameters (e.g., Requirement 5). On the other hand, Domain assumption is not mapped to Obligations, Operation and Binding because, respectively, (i) action guarantees can only be promised or desired by a party and service level objectives result from a negotiation, so that a WS consumer is not expected to have beliefs about them, and she cannot make them true alone, (ii) it seems inappropriate to assume that a WS consumer would believe in particular messages sent by the WS provider without any information about them nor about the (future) WS provider, and she cannot make the message exchange pattern true alone, and (iii) a WS consumer dealing with the communication protocol or the message format is expected to have some basic knowledge about those kinds of technologies, and she cannot make them true alone; otherwise, she is expected not to worry about the way messages are formatted and sent.
An evaluation captures the preference, or the appraisal, of the WS consumer about a single condition (e.g., Requirements 6 and 7), or between conditions that may hold (e.g., Requirements 8 and 9).
**Requirement 6. evaluation:** A response time of 600 ms is appraised.

**Requirement 7. evaluation:** A response time of 400 ms is appraised.

**Requirement 8. evaluation:** A response time of 400 ms is preferred to a response time of 600 ms.
**Requirement 9. evaluation:** The use of the SOAP-over-HTTP middleware is preferred to the SOAP-over-JMS middleware.
During the RE process, a WS consumer can express appraisals of, or preferences between, goals, domain assumptions and plans, i.e., the conditions evaluated by the service consumers. Unfortunately, only appraisals and preferences about obligations can be specified through the WSLA/WSDL languages (e.g., Requirements 6, 7 and 8). This lack of expressiveness of the WSDL and WSLA languages compared to CORE leads to possible gaps: some evaluations could be lost during their translation to the WSLA/WSDL taxonomy. For example, Requirement 9 cannot be specified with the WSLA and/or WSDL languages, although it can be expressed by the truck owner, and more generally by any WS consumer. Given the scope of this paper, we leave the discussion of this issue for future work.
4.3 The mappings between CORE and WSLA/WSDL
In Table 4, we use DDL [66] to formalize the mapping between CORE and the WSLA/WSDL taxonomy. In the mappings, concepts are prefixed by the name of the taxonomy they belong to. The sign $\equiv^\rightarrow$ means that the mapping is complete: each instance of the CORE concept has a corresponding instance in the WSLA and/or WSDL concepts. The sign $\sqsubseteq^\rightarrow$ indicates that an evaluation can be lost, because the scope of CORE is wider than the scope of WSLA/WSDL (see §4.2). We refine the mapping by comparing the definitions of the subclasses of the four main CORE concepts with the WSLA/WSDL concepts.
Table 4: The mapping between CORE and the WSLA/WSDL taxonomy formalized with DDL

59: CORE:Functional goal $\equiv^\rightarrow$ WSLA:Metric ⊔ WSLA:Action guarantee ⊔ WSDL:Operation
60: CORE:Quality constraint $\equiv^\rightarrow$ WSLA:Service level objective ⊔ WSDL:Binding
61: CORE:Plan $\equiv^\rightarrow$ WSLA:Metric ⊔ WSLA:Action guarantee ⊔ WSDL:Operation ⊔ WSDL:Binding
62: CORE:Functional domain assumption $\equiv^\rightarrow$ WSLA:Metric
63: CORE:Individual evaluation $\sqsubseteq^\rightarrow$ WSLA:Obligations
64: CORE:Comparative evaluation $\sqsubseteq^\rightarrow$ WSLA:Obligations
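To make the one-to-many character of these mappings concrete, the following minimal Java sketch encodes Table 4 as a lookup table; the enum and class names are ours, purely illustrative, and not part of STR@WS:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative encoding of Table 4: each CORE concept is associated with
// the set of WSLA/WSDL concepts its instances can be translated into.
enum CoreConcept { FUNCTIONAL_GOAL, QUALITY_CONSTRAINT, PLAN,
                   FUNCTIONAL_DOMAIN_ASSUMPTION,
                   INDIVIDUAL_EVALUATION, COMPARATIVE_EVALUATION }

enum ServiceConcept { METRIC, ACTION_GUARANTEE, SERVICE_LEVEL_OBJECTIVE,
                      OBLIGATIONS, OPERATION, BINDING }

public class Core2WslaWsdl {
    static final Map<CoreConcept, Set<ServiceConcept>> MAPPING =
            new EnumMap<>(CoreConcept.class);

    static {
        MAPPING.put(CoreConcept.FUNCTIONAL_GOAL,               // Line 59
            EnumSet.of(ServiceConcept.METRIC, ServiceConcept.ACTION_GUARANTEE,
                       ServiceConcept.OPERATION));
        MAPPING.put(CoreConcept.QUALITY_CONSTRAINT,            // Line 60
            EnumSet.of(ServiceConcept.SERVICE_LEVEL_OBJECTIVE,
                       ServiceConcept.BINDING));
        MAPPING.put(CoreConcept.PLAN,                          // Line 61
            EnumSet.of(ServiceConcept.METRIC, ServiceConcept.ACTION_GUARANTEE,
                       ServiceConcept.OPERATION, ServiceConcept.BINDING));
        MAPPING.put(CoreConcept.FUNCTIONAL_DOMAIN_ASSUMPTION,  // Line 62
            EnumSet.of(ServiceConcept.METRIC));
        MAPPING.put(CoreConcept.INDIVIDUAL_EVALUATION,         // Line 63
            EnumSet.of(ServiceConcept.OBLIGATIONS));
        MAPPING.put(CoreConcept.COMPARATIVE_EVALUATION,        // Line 64
            EnumSet.of(ServiceConcept.OBLIGATIONS));
    }

    public static void main(String[] args) {
        // A CORE concept with several targets needs refinement (see §5.4).
        System.out.println(MAPPING.get(CoreConcept.PLAN)); // four candidates
    }
}
```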
Table 3 indicates that Goal is bridged to all WSLA/WSDL concepts. Lines 59 and 60 from Table 4 specialize it.
Line 59: Functional goal is linked to Metric, Action guarantee and Operation. A metric specifies how the measurement of a QoS property is achieved. The WS consumer can desire the presence or absence of a specific metric. This desire is not the representation of a quality, i.e., its evaluation is binary. Likewise, an action guarantee or an operation is the representation of a process to perform, not the representation of a quality. Requirements 10 (refined from Requirement 1) and 11 (refined from Requirement 3) are functional goals. They respectively correspond to a metric (see Specification 1) and an operation (see Specification 2).
**Requirement 10. functional goal:** The owner of the truck company wants that a third company, EvalCompany, measures the average availability rate of the service.
**Specification 1.**
```
<ServiceDefinition>
  <Operation>
    <SLAParameter name="AvgAvailability" type="float" unit="percent">
      <Metric>AvgAvailabilityMetric</Metric>
    </SLAParameter>
    <Metric name="AvgAvailabilityMetric" type="float" unit="percent">
      <Source>EvalCompany</Source>
      <MeasurementDirective xsi:type="Availability" resultType="float">
        <MeasurementURI>http://www.eval.com/availability</MeasurementURI>
      </MeasurementDirective>
    </Metric>
  </Operation>
</ServiceDefinition>
```
**Requirement 11. functional goal:** The service must return the geographic coordinates –longitude and latitude– when it receives a postal address.
**Specification 2.** "AddressTransmissionType" and "CoordinatesTransmissionType" are defined in Appendix B.
```
<interface name="CoordinatesTranslatorInterface">
  <operation name="CoordinatesTranslator"
             pattern="http://www.w3.org/ns/wsdl/in-out">
    <!-- The element names refer to the types defined in Appendix B. -->
    <input messageLabel="In" element="AddressTransmissionType"/>
    <output messageLabel="Out" element="CoordinatesTransmissionType"/>
  </operation>
</interface>
```
Line 60: Quality constraint is linked to Service level objective and Binding. Seeing that the observable parameters are described in a metric, the Service level objective's quality space is common to the parties. The descriptions of the communication protocol and of the message format are two qualities of, respectively, the communication process and the structure of the data exchanged. Their respective quality spaces are shared among the parties, who can easily notice the use of one or another protocol/data structure. Requirement 12 refines Requirement 2. It corresponds to a service level objective, which is specified in Specification 3. Note that Requirement 2 is actually a softgoal; it is thus approximated by Requirement 12, in which the measurement scale is shared among the involved parties.
**Requirement 12. quality constraint:** The average availability rate of the service should be at least 97%.
**Specification 3.**
```xml
<Obligations>
<ServiceLevelObjective name="Availability">
<Obliged>Provider</Obliged>
<Validity>... </Validity>
<Expression>
<Predicate xsi:type="Greater">
<SLAParameter> AvgAvailability </SLAParameter>
<Value> 0.97 </Value>
</Predicate>
</Expression>
</ServiceLevelObjective>
</Obligations>
```
Line 61 does not add any information compared with Table 3 because Plan has no subclasses in the CORE ontology. Requirement 13 refines Requirement 4:
its specification captured inside a binding is proposed in Specification 4.
**Requirement 13. plan:** The IS will communicate based on the SOAP-over-HTTP middleware.
**Specification 4.**
```
<binding name="SOAPBinding" interface="tns:InterfaceName"
type="http://www.w3.org/ns/wsdl/soap"
wsoap:protocol="http://www.w3.org/2003/05/soap/bindings/HTTP/">
<operation ref="tns:CoordinatesTranslator"
wsoap:mep="http://www.w3.org/2003/05/soap/mep/soap-response"/>
</binding>
```
Line 62: For the same reason as in the refinement of the Goal concept – i.e., a metric is not the representation of a quality –, Functional domain assumption is mapped to Metric. Requirement 5 is refined by Requirement 14; the latter is specified in Specification 5.
**Requirement 14. functional domain assumption:** The truck company owner intends to compute himself – thanks to his own IS – the average response time based on the last 50 service uses.
**Specification 5.**
```
<ServiceDefinition>
  <Operation>
    <Metric name="AverageResponseTime" type="float" unit="milliseconds">
      <Source>Customer</Source>
      <Function xsi:type="Divide" resultType="float">
        <Operand>
          <Metric>SumResponseTime</Metric>
        </Operand>
        <Operand>
          <Metric>Transactions</Metric>
        </Operand>
      </Function>
    </Metric>
    <Metric name="Transactions" type="Q" unit="transactions">
      <Source>Customer</Source>
      <Function xsi:type="QConstructor" resultType="Q">
        <Metric>SumTransactions</Metric>
        <Window>50</Window>
      </Function>
    </Metric>
  </Operation>
</ServiceDefinition>
```
Note that there is no mapping link between the Quality domain assumption concept and a WSLA/WSDL concept. Since “[...] domain assumptions concern what is true [in the future IS and its environment]” [26], we expected to have only a few mapping links for this class. Our application domain – the IS use process and its environment – is specific because many characteristics are negotiable between the involved parties. The few non-negotiable elements mainly concern the unreliable network infrastructure used to exchange the data.
Lines 63 and 64 (of Table 4) refine the mapping between an evaluation and an obligations. The use of a measurement scale based on money allows the WS consumer to express his emotions and feelings, captured by evaluations. An action guarantee can be tied to the respect of one or more determined service level objective(s). Through those action guarantees, service level objectives can be linked to financial penalties and rewards [37]. A positive compensation reflects his favour toward a service level objective; a negative one reflects his disfavour. If the rewards (penalties) of two service level objectives are different, then the WS consumer expresses a preference for the more expensive one: if he agrees to pay more for a specific level of a QoS characteristic, that means he prefers this characteristic in comparison with other (cheaper) ones. Then, the WS discovery tool has to find the accurate service which respects this SLO for the price set by the consumer.
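As a minimal numeric illustration of this pricing-as-preference idea (the reward values below are our own example figures, in the spirit of the specifications that follow):

```java
// Two candidate service level objectives, each with the per-use reward the
// consumer attaches to it; the higher reward signals the preferred SLO.
public class SloPreference {
    record Slo(String name, double rewardPerUse) {}

    public static void main(String[] args) {
        Slo rp400 = new Slo("ResponseTime < 400 ms", 0.002);
        Slo rp600 = new Slo("ResponseTime < 600 ms", 0.001);
        Slo preferred = rp400.rewardPerUse() > rp600.rewardPerUse() ? rp400 : rp600;
        System.out.println("Preferred SLO: " + preferred.name());
    }
}
```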
Requirements 6 and 7 are refined as individual evaluations (see Requirements 15 and 16, respectively). Requirement 8 is refined as a comparative evaluation (see Requirement 17). Requirements 16 and 17 are specified in Specifications 6 and 7, respectively.
**Requirement 15. individual evaluation:** A response time of 600 ms is the maximum accepted.

**Requirement 16. individual evaluation:** A response time of 400 ms is evaluated to 0.02 monetary unit per use.
**Specification 6.** “PaymentType” is defined in Appendix A.
```
<Obligations>
  <ServiceLevelObjective name="RP400ms">
    <Obliged>Provider</Obliged>
    <Expression>
      <Predicate xsi:type="wsla:Less">
        <SLAParameter>ResponseTime</SLAParameter>
        <Value>400</Value>
      </Predicate>
    </Expression>
    <EvaluationEvent>NewValue</EvaluationEvent>
  </ServiceLevelObjective>
  <ActionGuarantee name="RewardRP400ms">
    <Obliged>Customer</Obliged>
    <Not>
      <Expression>
        <Predicate xsi:type="wsla:Violation">
          <ServiceLevelObjective>RP400ms</ServiceLevelObjective>
        </Predicate>
      </Expression>
    </Not>
    <QualifiedAction>
      <Party>Customer</Party>
      <Action actionName="RewardPayment" xsi:type="PaymentType">
        <Debtor>Customer</Debtor>
        <Amount>0.002</Amount>
        <CausingGuarantee>RP400ms</CausingGuarantee>
        <Currency>USD</Currency>
      </Action>
    </QualifiedAction>
    <ExecutionModality>Always</ExecutionModality>
  </ActionGuarantee>
</Obligations>
```
**Requirement 17. comparative evaluation:** A response time of 400 ms is preferred to a response time of 600 ms.
**Specification 7.**
```
<Qualification>
  <QualifiedAction>
    <Party>Customer</Party>
    <Action actionName="RewardPayment" xsi:type="PaymentType">
      <Debtor>Customer</Debtor>
      <Amount>0.002</Amount>
      <CausingGuarantee>RP400ms</CausingGuarantee>
      <Currency>USD</Currency>
    </Action>
  </QualifiedAction>
  <ExecutionModality>Always</ExecutionModality>
</Qualification>
```
5 A tool operating thanks to the proposed mappings: STR@WS
A tool, named STR@WS for Specifications Transcribed from Requirements in a WS environment (hence the @WS in the name), has been implemented. It employs the mappings developed in §4.3. In this section, we present STR@WS. First, we briefly state the technologies used to implement the tool (§5.1), followed by a description of the tool architecture (§5.2). In §5.3, we illustrate how to use STR@WS. In order to refine the one-to-many mappings, we build decision trees, which are developed in §5.4.
5.1 The technologies used
Our tool is developed with the object-oriented language Java. We also use the JAXB API\(^7\), which allows us to translate XML documents into Java objects, as well as to marshal, unmarshal and validate XML documents based on XSD or DTD documents.

\(^7\)https://jaxb.dev.java.net/
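As a small illustration of the round trip that JAXB provides (the Requirement class and file name below are hypothetical, not STR@WS's actual classes):

```java
import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical requirement record marshalled to/from XML via JAXB.
@XmlRootElement(name = "requirement")
class Requirement {
    public String category; // e.g., "functional goal"
    public String text;     // the requirement in natural language
}

public class JaxbRoundTrip {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Requirement.class);

        Requirement r = new Requirement();
        r.category = "functional goal";
        r.text = "The service has to translate a postal address into coordinates.";

        // Marshal the Java object to an XML file...
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(r, new File("requirement.xml"));

        // ...and unmarshal it back into a Java object.
        Unmarshaller u = ctx.createUnmarshaller();
        Requirement back = (Requirement) u.unmarshal(new File("requirement.xml"));
        System.out.println(back.category + ": " + back.text);
    }
}
```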
5.2 The functionalities of STR@WS
STR@WS is composed of the five following modules:
1. **RequirementsEditor** allows a WS consumer to add and remove requirements about a service he is describing.
2. **Translator** bridges the requirements expressed by the WS consumer with the WSLA/WSDL concepts based on the mapping between CORE and the WSLA/WSDL specifications.
3. **MappingRefinement** helps refine the one-to-many mappings – see Lines 59, 60 and 61 of Table 4. We build three decision trees, which are used by the requirements engineer to refine the problematic mappings (see §5.4 for the development of these decision trees). STR@WS supports this process.
4. **OpenFile** enables the user to open a specification file or a requirements file which has been saved with STR@WS. The file format chosen is XML.

5. **SaveFile** enables the user to save a specification file or a requirements file.
Figure 1: The main window of STR@WS and its menu

Fig. 1 shows the main window of STR@WS as well as the tool menu.
5.3 The use of STR@WS through our scenario
We now go back to the scenario explained in §4.1 and discussed in §4.2 and in §4.3. In this section, we illustrate how our tool uses the mappings between CORE and the WSLA/WSDL taxonomy and can help requirements engineers during the development of a service-based system.
In Fig. 2, Requirements 10 to 15 are entered in the RequirementsEditor of STR@WS. Once the nature of the requirement is selected by the user – i.e., the requirement is a functional goal, for instance –, STR@WS gives the corresponding concept of the WSLA/WSDL taxonomy. This information is displayed in green at the very right of the window. If the CORE concept has several corresponding service concepts, then the message displayed in red is “One to Many” and the Refine button is clickable. Clicking on it opens a new window, which allows the user to refine the one-to-many mappings according to the decision trees described in §5.4. This window is shown in Fig. 3; the refinement of the first functional goal is the example shown (the decision path followed is surrounded). At the end of the refinement process, the right service concept is displayed in green on the main window and that information is saved in the tool database – see the first requirement of Fig. 2, which is the only one to have been refined.
STR@WS also allows the user to enter a requirement with its category set to “Raw” if he does not yet know the right nature of this requirement.
The lower part of Fig. 2 depicts the translated file in which the requirements are mapped to their corresponding concept in the service taxonomy. The meaning of the tags used is as follows:
<metric/> for metrics, <ag/> for action guarantees, <op/> for operations, <slo/> for service level objectives, <bind/> for bindings, <oblig/> for obligations and <unkw/> for unlinked requirements\(^8\).
\(^8\)This last tag is used when a one-to-many mapping has not been refined, or if the requirements engineer uses the “Raw” category.
Figure 3: Illustration of the use of the decision trees through the refinement of a functional goal as an example
Fig. 4 shows the individual evaluation entered in the main window of STR@WS translated into a WSLA/WSDL extract.

Figure 4: Corresponding result of the mapping of the individual evaluation
5.4 The decision trees for one-to-many mappings
The mappings formalized by Lines 59, 60 and 61 (Table 4) are one-to-many relationships. For each of them, we build a decision tree in order to refine their categorization into the accurate WSLA/WSDL class. For each one-to-many mapping, some questions related to the content of the involved requirement are asked of the tool user; she only has to answer ‘Yes’ or ‘No’. At the end of each decision tree, the right category is proposed. Figs. 5(a), 5(b) and 5(c) illustrate the decision trees developed below.
For the Functional goal requirements (Line 59), there are three possible corresponding classes: Metric, Action guarantee and Operation. The structure of the decision tree is shown in Fig. 5(a). Its content is as follows:
1: Does the functional goal describe interaction(s) between the parties involved in the service use?
If Yes, then link the requirement to the Operation class.
If No, then go to Question 1.1.
1.1: Does the functional goal describe how a QoS property is measured?
If Yes, then link the requirement to the Metric class.
If No, then link the requirement to the Action guarantee class.
Concerning the Quality constraint (Line 60), there are two possibilities in the mapping: Service level objective and Binding. The decision tree, illustrated in Fig. 5(b), is as follows:
2: Does the quality constraint capture the needs about the format or the technologies used to exchange data with the service provider?

If Yes, then link the requirement to the Binding class.

If No, then link the requirement to the Service level objective class.
The last one-to-many relationship implies the Plan concept (Line 61) with
four possible corresponding concepts: Binding, Metric, Action guarantee and Operation. The decision tree, illustrated by Fig. 5(c), is as follows:
3: Does the plan describe a process to follow?
If Yes, then go to Question 3.1.
If No, then go to Question 3.2.
3.1: Does the plan describe how a QoS property is measured?
If Yes, then link the requirement to the Metric class.
If No, then link the requirement to the Operation class.
3.2: Does the plan state a commitment of a party involved in the service use?
If Yes, then link the requirement to the Action guarantee class.
If No, then link the requirement to the Binding class.

Figure 5: Decision trees for the one-to-many mappings: (a) for the Functional goal class, (b) for the Quality constraint class, and (c) for the Plan class
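The trees are simple enough to be implemented directly. The following Java sketch encodes them as yes/no predicates over a requirement; the interface and method names are ours, purely illustrative of how STR@WS-like tooling can drive the dialogue:

```java
// The Answers interface stands in for the Yes/No dialogue with the tool user.
interface Answers {
    boolean describesInteraction();     // Question 1
    boolean describesQoSMeasurement();  // Questions 1.1 and 3.1
    boolean capturesDataFormatNeeds();  // Question 2
    boolean describesProcess();         // Question 3
    boolean statesCommitment();         // Question 3.2
}

public class DecisionTrees {
    // Tree (a): refine a Functional goal (Line 59 of Table 4).
    static String refineFunctionalGoal(Answers a) {
        if (a.describesInteraction()) return "Operation";
        return a.describesQoSMeasurement() ? "Metric" : "Action guarantee";
    }

    // Tree (b): refine a Quality constraint (Line 60 of Table 4).
    static String refineQualityConstraint(Answers a) {
        return a.capturesDataFormatNeeds() ? "Binding" : "Service level objective";
    }

    // Tree (c): refine a Plan (Line 61 of Table 4).
    static String refinePlan(Answers a) {
        if (a.describesProcess())
            return a.describesQoSMeasurement() ? "Metric" : "Operation";
        return a.statesCommitment() ? "Action guarantee" : "Binding";
    }
}
```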
6 Related work
Two tools [67, 68] and a method [69] have been proposed in order to ease the WS discovery process. Based on textual requirements, WSs matching the WS consumer's needs are suggested. However, these works focus exclusively on functional requirements, and the requirements are expressed without any RE structure. That makes the discovery task more demanding in terms of methods for extracting accurate information from the various requirements.
Rolland et al. [70] introduce a model for Intentional Service Modelling (ISM): WS providers have to describe their WSs and WS consumers use an “intentional matching mechanism” to select potential WSs. This model requires new technologies for publishing, browsing and discovering services in comparison to the most widespread ones, i.e., UDDI and ebXML registries. The QoS characteristics of WSs are not considered in the discussion. Another relevant paper [71] uses the ISM approach. The authors improve the work of Rolland and colleagues by taking into account the QoS levels of WSs during the matching and selection step. The Service-Based Applications (SBA) must be modeled in terms of stakeholders' requirements, and not in terms of technical and procedural aspects. As with the work of Rolland and colleagues, the use of ISM requires that both the service consumers and the service providers learn how this language has to be used.
Regarding the solutions for semantic matching between WS descriptions and the needs of the WS consumer, related work is often built on technical languages and specifications. For instance, [72], [73] and [74] respectively use USQL (Universal Service Query Language), DAML-S and BPOL (Business Process Outsourcing Language). The handling of those technologies requires thorough knowledge of each of them. Works on semantic matching often concentrate on the WS provider side, e.g., [75, 76, 77, 78]. In order to have a comprehensive approach to the problem, we also need a user-friendly solution that eases the requirements elicitation task on the WS consumer side.
In [79], the authors propose a method and a tool which allow the service users to express their requirements. The tool analyzes them in order to help the users during the requirements refinement process and in the discovery of errors or conflicts. The authors create their own meta-model for the four elements required in service consumption (i.e., role, goal, process and service). The method and the tool are very interesting. However, they are grounded in the WS literature oriented towards the service producer [22]. By grounding the RE for services in a generic ontology for requirements, we take the point of view of the service consumers. Adopting the consumer point of view is very important in order to build a comprehensive method and/or tool supporting the whole RE process for the definition of service requests.
The work of Zachos et al. [80] shares some similarities with ours. They create a tool which is able to discover WSs based on requirements expressed by the user in natural language. The requirements elicitation process depends on use-case analysis. Requirements related to the use-cases are then added in their system, UCaRE, which follows the VOLERE requirements shell. The scope of our work is more restricted than theirs: we focus exclusively on the mapping between the requirements of the WS consumer and WSLA/WSDL. Our approach uses CORE as the source of requirements concepts, rather than use cases. Moreover, we formalize the mapping between the requirements, which could be expressed in natural language, and their specifications. First, this makes it possible to keep track of requirements when a WS is selected. If the system-to-be selecting WSs cannot replace a defective WS, it is able to identify too-demanding requirements by comparing the characteristics of the best-fitted WS with the consumer requirements contained in the service request. Secondly, it enables the consequences of requirements changes to be analyzed directly against the (composite) WS chosen. This is very important for requirements monitoring in an SOA, as already noted in [81]. With regard to works related to RE monitoring in a service-oriented environment [81, 82], the methods proposed to elicit requirements are based on RE techniques. Our contribution could be complementary to those works in order to improve the RE process.
In [83], the authors propose a WS composition framework based on state machines. Their system iteratively helps WS consumers to elicit their needs. In case of problems during the WS composition, the causes are exposed to the service consumer. Then, the system helps him to reformulate his needs. Seeing that there is no ontological grounding for the requirements expressed by WS consumers, the latter must know both the RE and the service context, and relate those two conceptualizations himself. Our view on the problem allows the WS users and their software/requirements engineers to concentrate only on the RE issue.
The last significant paper [84] related to this work proposes an online monitoring of the WS requirements. The aim of the authors is to make the behaviour of WSs consistent with the requirements of the service consumer. To this end, they design a novel language, the Web Service Constraint Description Language (WSCDL), with which value and event constraints are captured. As in many other works, a new “standard” is once again proposed. Moreover, the content of a WSCDL file obviously comes from RE work, but this is neither clearly underlined nor explained. Therefore, our work is complementary to their research: we point out the origin of the service request content by bridging the requirement types to the service concepts. This should improve the monitoring of the requirements, and especially the understanding and the forecasting of the consequences of changes in the service consumer's needs.
7 Conclusion
Service-oriented computing raises new issues, including the management of requirements: mainly, their elicitation, their capture, their analysis and their specification into a service request. In the literature, authors often work with purely technical specifications to capture and specify the service consumer's requirements. Adding a clear link between an ontology for requirements and a service taxonomy allows (i) moving a step closer to the automation of the creation of service requests based on the WS consumers' requirements, (ii) helping the WS composition system to easily identify non-suitable requirements asked by the WS consumer, (iii) knowing which requirements are no longer satisfied when a WS provider fails to comply with the agreement, and (iv) knowing precisely which part of a WSLA and/or WSDL document must be modified when the WS consumer changes some of his requirements. Creating and keeping this link is enabled by the proposed mappings between the two conceptualizations of the problem tackled in this paper. The main original idea is to base the high-level representation on an ontology for RE and translate it into WS descriptions.
7.1 Future work
Taking into account the possible faults of the service-oriented system in actual operation is a priority for future work. Reinecke, Wolter and Malek's contribution [85] appears to be a relevant starting point towards that aim: they propose an overview of the fault models available both in WS technologies (e.g., WSDL, see §2.3) and in communication technologies (e.g., HTTP).
On the RE side, a requirements modeling language should be created or adapted in order to capture the requirements expressed by the WS consumers. To (automatically) reason on the requirements expressed, we have to structure them. The requirements modeling language could be grounded in Techne [86]. This would also ease the translation of an RE solution into a specification of the service request that is usable by discovery tools.
This paper does not cover the difference between hard and soft SLOs. WS consumers often express their minimal requirements regarding the non-functional characteristics of the WS, as well as additional (soft) SLOs that increase their satisfaction. The paper also leaves aside the issue of requirements concerning orchestration and choreography; before tackling this question, RE for a single WS should first be handled adequately.
Taking into account the gaps (see §4.2) between the two levels of requirements representation is also a future task. This can be done within a wider framework composed of our tool as well as other computational modules enabling the discovery and the composition of WSs based on the WSLA/WSDL specifications.
The last point to improve is the process followed to refine the one-to-many mappings. It could be enhanced by, e.g., adding syntactic and/or semantic matching based on the requirements' content.
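As a hypothetical illustration of such a syntactic refinement, candidate specification elements could be ranked against the requirement text with a simple token-overlap score; all class, method and candidate names below are our own, and the scoring choice is only one of many possible.

```java
import java.util.*;

// Hypothetical sketch: refine a one-to-many mapping by ranking candidate
// WSLA/WSDL elements against the requirement text with a Jaccard
// (token-overlap) score. Not part of the proposed tool.
public class MappingRefiner {

    /** Lower-cased word tokens of a phrase. */
    private static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
    }

    /** Jaccard similarity between the token sets of two phrases. */
    static double similarity(String a, String b) {
        Set<String> ta = tokens(a), tb = tokens(b);
        Set<String> inter = new HashSet<>(ta);
        inter.retainAll(tb);
        Set<String> union = new HashSet<>(ta);
        union.addAll(tb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    /** Orders the candidate target elements by similarity to the requirement. */
    static List<String> rank(String requirement, List<String> candidates) {
        List<String> ranked = new ArrayList<>(candidates);
        ranked.sort(Comparator.comparingDouble(
                (String c) -> similarity(requirement, c)).reversed());
        return ranked;
    }

    public static void main(String[] args) {
        List<String> candidates = List.of(
                "ResponseTime SLO", "Throughput SLO", "Payment operation");
        System.out.println(rank("maximum response time of the service", candidates));
    }
}
```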
References
A Appendix: Definition of PaymentType
Here is the XML Schema of the PaymentType element.
Specification 8.
```xml
<xsd:complexType name="PaymentType">
<xsd:sequence>
<xsd:element name="Debtor" type="xsd:string"/>
<xsd:element name="Amount" type="xsd:float"/>
<xsd:element name="CausingGuarantee" type="xsd:string"/>
<xsd:element name="Currency" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>
```
B Appendix: Definition of AddressTransmissionType and CoordinatesTransmissionType
Here are the XML Schemas of the AddressTransmissionType element and of the CoordinatesTransmissionType element.
```xml
<types>
  <element name="AddressTransmissionType">
    <complexType>
      <sequence>
        <element name="Street" type="string"/>
        <element name="Number" type="integer"/>
        <element name="Box" type="string"/>
        <element name="ZIP" type="integer"/>
        <element name="City" type="string"/>
        <element name="Country" type="string"/>
      </sequence>
    </complexType>
  </element>
  <element name="CoordinatesTransmissionType">
    <complexType>
      <sequence>
        <element name="Latitude" type="OneCoordinateType"/>
        <element name="Longitude" type="OneCoordinateType"/>
      </sequence>
    </complexType>
  </element>
  <complexType name="OneCoordinateType">
    <sequence>
      <element name="Degree" type="intDegree"/>
      <element name="Minute" type="int2"/>
      <element name="Second" type="int2"/>
    </sequence>
  </complexType>
  <simpleType name="intDegree">
    <restriction base="integer">
      <totalDigits value="3"/>
      <minInclusive value="-180"/>
      <maxInclusive value="180"/>
    </restriction>
  </simpleType>
  <simpleType name="int2">
    <restriction base="integer">
      <totalDigits value="2"/>
      <minInclusive value="-60"/>
      <maxInclusive value="60"/>
    </restriction>
  </simpleType>
</types>
```
Year: 2015
Tracing Software Developers’ Eyes and Interactions for Change Tasks
Kevic, Katja; Walters, Braden M; Shaffer, Timothy R; Sharif, Bonita; Shepherd, David C; Fritz, Thomas
DOI: https://doi.org/10.1145/2786805.2786864
Posted at the Zurich Open Repository and Archive, University of Zurich
ZORA URL: https://doi.org/10.5167/uzh-112287
Published Version
Originally published at:
Kevic, Katja; Walters, Braden M; Shaffer, Timothy R; Sharif, Bonita; Shepherd, David C; Fritz, Thomas (2015). Tracing Software Developers’ Eyes and Interactions for Change Tasks. In: 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Bergamo, Italy, 30 August 2015 - 4 September 2015.
DOI: https://doi.org/10.1145/2786805.2786864
Tracing Software Developers’ Eyes and Interactions for Change Tasks
Katja Kevic†, Braden M. Walters‡, Timothy R. Shaffer‡, Bonita Sharif‡, David C. Shepherd*, Thomas Fritz†
†University of Zurich, Switzerland
Department of Informatics
{kevic,fritz}@ifi.uzh.ch
‡Youngstown State University, USA
Department of CS and IS
{bmwalters01,trshaffer}@student.ysu.edu
bsharif@ysu.edu
*ABB Corporate Research, USA
Industrial Software Systems
david.shepherd@us.abb.com
ABSTRACT
What are software developers doing during a change task? While an answer to this question opens countless opportunities to support developers in their work, little is known about developers' detailed navigation behavior for realistic change tasks. Most empirical studies of developers performing change tasks are limited to very small code snippets or by the granularity and detail of the data collected. In our research, we try to overcome these limitations by combining user interaction monitoring with very fine-grained eye-tracking data that is automatically linked to the underlying source code entities in the IDE.
In a study with 12 professional and 10 student developers working on three change tasks from an open source system, we used our approach to investigate the detailed navigation of developers for realistic change tasks. The results of our study show, amongst others, that the eye-tracking data does indeed capture different aspects than user interaction data and that developers focus on only small parts of methods that are often related by data flow. We discuss our findings and their implications for better developer tool support.
Categories and Subject Descriptors
D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement
General Terms
Human Factors, Experimentation
Keywords
eye-tracking, gaze, change task, user study
1. INTRODUCTION
Software developers spend a majority of their time working on change tasks, such as bug fixes or feature additions [25].
In order to successfully complete these tasks, they have to read, navigate and edit the relevant pieces of code [22, 16]. Since the inception of software development, researchers have been studying how developers read and navigate code, and what kind of knowledge they acquire (e.g., [43, 5, 22]). The more we know about a developer’s work, the better we are able to support her, for instance, by reducing information overload [21], improving defect prediction [24], or providing automatic navigation recommendations [13, 29].
Yet relatively few studies have been undertaken to investigate detailed navigation behavior of developers for realistic change tasks. The lack of realistic studies is due to the significant challenges and effort of acquiring the time of professional software developers to participate as well as of capturing, transcribing and coding longer sessions of developers’ work on change tasks. More recently, approaches have been developed to automatically capture more data from a developer’s interactions with source code elements in an integrated development environment (IDE) [2, 21]. These approaches capture source code elements mostly on the class and method level and are based on explicit user interactions with the mouse or keyboard.
Recent advances in technology afford new opportunities to collect a wide variety of more detailed information on a software developer and her work. Studies with sensors for tracking biometric features, such as eye gaze, have generated new insights on developers’ work on small code tasks, such as perceptions of difficulty [15], brain activation patterns [42], the scanning patterns of code [36] or the ease of comprehending different representations of code [40, 6]. Most of these studies focus on very small code comprehension tasks with a single method or class, in particular, since they require manual linking between the gaze data collected with an eye-tracker and the source code elements a developer looked at.
While these studies provide valuable first insights, the advances in technology open up the opportunity to address further important research questions, such as, what is a developer’s fine-grained navigation behavior for realistic change tasks, what is the difference in the data captured through eye-tracking and interaction logging and how can we use eye-tracking data to support developers. Answering these questions will allow us to better understand developers’ comprehension of large code bases and to develop better and more fine-granular tool support for developers.
In our research, we take advantage of the opportunities that eye-tracking provides and extend previous work by addressing some of these questions, focusing on more realistic change tasks to investigate how developers read and navigate code while working. In particular, we examine how eye-tracking data differs from the data captured by monitoring user interactions in an IDE, how developers' eyes move within and between methods, and how these newly gained insights can be used to better support developers in their work on change tasks. We developed an approach to automatically link eye-tracking data to the source code elements in the IDE, which combines the ease of automatically collecting data in an IDE with the finer granularity of eye-tracking data. Our approach also supports the scrolling and switching of code editor windows by developers; it thus allows change task investigations on a realistic-sized code base and is not limited to very small tasks as most previous studies are. This new approach for conducting user studies in software development has the potential to reduce the cost of generating detailed, rich user data and valuable insights into developers' navigation behavior.
We conducted a study with 22 participants, 12 professional developers and 10 students, working on three realistic change tasks for a total of 60 minutes while automatically tracing their eye gazes and their explicit user interactions in the code editor of the Eclipse IDE. Our analysis of the gathered data shows, amongst other results, that eye-tracking captures substantially different data than a developer’s navigation within the IDE, that developers only look at a few lines of a method when working on a change task and that these lines are often related to the data flow of variables within these methods. These results also provide evidence for the value of combining eye-tracking with interaction monitoring in an IDE in the future.
This paper makes the following contributions:
- Study findings based on eye-tracking and user interaction monitoring that provide insights into the detailed navigation behavior of 22 developers working on realistic change tasks.
- An approach to automatically and on-the-fly capture the fine-grained source code elements a developer looks at in an IDE while working with large files, thereby significantly improving current state-of-the-art that limits eye tracking studies to only single methods.
- A discussion on the value of the data gathered and the opportunities the data and the findings offer for better developer support.
2. RELATED WORK
Our work can be seen as an evolution of techniques to empirically study software developers working on change tasks. Therefore, we classify related work roughly along its evolution into three categories: manual capturing, user interaction monitoring, and biometric sensing of developers’ work.
Manual Capturing.
Researchers have been conducting empirical studies of software developers for a very long time. Many of the earlier studies focused on capturing answers of participants after performing small tasks to investigate code comprehension and knowledge acquisition (e.g., [10, 41, 32]). Later on, researchers started to manually capture more detailed data on developers’ actions. Altmann, for instance, analyzed a ten minute interval of an expert programmer performing a task and used computational simulation to study the near-term memory [5]. Perhaps one of the most well-known user studies from this category is the study by Ko et al. [22]. In this study, the authors screen captured ten developers’ desktops while they worked on five tasks on a toy-sized program and then hand-coded and analyzed each 70 minute session. In a study on developers performing more realistic change tasks, Fritz et al. [16] used a similar technique and manually transcribed and coded the screen captured videos of all participants. While all of these studies are a valuable source of learning and led to interesting findings, the cost of hand-coding a developers’ actions is very high, which led to only a limited number of studies providing detailed insights on a developers’ behavior.
User Interaction Monitoring.
More recently, approaches have been developed to automatically capture user interaction data within an IDE, such as Mylyn [2, 20, 21]. Based on such automatically captured interaction histories—logs of the code elements a developer interacted with along with a timestamp—researchers have, for instance, investigated how developers work in an IDE [27], how they navigate through code [28, 29, 47], or how developers' micro interaction patterns might be used for defect prediction [24]. Even the Eclipse team themselves undertook a major data collection project called the Usage Data Collector that, at its peak, collected data from thousands of developers using Eclipse. Overall, the automatic monitoring of user interactions was able to significantly reduce the cost of certain empirical studies. However, these studies are limited to the granularity and detail of the monitoring approach. In the case of user interaction monitoring, the granularity is predominantly the method or class file level, and detailed information, such as the time a developer spends reading a code element or when the developer is not looking at the screen, is missing, which makes it more difficult to fully understand the developers' traces.
Biometric Sensing.
In parallel to the IDE instrumentation efforts, researchers in the software development domain have also started to take advantage of the maturing of biometric sensors. Most of this research focuses on eye-tracking [31, 19], while only few studies have been conducted so far that also use other signals, such as an fMRI to identify brain activation patterns for small comprehension tasks [42], or a combination of eye-tracking, EDA, and EEG sensors to measure aspects such as task difficulty, developers’ emotions and progress, or interruptibility [15, 26, 52].
By using eye-tracking and automatically capturing where a developer is looking (eye gaze), researchers were able to gain deeper insights into developers’ code comprehension. One of the first eye-tracking studies in program comprehension was conducted by Crosby et al., who found that experts and novices differ in the way they looked at English and Pascal versions of an algorithm [11]. Since then, several researchers have used eye-tracking to evaluate the impact of developers’ eye gaze on comprehension for different kinds of representations and visualizations such as 3D visualizations [37], UML diagrams [51, 12], design pattern layout [39], programming languages [44], and identifier styles [40, 8]. Researchers have
also used eye-tracking to investigate developers' scan patterns for very small code snippets, finding that participants first read the entire code snippet to get an idea of the program [45]. Other researchers examined different strategies to overcome the single-page code task limitation of previous studies, to allow for change tasks on a realistic-sized code base where developers are able to naturally scroll and switch editor windows.
3. EXPLORATORY STUDY
We conducted an exploratory study with 22 participants to investigate the detailed navigation behavior of developers for realistic change tasks. Each participant was asked to work for a total of 60 minutes on three change tasks of the open source system JabRef in the Eclipse IDE, while we tracked their eyes and monitored their interaction in the IDE. For the eye-tracking part, we developed a new version of our Eclipse plugin called iTrace [49], by adding automatic linking between the eye gazes captured by the eye-tracking system to the underlying fine-grained source code elements in the IDE in real-time. All study materials are available on our website [9].
3.1 Procedure
The study was conducted in two steps at two physical locations. In the first step, we conducted the study with twelve professional developers on site at ABB. We used a silent and interruption free room that was provided to us for this purpose. In the second step, we conducted the study with ten students in a university lab at Youngstown State University. We used the same procedure as outlined below at both locations.
When a participant arrived at the study location, we asked her to read and sign the consent form and to fill out the background questionnaire on her previous experience with programming, Java, bug fixing and Eclipse. Then, we provided each participant with a document containing the study instructions and a short description of JabRef. Participants were encouraged to ask questions at this stage to make sure they understood what they were required to do during the study. The entire procedure of the study was also explained to them by a moderator. In particular, participants were told that they would be given three bug reports from the JabRef repository and that the goal was to fix the bug if possible. However, we did mention that what ultimately mattered was the process they used to eventually fix the bug and not the final bug fix.
For the study, participants were seated in front of a 24-inch LCD monitor. When they were ready to start, we first performed a calibration for the eye-tracker within iTrace. Before every eye-tracking study, it is necessary to calibrate the system to each participant's eyes in order to properly record gaze data. Once the system was successfully calibrated, the moderator turned on iTrace and Mylyn to start collecting both types of data while the participants worked on the change tasks. Participants were given time to work on a sample task before we started the one hour study on the three main tasks. At the end of each change task, we had a time-stamped eye gaze session of line-level data and the Mylyn task interactions saved in a file for later processing. We also asked each participant to type their answer (the class(es)/method(s)/attribute(s) where they might fix the bug) into a text file in Eclipse at the end of each change task.
For the study, each participant had Eclipse with iTrace and Mylyn plugins installed, the JabRef source code, a command prompt with instructions on how to build and run JabRef, and sample bib files to test and run JabRef. There were no additional plugins installed in Eclipse. The study was conducted on a Windows machine. Each participant was able to make any necessary edits to the JabRef code and run it. They were also able to switch back and forth between the Eclipse IDE and the JabRef application. iTrace detects when the Eclipse perspective is in focus and only then collects eye gaze data. We asked subjects not to resize the Eclipse window to maintain the same full screen setup for all subjects and not to browse the web for answers since we wanted to control for any other factors that might affect our results.
3.2 Participants
For our study, we gathered two sets of participants: twelve professional developers working at ABB Inc. that spend most of their time developing and debugging production software, and ten undergraduate and graduate computer science students from Youngstown State University. Participants were recruited through personal contacts and a recruiting email. All participants were compensated with a gift card for their participation.
All professional developers reported having more than five years of programming experience. Seven of the twelve reported having more than five years of experience programming in Java, while the other five reported having about one year of Java programming experience. Nine of the twelve professional participants also rated their bug fixing skills as above average or excellent. With respect to IDE usage, four of the twelve stated that they mainly use Visual Studio for work purposes and that they were not familiar with the Eclipse IDE, and one participant commented on mainly being a vim and command line user. Of the twelve professional developers, two were female and ten were male.
Among the ten student participants, one participant had more than five years of programming experience, five students had between three and five years programming experience, and four of them had less than two years programming experience. Three of the students reported having between three and five years of Java programming experience, while seven students had less than two years. Three of the ten students rated their bug fixing skills as above average, and seven rated them as average. All but one student stated that
they were familiar with the Eclipse IDE. Of the ten students, one was female and nine male.
3.3 Subject System and Change Tasks
We chose JabRef as the subject system in this study. JabRef is a graphical application for managing bibliographic databases that uses the standard LaTeX bibliographic format BibTeX, and can also import and export many other formats. JabRef is an open source, Java based system available on SourceForge [1] and consists of approximately 38 KLOC spread across 311 files. The version of JabRef used in our study was 1.8.1, release date 9/16/2005.
To have realistic change tasks in our study, we took the tasks directly from the bug descriptions submitted to JabRef on SourceForge. Information about each task is provided in Table 1. All of these change tasks represent actual JabRef tasks that were reported by someone on SourceForge and that were eventually fixed in a later JabRef release. The tasks were randomly selected from a list of closed bug reports with varied difficulty, as determined by the scope of the solution implemented in the repository. We selected a set of three change tasks to be performed by all participants, which we consider a reasonable number without causing fatigue in the one hour of the study. A time limit of 20 minutes was placed on each task so that participants would work on all three tasks during the one hour study. To familiarize participants with the process and the code base, each participant was also given a sample task before starting with the three main tasks; we did not analyze the data tracked for this sample task. The task order of the three main tasks was randomly chosen for each participant.
3.4 iTrace
For capturing eye-tracking data and linking it to source code elements in the IDE, we developed and use a new version of our Eclipse plugin iTrace [35]. For this new version, we added the ability to automatically and on-the-fly link eye gazes to fine-grained AST source code elements, including method calls, variable declarations and other statements in the Eclipse IDE. In particular, iTrace gives us the exact source code element that was looked at with line-level granularity. Furthermore, to support a more realistic work setting, we added features to properly capture eye gazes when the developer scrolls or switches code editor windows in the IDE, or when code is edited. Eye-tracking on large files that do not completely fit on one screen is particularly challenging as none of the state-of-the-art eye-tracking software supports scrolling while maintaining context of what the person is looking at. Our new version of iTrace overcomes this limitation and supports the collection of correct eye gaze data when the developer scrolls both, horizontally and vertically as well as when she switches between different files in the same or different set of artifacts.
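The following sketch illustrates the core geometric idea behind such scroll-aware linking: a screen-space gaze sample is translated into a document line/column by adding the editor's current scroll offset before the lookup. All names and the fixed-metric font assumption are ours; iTrace's actual implementation relies on the Eclipse editor and AST APIs rather than this simplified arithmetic.

```java
// Minimal sketch of scroll-aware gaze-to-source mapping (assumptions: a
// monospaced font with uniform line height; names are illustrative).
public class GazeToSourceMapper {

    private final int lineHeightPx;  // pixel height of one rendered text line
    private final int charWidthPx;   // average pixel width of one character

    public GazeToSourceMapper(int lineHeightPx, int charWidthPx) {
        this.lineHeightPx = lineHeightPx;
        this.charWidthPx = charWidthPx;
    }

    /**
     * Maps a gaze sample to a (line, column) pair in the file currently shown
     * by the editor, compensating for vertical and horizontal scrolling.
     */
    public int[] toDocumentPosition(int gazeXPx, int gazeYPx,
                                    int editorOriginXPx, int editorOriginYPx,
                                    int scrollXPx, int scrollYPx) {
        int line = (gazeYPx - editorOriginYPx + scrollYPx) / lineHeightPx;
        int column = (gazeXPx - editorOriginXPx + scrollXPx) / charWidthPx;
        return new int[] { line, column };
    }
}
```

The resulting (line, column) pair is what a subsequent AST lookup would consume to resolve the source code element under the gaze.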
iTrace interfaces with an eye-tracker, a biometric sensor usually in the form of a set of cameras that sit in front of the monitor. For our study, we used the Tobii X60 eye-tracker [4], which does not require the developer to wear any gear. The Tobii X60 has an on-screen accuracy of 0.5 degrees. To accommodate this and still have line-level accuracy of the eye gaze data, we chose to set the font size to 20 points for source code files within Eclipse. We ran several tests to validate the accuracy of the collected data.
After calibrating the eye-tracker through iTrace’s calibration feature, the developer can start working on a task and the eye gazes are captured with the eye-tracker. iTrace processes each eye gaze captured with the eye-tracker, checks if it falls on a relevant UI widget in Eclipse and generates an eye gaze event with information on the UI in case it does. iTrace then uses XML and JSON export solvers, whose primary job is to export each gaze event and any information attached to it to XML and JSON files for later processing.
Currently, iTrace generates gaze events from gazes that fall on text and code editors in Eclipse. These events contain the pixel X and Y coordinates relative to the top-left corner of the current screen, the validity of the left and right eye as reported by the eye-tracker (i.e., whether the eye was properly captured), the left and right pupil diameter, the time of the gaze as reported by the system and the eye-tracker, the line and column of the text/code viewed, the screen pixel coordinates of the top-left corner of the current line, the file viewed, and, if applicable, the fully qualified names of the source code entities at the gaze location. The fully qualified names are derived from the abstract syntax tree (AST) model of the underlying source code. For this study, we implemented iTrace to capture the following AST elements: classes, methods, variables, enum declarations, type declarations, method declarations, method invocations, variable declarations, any field access, and comments. These elements are captured regardless of scope, which includes anonymous classes.
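For concreteness, a gaze event along these lines could be represented as follows; the field names are assumptions mirroring the list above, not iTrace's exact schema.

```java
// Sketch of a gaze event record (field names are our assumptions).
public record GazeEvent(
        int x, int y,                    // screen pixels, top-left origin
        boolean leftValid, boolean rightValid,
        double leftPupilDiameter, double rightPupilDiameter,
        long systemTime, long trackerTime,
        int line, int column,            // text position viewed
        int lineBaseX, int lineBaseY,    // screen position of the line start
        String file,
        java.util.List<String> fullyQualifiedNames) { }
```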
3.5 Data Collection
For this study, we collected data on participants’ eye traces and their interactions with the IDE simultaneously. Since we conducted our study with the Eclipse IDE, we used the Eclipse plugin Mylyn [2, 20] to monitor user interactions. For the eye-tracking data, we used our new version of the Eclipse plugin iTrace [35].
We gathered a total of 66 change task investigations from the 12 professional developers and 10 computer science students, who each worked on three different change tasks. For each of these investigations, we gathered the eye-tracking data and the user interaction logs. Due to some technical difficulties, such as a participant wearing thick glasses or too many eye gazes not being valid for a task, we excluded 11 change task investigations and ended up with 55 overall: 18 subjects investigating task 2, 16 subjects investigating task 3, and 21 subjects investigating task 4. With respect to individual method investigations over all participants and tasks, we gathered a total of 688 method investigation instances.
4. STUDY RESULTS
Based on the collected logs of eye gazes (gaze context) and user interactions (interaction context) of the 22 participants we were able to make detailed observations on how developers navigate within source code. Table 2 summarizes the gaze and interaction contexts we collected and used to infer our observations from. In the following, we structure our observations along three research foci: the difference between gaze and user interaction data, developers’ navigation within methods and developers’ navigation between methods.
4.1 Interaction Context and Gaze Context
**O1—Gaze contexts capture substantially more, and more fine-grained, data.** To compare the different amounts of elements within the gaze and the interaction contexts, we
used a paired-samples t-test\(^1\) with pairs consisting of the gaze and the interaction context for a task and subject.
This paired-samples t-test showed that the number of different classes contained in the gaze context \((M = 4.78, SD = 3.58)\) and the number of different classes contained in the interaction context \((M = 4.42, SD = 3.00)\) do not differ significantly \((t(54) = 1.98, p = .053)\). Nevertheless, there were more classes captured in the gaze contexts, which turned out to be internal classes or classes defined in the same file. While there is no significant difference on the class level, there is a significant difference in the number of methods captured. The number of different methods within the gaze contexts \((M = 12.51, SD = 11.75)\) is significantly higher than the number of different methods within the interaction contexts \((M = 6.04, SD = 4.53)\), \(t(54) = 4.57, p < .05\). This substantial difference in the number of elements within the gaze and interaction contexts provides evidence that developers often look at methods that they do not select. Approaches that only analyze interaction logs thus miss a substantial amount of information.
When analyzing the method sequences captured in the logs, the data also shows that gaze context not only captures more elements, but also more details on the actual sequences of navigation between methods. A paired-samples t-test revealed a significant difference in the number of method switches captured in gaze contexts \((M = 73.45, SD = 78.47)\) and the number of method switches captured in interaction contexts \((M = 5.75, SD = 5.17)\), \(t(54) = 6.52, p < .05\). Table 2 summarizes the number of unique methods and the number of method switches for each context type and participant.
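For readers who want to reproduce this kind of comparison, the paired-samples t statistic can be computed directly as below; this is a minimal sketch, and the study itself presumably relied on a standard statistics package.

```java
// Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1,
// where d[i] = a[i] - b[i] are the per-subject differences.
public class PairedTTest {

    static double tStatistic(double[] a, double[] b) {
        int n = a.length;
        double meanDiff = 0;
        for (int i = 0; i < n; i++) meanDiff += (a[i] - b[i]) / n;
        double ss = 0;
        for (int i = 0; i < n; i++) {
            double d = (a[i] - b[i]) - meanDiff;
            ss += d * d;
        }
        double sdDiff = Math.sqrt(ss / (n - 1));  // sample standard deviation
        return meanDiff / (sdDiff / Math.sqrt(n));
    }
}
```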
**O2—Gaze and Interaction Contexts capture different aspects of a developer's navigation.** To evaluate whether gaze and interaction contexts capture different aspects of a developer's navigation for change task investigations, we defined ranking models based on the data available in the different contexts and compared the top ranked methods. There is a variety of models that can be used to select the most important elements within a navigation sequence \([29]\). For our analysis, we used single-factor models, also suggested in previous studies \([28, 29]\), to select the most important elements in each kind of context. To rank the methods of a gaze context we used a time-based model, which ranks a method higher the more time a developer spends looking at it. To rank the methods of an interaction context we used a frequency model, which ranks a method higher the more often it was visited.
\(^1\)According to the central limit theorem, with large samples number \((>30)\), the distribution of the sample mean converges to a normal distribution and parametric tests can be used \([14]\).
The comparison of the top 5 methods for each change task investigation resulted in an average agreement of 65.03\% \((SD = 32.26\%)\). Comparing solely the highest ranked method for each context pair results in an agreement of 27.27\%. The agreement on the top 5 most important methods, however, is considerably lower for change task 2 \((M = 52.31\%, SD = 34.98\%)\) than for change task 3 \((M = 71.88\%, SD = 27.62\%)\) and for change task 4 \((M = 70.71\%, SD = 31.32\%)\). While the descriptions of change task 3 and change task 4 include concrete hints to source code elements which are possibly important for performing the change task, change task 2 required exploring the source code more exhaustively in order to find the relevant code and a possible fix. These results illustrate that the gaze context, especially in the form of gaze times, captures aspects that are not captured in the interaction context and that might be used to develop new measures of relevance. Especially since gaze contexts also capture elements that are not in the interaction context \((O1)\), the more fine-grained gaze data might provide better and more accurate measures of relevance.
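A minimal sketch of the two single-factor models and the top-k agreement measure might look as follows; the input shapes are assumptions on our part.

```java
import java.util.*;
import java.util.stream.*;

// Sketch of the ranking models compared above and of top-k agreement.
public class RankingModels {

    /** Time-based model: rank methods by total gaze time (milliseconds). */
    static List<String> byGazeTime(Map<String, Long> gazeMillisPerMethod) {
        return gazeMillisPerMethod.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    /** Frequency model: rank methods by number of interaction events. */
    static List<String> byVisitFrequency(List<String> interactionLog) {
        Map<String, Long> counts = interactionLog.stream()
                .collect(Collectors.groupingBy(m -> m, Collectors.counting()));
        return byGazeTime(counts);  // same sort, different weights
    }

    /** Fraction of overlap between the top-k methods of both rankings. */
    static double topKAgreement(List<String> r1, List<String> r2, int k) {
        Set<String> top1 = new HashSet<>(r1.subList(0, Math.min(k, r1.size())));
        Set<String> top2 = new HashSet<>(r2.subList(0, Math.min(k, r2.size())));
        top1.retainAll(top2);
        return (double) top1.size() / k;
    }
}
```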
4.2 Navigation Within Methods
We base the analysis of navigation within methods solely on the gaze data, since interaction contexts do not capture enough detail to analyze within-method navigation.
**O3—Developers only look at a few lines within methods and switch often between these lines.** Figure 1 depicts the lines a professional developer (middle) and a student developer (right) looked at within a certain method and over time during a change task investigation.
Across all subjects and tasks, developers only look at few lines within a method, on average 32.16\% \((SD = 24.95\%)\) of the lines. The lengths of methods included in this analysis thereby differed quite a lot, with an average length of 53.03 lines \((SD = 139.37)\), and had a moderate influence on the number of lines looked at by a developer, Pearson’s \(r = .398, p = .01\). Participants performed on average 39.95 \((SD = 100.99)\) line switches within methods. The method length again influences the amount of line switches moderately, Pearson’s \(r = .305, p = .01\).
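The Pearson correlation used here is the standard one; for completeness, a direct implementation could look like this (a minimal sketch).

```java
// Pearson's r between paired samples x[i], y[i]:
// r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))
public class PearsonCorrelation {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i] / n; my += y[i] / n; }
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }
}
```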
Further examination of the kind of lines developers actually looked at shows that developers spend most of their time within a method looking at method invocations \((M = 4081.98\,ms)\) and variable declaration statements \((M = 1759.6\,ms)\), but spent surprisingly little time looking at method signatures \((M = 1099.67\,ms)\). In fact, in 319 cases out of the 688 method investigations analyzed, the method signature was ignored and not looked at. Our findings demonstrate that developers who are performing an entire change task involving several methods and classes read methods differently than developers who are reading methods disconnected from any task or context, in which case the method signature might play a stronger role.
<table>
<thead>
<tr>
<th>ID</th>
<th>Bug ID</th>
<th>Date Submitted</th>
<th>Title</th>
<th>Scope of Solution in Repository</th>
</tr>
</thead>
<tbody>
<tr>
<td>T2</td>
<td>1436014</td>
<td>2/21/2006</td>
<td>No comma added to separate keywords</td>
<td>multiple classes: EntryEditor, GroupDialog, FieldContentSelector, JabRefFrame</td>
</tr>
<tr>
<td>T3</td>
<td>1594123</td>
<td>11/10/2006</td>
<td>Failure to import big numbers</td>
<td>single method: BibtexParser.parseFieldContent</td>
</tr>
<tr>
<td>T4</td>
<td>1489454</td>
<td>5/16/2006</td>
<td>Acrobat Launch fails on Win98</td>
<td>single method: Util.openExternalViewer</td>
</tr>
</tbody>
</table>
Table 1: Tasks used in the study.
Table 2: Summary of professional (pro) and student (stu) developers’ average (avg) of methods and method switches captured in the gaze and interaction context, as well as the percentage of lines read within methods.
<table>
<thead>
<tr>
<th rowspan="2">ID</th>
<th colspan="2">avg # of method switches</th>
<th>avg # of unique methods</th>
</tr>
<tr>
<th>gaze context</th>
<th>interaction context</th>
<th>gaze context</th>
</tr>
</thead>
<tbody>
<tr><td>P1</td><td>6.5</td><td>3</td><td>4.5</td></tr>
<tr><td>P2</td><td>59.7</td><td>10</td><td>12</td></tr>
<tr><td>P3</td><td>50</td><td>7.5</td><td>15</td></tr>
<tr><td>P4</td><td>46</td><td>3.5</td><td>16.5</td></tr>
<tr><td>P5</td><td>126</td><td>12.5</td><td>14</td></tr>
<tr><td>P6</td><td>22.5</td><td>4.5</td><td>5.5</td></tr>
<tr><td>P7</td><td>226</td><td>8.7</td><td>39.3</td></tr>
<tr><td>P8</td><td>47.7</td><td>3</td><td>5.3</td></tr>
<tr><td>P9</td><td>50.5</td><td>3</td><td>6.5</td></tr>
<tr><td>P10</td><td>172</td><td>9</td><td>9</td></tr>
<tr><td>P11</td><td>64</td><td>6.7</td><td>12.3</td></tr>
<tr><td>P12</td><td>138</td><td>5</td><td>8</td></tr>
<tr><td>avg pro</td><td>83.73</td><td>6.42</td><td>13.38</td></tr>
<tr><td>S1</td><td>13.3</td><td>2</td><td>8.7</td></tr>
<tr><td>S2</td><td>20</td><td>1.7</td><td>6.7</td></tr>
<tr><td>S3</td><td>45.3</td><td>2.4</td><td>8.7</td></tr>
<tr><td>S4</td><td>96.3</td><td>15</td><td>23.7</td></tr>
<tr><td>S5</td><td>96</td><td>7</td><td>11.7</td></tr>
<tr><td>S6</td><td>10.5</td><td>3.5</td><td>3</td></tr>
<tr><td>S7</td><td>142.3</td><td>0.7</td><td>9</td></tr>
<tr><td>S8</td><td>64</td><td>4.7</td><td>19.7</td></tr>
<tr><td>S9</td><td>59.7</td><td>5</td><td>8.3</td></tr>
<tr><td>S10</td><td>77</td><td>9</td><td>15</td></tr>
<tr><td>avg stu</td><td>64.24</td><td>5.14</td><td>11.72</td></tr>
<tr><td>total avg</td><td>73.45</td><td>5.75</td><td>12.51</td></tr>
</tbody>
</table>
**O4—Developers chase data flows within a method.** To better understand how developers navigate within a method, we randomly picked six change task investigation instances from the collected gaze contexts and manually retraced the paths participants followed through a method by drawing their line switches on printouts of the methods. Closely examining these printed methods with the eye traces drawn on top allowed us to form the hypothesis that developers often trace variables when reading a method. To further investigate this hypothesis, we selected four methods which were investigated by most participants, resulting in 40 unique method investigation instances (see Table 3). These 40 instances stem from 18 different participants and two different tasks; 22 of them stem from professional software developers, while the other 18 stem from students.
For each method, we assigned a color to each variable used within the method and colored the lines in which the variable was either defined or used. We did not color lines or statements that did not include a variable. Over all four methods, we identified an average of 7.25 variable slices per method, with an average of 6.2 different lines of code per slice. Then, we applied this line-to-color mapping to the sequence logs of participants who investigated these methods (see Figure 2 for an example). Within each sequence log, we ignored the lines which did not map to a slice, such as brackets or empty lines. As we are investigating whether developers trace variables when reading a method, we further ignored control flow statements which did not use any variable. In the event of more than one variable being used in a single line, we manually checked whether a color was predominantly used before or after the line was visited and decided on a color accordingly (using the predominant color). In cases where there was no evidence of a predominant color, we picked the color of the variable that was used first in the source code line.
Our analysis revealed that developers switched between the lines of these four methods on average 178.0 ($SD = 189.9$) times. We then used our color coding to examine how many of these line switches occur within variable slices (lines with the same color). Over all method investigation instances, we found an average of 104.2 ($SD = 112.1$) of the 178 line switches to be within a variable slice, supporting our hypothesis that developers are in fact following data flows when investigating a method. The long green and yellow blocks within Figure 2 further illustrate the frequency of switching within a variable slice rather than between different variable slices.
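A sketch of this counting step, under the assumption that the gaze log has already been reduced to lines that map to a slice, is given below; all names are ours.

```java
import java.util.*;

// Sketch: count how many line switches stay within one variable slice,
// given the temporal sequence of lines looked at and a line-to-slice map.
public class SliceSwitchAnalysis {

    /**
     * @param gazeLineSequence lines looked at, in temporal order, with lines
     *                         that map to no slice already removed
     * @param sliceOfLine      maps a line number to its variable slice id
     */
    static int switchesWithinSlice(List<Integer> gazeLineSequence,
                                   Map<Integer, String> sliceOfLine) {
        int within = 0;
        for (int i = 1; i < gazeLineSequence.size(); i++) {
            int prev = gazeLineSequence.get(i - 1);
            int curr = gazeLineSequence.get(i);
            if (prev != curr
                    && Objects.equals(sliceOfLine.get(prev), sliceOfLine.get(curr))) {
                within++;  // a line switch that stays inside one variable slice
            }
        }
        return within;
    }
}
```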
4.3 Navigation Between Methods
Overall, subjects switched on average 73.45 (SD = 78.48) times between methods when working on a change task, and revisited a method on average 5.44 times.
**O5—Developers frequently switch to methods in close proximity and rarely follow call relationships.** To investigate the characteristics of method switches we examined whether they were motivated by call relationships or due to the close proximity of methods. We assessed for each method switch within a class and for each method switch to a different class whether the switch was motivated by following the call graph of the method. In addition, we assessed for each method switch within the same class whether the sequentially next method looked at is directly above or directly below the current method. We conducted this analysis for both contexts: the gaze context and the interaction context.
To determine whether a method switch was motivated by following the call graph, we recorded the method invocations within a given method and assessed whether the next method in the method sequence was one of the recorded invoked methods. While we had to consider all method invocations within a given method when analyzing the interaction context, we could precisely assess which method invocation the developer actually looked at when analyzing the gaze context. If the next method in the sequence was equal to one of the recorded invoked methods, we concluded that the developer likely followed the call relationship (a switch potentially motivated by the call graph), although the next method could also have been within spatial proximity, with the call relationship unimportant for the navigation. If the next method was not contained within the recorded method invocations, we concluded that the developer's navigation was motivated by means other than call relationships. To determine whether the method looked at next is directly above or directly below the current method, we compared the line numbers in the source file.
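The classification just described can be sketched as follows; the input shapes (a set of looked-at invocations and a line-indexed map of methods) are assumptions for illustration.

```java
import java.util.*;

// Sketch: classify a method switch as potentially call-graph motivated
// (next method is among the invocations of the current one), proximity
// motivated (next method is directly above or below in the file), or other.
public class SwitchClassifier {

    enum Kind { CALL_GRAPH, PROXIMITY, OTHER }

    static Kind classify(String current, String next,
                         Set<String> invocationsLookedAtInCurrent,
                         NavigableMap<Integer, String> methodsByStartLine) {
        if (invocationsLookedAtInCurrent.contains(next)) return Kind.CALL_GRAPH;
        // Locate the current method in the file and compare its neighbours.
        for (Map.Entry<Integer, String> e : methodsByStartLine.entrySet()) {
            if (e.getValue().equals(current)) {
                Map.Entry<Integer, String> above = methodsByStartLine.lowerEntry(e.getKey());
                Map.Entry<Integer, String> below = methodsByStartLine.higherEntry(e.getKey());
                if ((above != null && above.getValue().equals(next))
                        || (below != null && below.getValue().equals(next))) {
                    return Kind.PROXIMITY;
                }
            }
        }
        return Kind.OTHER;
    }
}
```

For the gaze context the invocation set can be restricted to invocations actually looked at, whereas for the interaction context all invocations in the current method would have to be passed in.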
**gaze context**
We found that merely 4.05% (SD = 6.68%) of all method switches were potentially motivated by following the call graph. On average, the subjects switched methods potentially motivated by the call graph more when they were investigating change task 2 (M = 6.57%, SD = 9.36%) than when they were investigating change task 3 (M = 1.87%, SD = 2.94%) and change task 4 (M = 3.18%, SD = 4.34%). A paired-samples t-test showed that developers followed call relationships more often when switching methods within a class (M = 4.44%, SD = 7.12%) than when switching between different classes (M = 0.70%, SD = 4.50%), t(54) = 3.17, p = .003.
At the same time, a larger share of all method switches ended in methods directly above or below the current method (M = 36.95%, SD = 25.57%). These results suggest that the call graph of a project is not the main driver of navigation between methods; rather, the location of a method captures an important aspect of navigation between methods.
**interaction context**
We found that 22.61% (SD = 29.00%) of all method switches were potentially motivated by following the call graph. In contrast to the results of the gaze context analysis, participants switched between methods potentially motivated by the call graph substantially more when they were investigating change task 3 (M = 38.23%, SD = 31.56%) than when they were investigating change task 2 (M = 8.05%, SD = 13.89%) and change task 4 (M = 23.19%, SD = 31.42%). On average, subjects followed considerably more call relations when they were navigating within the class (M = 24.15%, SD = 34.71%) than when they were navigating to a method implemented in another class (M = 6.44%, SD = 20.74%).
We further found that on average 69.93% (SD = 39.01%) of the method switches within a class were aimed towards methods which are directly above or below a method.
Overall, these results also show that the more coarse grained interaction context indicates that developers follow structural call graphs fairly frequently (22.6%) while the more fine grained gaze context depicts a different image with only 4.1% of switches being motivated by structural call relations.
Our results on switches to methods in close proximity further support the findings of a recent head-to-head study that compared different models of a programmer’s navigation [29] and that suggested to use models to approximate a developer’s navigation based on the spatial proximity of methods within the source code.
**O6—Developers switch significantly more to methods within the same class.** A paired-samples t-test shows that developers switched significantly more between methods within the same class (M = 65.22, SD = 73.20) than from a method to a method implemented in another class (M = 8.24, SD = 11.95), t(54) = 6.07, p < .001. While, over all three tasks, participants rarely switched to methods of different classes, the participants' method switching within the same class differs between tasks. A Wilcoxon matched pairs signed rank test indicates that participants switched significantly more between methods within classes for task 2 (M = 103.50, SD = 106.23) than for task 4 (M = 36.31, SD = 39.08), z = -2.66, p = .008. While it is not surprising that different tasks result in different navigation behavior, this also suggests that it is important to take the task into account for support tools, such as code navigation recommendations.
4.4 Differences Based on Experience
Previous empirical studies on software developers found differences in the patterns that experienced and novice developers exhibit (e.g., [11]). To investigate such differences, we analyzed our data for differences in navigation between our professional developers and our students. In particular, we tested each statistic that contributed to the above observations and examined whether there were any statistically significant differences in the gaze, respectively interaction, contexts. To compare the professional developers and the students we used a Mann-Whitney test, as there are different participants in each group and the data does not meet parametric assumptions. Overall, we did not find any statistically significant difference between the two groups of participants in the amounts of unique elements on different granularity levels within the gaze context (U = 341.0, p = .539 on class level, U = 363.5, p = .820 on method level) nor the interaction context (U = 368.0, p = .878 on class level, U = 286.5, p = .125 on method level). Furthermore, there was no significant difference in the amounts of switches conducted between different elements within a class (U = 314.5, p = .292 for the gaze contexts and U = 297.5, p = .174 for the interaction contexts) nor outside of a class (U = 337.0, p = .495 for the gaze contexts and U = 266.5, p = .058 for the interaction contexts). Finally, we also could not find any significant difference in the amount of call relationships followed (U = 325.5, p = .362 for the gaze contexts and U = 268.0, p = .055 for the interaction contexts) nor in whether either of the two groups switched more often to methods with a high spatial proximity (U = 367.5, p = .873 for the gaze contexts and U = 332.0, p = .445 for the interaction contexts).
<table>
<thead>
<tr>
<th>Method Name</th>
<th># Investigation Instances (pro,stu)</th>
<th>Length in lines</th>
<th># Identified Slices</th>
<th>Avg Lines Per Slice</th>
<th>Avg Line Switches</th>
<th>Avg Line Switches Within Slice</th>
</tr>
</thead>
<tbody>
<tr>
<td>BibtexParser.parseFieldContent</td>
<td>12 (6,6)</td>
<td>92</td>
<td>10</td>
<td>4</td>
<td>232.6</td>
<td>125.7</td>
</tr>
<tr>
<td>Util.openExternalViewer</td>
<td>11 (7,4)</td>
<td>132</td>
<td>10</td>
<td>8.1</td>
<td>158.3</td>
<td>77.6</td>
</tr>
<tr>
<td>BibtexParser.parseTextToken</td>
<td>9 (4,5)</td>
<td>29</td>
<td>3</td>
<td>4</td>
<td>95.0</td>
<td>66.6</td>
</tr>
<tr>
<td>BrowserLauncher.locateBrowser</td>
<td>8 (5,3)</td>
<td>108</td>
<td>6</td>
<td>8.8</td>
<td>216.3</td>
<td>150.9</td>
</tr>
</tbody>
</table>
Table 3: Methods selected for the variable slice analysis (pro = professional developers, stu = students).
So even though our exemplary figure (Figure 1), which depicts a sequence log for a professional and a student developer, might suggest a difference in navigation behavior, our analysis did not produce any such evidence. Further analysis is needed to examine this aspect in more detail.
4.5 Threats to Validity
One threat to validity is the short time period each participant had for working on a change task. Unfortunately, we were limited by the time availability of the professional developers and therefore had to restrict the main part of the study to one hour. While the data might thus not capture full task investigations, it provides insights on investigations for multiple change tasks and thus the potential of being more generalisable.
Another threat to validity is the choice of JabRef as the subject system. JabRef is written in a single programming language, and its code complexity and quality might influence the study. For instance, code of low quality and/or high complexity might result in developers spending more time reading and understanding it, and thus in longer eye gaze times for certain parts of the code. We tried to mitigate this risk by choosing a generally available system that is an actively used and maintained open source application and that was also used in other studies. Further studies, however, are needed to examine the effect of factors such as code quality to generalise the results.
In our study, JabRef had to be run through the command prompt using ANT and not directly in Eclipse. This meant that participants were not able to use breakpoints and the debugger within Eclipse, which might have influenced the results. We intend to conduct further studies to investigate whether our findings generalise to other settings, e.g., ones in which the project can be run from within Eclipse.
iTrace collects eye gazes only within Eclipse editors. This means that we do not record eye gaze when the developer is using the command prompt or running JabRef. However, since we were interested in the navigation between the code elements within the IDE, this does not cause any problems for our analysis.
If the user opens the “Find in File” or “Search” window within Eclipse, or a tooltip pops up when hovering over an element in the code, the eye gaze is not recorded, as a new window overlaps the underlying code editor window and iTrace did not support gazes on search windows at the time of the study. To minimize the time in which eye gazes could not be recorded, we asked participants to close these windows as soon as they were done with the find feature so that gaze recording could continue.
Finally, while most professional developers mainly use Visual Studio for their work, we conducted our study in Eclipse. However, all professional developers stated that they did not have problems using Eclipse during the study.
5. DISCUSSION
Tracing developers’ eyes during their work on change tasks offers a variety of new insights and opportunities to support developers in their work. Especially, the study’s focus on change tasks, the richness of the data, and the finer granularity of the data provide potential for new and improved tool support, such as code summarization approaches or code and artifact recommendations. In the following, we will discuss some of these opportunities.
**Richness of Eye-Tracking Data and Gaze Relevance.**
Our findings show that the eye-tracking data captures substantially more ($O1$) and different aspects ($O2$) of a developer’s interaction with the source code. Therefore, eye-tracking data can be used to complement user interaction task context and further enhance existing approaches, such as task-focused UIs [21] or models for defect prediction [24]. In particular, since eye-tracking data also captures gaze times—how long a developer spends looking at a code element—more accurate models of a code element’s relevance could be developed, as well as models of how difficult a code element is to comprehend, which might inform the necessity of refactoring it.
To examine the potential of the gaze time, we performed a small preliminary experiment to compare a gaze-based relevance model with a model based on user interaction. We focused on professional developers and were able to collect and analyze relevance ratings from 9 of them, in part because not every participant was willing to spend additional time on this part of the study. Each developer was asked to rate the relevance of the top 5 elements ranked by gaze time as well as the top 5 ranked by degree-of-interest (DOI) from Mylyn’s user interaction context [21] on a five-point Likert scale. Overall, participants rated 76% of the top 5 gaze elements as relevant or very relevant and only 65% of the top 5 DOI elements as relevant or very relevant. While these results are preliminary and further studies are needed, the 17% improvement illustrates the potential of the data richness in the form of the gaze time.
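To make the comparison concrete, the following is a minimal sketch, not the study’s tooling: it assumes gaze samples have already been mapped to code elements (as iTrace does) and ranks elements by accumulated gaze time next to a DOI-style ranking. The element names and numbers are made up for illustration.

```python
from collections import defaultdict

def rank_by_gaze(gaze_samples, k=5):
    """gaze_samples: iterable of (element_id, fixation_duration_ms) pairs."""
    totals = defaultdict(float)
    for element, duration_ms in gaze_samples:
        totals[element] += duration_ms
    # Elements with the largest accumulated gaze time come first.
    return sorted(totals, key=totals.get, reverse=True)[:k]

def rank_by_doi(doi_scores, k=5):
    """doi_scores: mapping element_id -> degree-of-interest value."""
    return sorted(doi_scores, key=doi_scores.get, reverse=True)[:k]

# Hypothetical samples; element names are only illustrative.
gaze = [("BasePanel.runCommand", 2100.0), ("EntryEditor.storeSource", 950.0),
        ("BasePanel.runCommand", 1800.0), ("JabRefFrame.init", 400.0)]
doi = {"BasePanel.runCommand": 3.2, "JabRefFrame.init": 5.1,
       "EntryEditor.storeSource": 1.4}
print(rank_by_gaze(gaze))   # gaze-based top elements
print(rank_by_doi(doi))     # DOI-based top elements
```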
**Finer Granularity of Data and Task Focus.**
Most current tools and research approaches to support development work focus on method or class level granularity. Most prominently, editors of common IDEs, such as Visual Studio or Eclipse, display whole classes, but even the recently suggested new bubble metaphor for IDEs displays full methods [9]. Similarly, approaches to recommend relevant code elements for a task, such as Mylyn [21, 2] or wear-based filtering [13], operate on the class and method level. While the method and class level are important, our results show that developers only focus on small fractions (on average 32%) of methods that are important for the change task at hand ($O3$). These findings suggest that by identifying, highlighting and possibly filtering the parts within methods that are relevant for the task, we might be able to save developers time and effort to switch between relevant parts of code and avoid getting distracted by other irrelevant code. Since developers focus a lot on data flow within a method ($O4$) that is related to the task, we hypothesise that a task-focused program slicing approach might provide a lot of benefit to developers working on change tasks. Such an approach could take advantage of existing slicing techniques, such as static or dynamic slicing [50, 23], and identify the relevance of a slice based on its relevance to the task by, for instance, using textual similarity between the slice and the task description or previously looked at code elements.
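As a rough illustration of how the relevance of a slice could be tied to the task at hand, the sketch below scores candidate slices by simple lexical overlap with the task description. The tokenization, the slices and the task wording are assumptions made for illustration, not the approach evaluated in this paper.

```python
import re

def tokens(text):
    """Lower-cased word set; camelCase identifiers are split naively."""
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return {w.lower() for w in re.findall(r"[A-Za-z]+", spaced)}

def slice_relevance(slice_code, task_description):
    """Jaccard overlap between the slice's words and the task's words."""
    a, b = tokens(slice_code), tokens(task_description)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Made-up task and slices, purely for illustration.
task = "Export the selected entries to an external file"
slices = {
    "s1": "File exportFile = chooseExportFile(); writeEntries(selectedEntries, exportFile);",
    "s2": "int width = table.getColumnWidth(col);",
}
for name in sorted(slices, key=lambda s: slice_relevance(slices[s], task), reverse=True):
    print(name, round(slice_relevance(slices[name], task), 2))
```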
By using eye-tracking to capture a more fine-grained task context while a developer is working, we are also able to better determine what a developer is currently interested in and
complement existing approaches to recommend relevant artifacts to the developer, such as Hipikat [46] or Prompter [30].
Finally, the insights from our study can also be used to inform summarization techniques to help developers comprehend the relevant parts of the code faster. Existing techniques to summarize code have mainly focused on summarizing whole methods [17, 18] rather than only summarizing the parts relevant for a given task. Similarly, the approach by Rodeghero et al. [34] focused on using eye-tracking to summarize whole methods. Our findings show that developers usually do not read or try to comprehend whole methods and rather focus on small method fractions and data flow slices for a change task. This suggests that a more task-focused summarization that first identifies relevant code within a method according to previous eye-tracking data or other slicing techniques and then summarizes these parts of the method, might help to provide more relevant summaries and aid in speeding up code comprehension.
**Accuracy of Method Switches.**
The eye-tracking data captured in our study shows that a lot of the switches between methods are between methods in close proximity, as well as within a class ($O5$, $O6$). These findings suggest that developers commonly assume that nearby code is closely related. While this is not a new finding, the additional data captured through eye-tracking that is not captured by user interaction monitoring provides further evidence for this switch behavior. This finding also suggests that a fisheye view that zooms in on the current method and provides much detail on methods in close proximity but less on methods further out might support faster code comprehension for developers.
A common assumption of navigation recommendation approaches is that structural relations between elements are important in a developer's navigation [33]. While empirical studies that examined developers' navigation behavior based on user interactions have shown that developers do follow such structural relations frequently, in particular call relations (e.g., [16]), the eye-tracking data of our study shows that developers perform many more switches that do not follow these relations and that are not captured by explicit user interaction. These findings point to the potential of eye-tracking data for improving method recommendations as well as for identifying the best times for suggesting structural navigation recommendations. However, further studies are needed to examine this possibility.
**An Eye-Tracker per Developer.**
As discussed, using eye-trackers in practice and installing them for each developer, not just for study purposes, bears a lot of potential to improve tool support, such as better task focus, recommendations or summarization. With the advances and the price decrease in eye-tracking technology, installing eye-trackers for each developer might soon be reasonable and feasible. At the same time, there are still several challenges and questions to address for such a setup to be smooth and of value to developers, in particular with respect to eye calibration, granularity level and privacy. Several eye-trackers, especially cheaper ones, currently still need a recalibration every time a developer changes position with respect to the monitor, which is too expensive for practical use. For tool integration, one has to decide on the level of granularity that is best for tracking eye gazes. While more fine-grained data might provide more potential, eye-tracking at a finer granularity level is also more susceptible to noise in the data. Finally, as with any additional data that is being tracked about an individual’s behavior, finer granular data also raises more privacy concerns that should be considered before such an approach is deployed. For instance, the pupil diameter or the pattern of eye traces might also be used to monitor the cognitive load of the developer, which could also be used in harmful ways.
### 6. CONCLUSION
To investigate developers’ detailed behavior while performing a change task, we conducted a study with 22 developers working on three change tasks of the JabRef open source system. This is the first study that simultaneously collects both eye-tracking and interaction data while developers work on realistic change tasks. Our analysis of the collected data shows that gaze data contains substantially more, and more fine-grained, data, providing evidence that gaze data is in fact different and captures different aspects compared to interaction data. The analysis also shows that developers working on a realistic change task only look at very few lines within a method rather than reading the whole method, as was often found in studies on single-method tasks. A further investigation of the eye traces of developers within methods showed that developers “chase” variables’ flows within methods. When it comes to switches between methods, the eye traces reveal that developers only rarely follow call graph links and mostly switch to elements in close proximity of the method within the class.
These detailed findings provide insights and opportunities for future developer support. For instance, the findings demonstrate that method summarization techniques could be improved by applying some program slicing first and focusing on the lines in the method that are relevant to the current task rather than summarizing all lines in the whole method. In addition, the findings suggest that a fisheye view of code zooming in on methods in close proximity and blurring out others, might have potential to focus developers’ attention on the relevant parts and possibly speed up code comprehension.
The approach that we developed for this study automatically links eye gazes to source code entities in the IDE and overcomes limitations of previous studies by supporting developers in their usual scrolling and switching behavior within the IDE. This approach opens up new opportunities for conducting more realistic studies and gathering rich data while reducing the cost for these studies. At the same time, the approach opens up opportunities for directly supporting developers in their work, for instance, through a new measure of relevance using gaze data. However, possible performance and especially privacy concerns have to be examined beforehand.
### 7. ACKNOWLEDGMENTS
The authors would like to thank the participants in the study. The authors would also like to thank Meghan Allen for her helpful feedback. This work was funded in part by an SNF grant and an ABB grant.
### 8. REFERENCES
The Systems Architecture Method
– An Overview
This document provides a very brief overview of the Systems Architecture Method (SAM). This is intended to provide users with sufficient methodological information and an understanding of the meta-model to configure and build their architectural models.
Understanding… The Meta-model
To use the Systems Architecture Method (SAM) effectively, securely and safely, it is essential that the user fully understands the fundamental structure of the meta-model, how the various diagrams and spreadsheets work and how the repository is populated and viewed.
In this section we describe only the salient features of each of these subjects.
Scope and Boundaries
All architectures and models must have boundaries, and the scope of the work must be understood and applied. The boundaries define the “width” and the scope defines the “depth” of our work. Some EA work can be described as “a mile wide and an inch deep”; other projects tackle an area “an inch wide and a mile deep”. We need a reasonable compromise between these extremes. In SAM, we scope and bound our EA projects using the notions of domain, program, phase, context and timeframe.
We define these as follows:
**Domain:**
An industry, or governmental area of authority or a citizen-focused service, that is clearly understandable, comprehensive and largely self-contained from the points of view of governance, funding, management and professional competences. Examples might be banking, insurance, manufacturing, air transport, local government, education, law enforcement, healthcare and social care.
**Program:**
We define “Program” as a major coordinated endeavor aimed at revising or reforming some or all of the enterprise’s systems, organization, processes, infrastructure and technical platform within its domain. A program will encompass many projects in many disciplines over a period of time.
**Phase:**
Each program is divided, in a “timeline” sense, into phases, each with a specified start and end point and defined criteria to judge success in their completion. Typically, these criteria are well-defined for the first phases of a program, but are less firm for the later phases. Often there is a formal “gateway” process to authorize transition from one phase to the next, involving assessment of quality, achievement of objectives, costs, benefits and return on investment.
Highlight
Architecture is often described in terms of conceptual, logical and physical views. Conceptual views are high-level, are described using generalized terms and contain only the most important objects and relationships. A logical view shows the major architectural components and their relationships within the architectural boundary, independently of the technical details of how the architecture is implemented. Physical views are the least abstract of the views and illustrate the specific implementation components and their relationships. Sometimes we might also add a contextual view, which sets the general scope of the architecture.
**Context:**
Each phase of a program can be expressed in differing perspectives of definition. We use the perspectives of “Conceptual, Logical and Physical”. Initially each phase is defined in conceptual terms and as work progresses the logical and eventually the physical definition emerge.
To these three contexts we sometimes add that of **Contextual**. This is an initial, almost “brainstorming” notion aimed at setting out the broad scope and boundaries of the program and its phases.
**Timeframe:**
Within each phase of each program, we need to define a progressive time element. Clearly, we recognize the current state as the “As-Is” and the ultimate end result as the “Vision”. In between we may have a number of “To-Be’s” each representing a step forward. Sometimes we have a “Has-Been” if we need to understand how we got to where we are.
We can now define some important terms we use in SAM:
- **Level 1 Combinations**
- “Program/Phase” = “Domain” + “Program” + “Phase”
- “Architectural State” = “Context” + “Timeframe”.
- **Level 2 Combinations**
- “Active Environment” = “Program/Phase” + “Architectural State”
- **Level 3 Combinations**
- “Tree” = “Active Environment” + “Structure Code”
- “Forest” = “Structure Code” + All applicable “Active Environments”
We should apply validation to these combinations. The principle is that each element of a combination must have been defined before you can define a new combination. In other words, you should not define an Active Environment simply by a random selection from Domain, Program, Phase, Context and Timeframe. You can only define an Active Environment from pre-defined, valid “Program/Phases” and pre-defined, valid “Architectural States”. This helps apply some degree of semantic integrity to our models.
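A minimal sketch of this validation rule follows, assuming the pre-defined combinations are simply held as sets; the example domain, program and phase names are made up.

```python
# Pre-defined, valid combinations (illustrative content only).
valid_program_phases = {("Healthcare", "EPR-Rollout", "Phase-1")}
valid_architectural_states = {("Logical", "To-Be"), ("Physical", "As-Is")}

def define_active_environment(domain, program, phase, context, timeframe):
    """Only allow Active Environments built from pre-defined combinations."""
    program_phase = (domain, program, phase)
    architectural_state = (context, timeframe)
    if program_phase not in valid_program_phases:
        raise ValueError(f"Undefined Program/Phase: {program_phase}")
    if architectural_state not in valid_architectural_states:
        raise ValueError(f"Undefined Architectural State: {architectural_state}")
    return program_phase + architectural_state

print(define_active_environment("Healthcare", "EPR-Rollout", "Phase-1",
                                "Logical", "To-Be"))
```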
**Schema Definition**
The default schema is shown in Figure 1. The schema is based on a network of structures and relationships. Each red sphere represents a “Structure” and each connecting line represents a set of potential relationships.
The default structures are:
- Organization (Departmental and Role-based)
- Business Processes
- Applications
- Information & Communications Technology
- Infrastructure & Locations
- Business Functions
- Data
- Components & Services
- Objectives & Goals
- Projects & Programs
The list of structures is not fixed and may be altered, added to or reduced to reflect individual situations and requirements. The only constraint is that the overall schema retains its integrity and cohesion.
The Overall Model
Hierarchies
A common way of organizing and categorizing a large volume of data is to form a hierarchy or tree, for example, as in a filing cabinet or a set of computer folders or directories. It is a convenient, and common, way of organizing data.
In the Toolkit, each structure is modeled by a “parent/child” hierarchy with up to six levels (although fewer will be sufficient in many cases); thus each entry (called a member) has a parent and a number
of children. For example, in the Organization structure, a Department (level 4 say) might have a parent, Division (level 3), and multiple subsidiary Workgroups (level 5). In some situations, a member may have more than one parent and this is modeled using a separate set of relationships.
A decision to be made is which summarization, or decomposition, scheme should be used to define the levels in the hierarchy. There can of course be more than one set of levels, each summarizing the base population of members in a different way. For example, the Organization structure may be decomposed on the basis of divisions and departments, say, or alternatively on the basis of roles and teams, or indeed both.
**Structures and Relationships**
**What is a structure?**
View it as a collection of information pertaining to a particular topic of interest, for example the enterprise’s ‘Organizational Structure’. Think of a structure as a set of filing cabinets containing all the information on a particular facet of the enterprise, filed and organized for easy access. Various filing schemes could be used but a major advantage would be to avoid redundancy. We only want to file a particular fact in one place not many. This makes it easy to find and easy to update. Also it would be advantageous to organize the information in a tree structure, or hierarchy, provided that it fits. Thus we can keep the detail at the bottom of the stack and have summary layers above, making it easier to deal with large volumes of information. The information need not be textual, it could include diagrams, documents (or their references), or multimedia items.
An enterprise’s ‘Organizational Structure’ may look like that in *Figure 2.*
This could be represented more generically by Figure 3.
In this basic illustration we have regarded the organizational structure as a number of levels. The levels are linked by a simple parent-child relationship between the members of neighboring levels. Each child has only one parent. This is of course a rather simplistic representation of an often complex structure. It ignores complications like matrix and project-based organizations. In a matrix organization one member may have more than one parent on the next higher level or perhaps have a parent on another level altogether. This is handled by the use of relationships which we will discuss below.
Changing a Structure
An Organization Structure is far from static however. Change is likely to be frequent and of two types. The first type – change to the value of a member – is no particular problem; it is simply a matter of updating the ‘box’. The second type involves a more radical change to the structure – new levels can be introduced, restructuring can occur at a senior level, whole legs of the hierarchy can disappear and new ones grow. This may need redrafting of the specific structure but the generic form should survive.
More radical change does not usually happen all at once; it is usually implemented as part of a change program with ‘as-is’ and ‘to-be’ definitions and a migration program in between. Each migration step can be represented by a separate, parallel hierarchy within the structure. We call these ‘trees’ and will define them later too.
Some structures may have qualitatively different facets too, for example, the conceptual, logical and physical aspects of particular objects. These too can be represented as parallel hierarchies within the one structure and this is a common feature of structures dealing with topics such as data.
Another Structure
Having thought about Organization, let’s now think about another structure – Business Functions – the things an organization does. Figure 4 shows a simple ‘functional decomposition’ – a hierarchical representation of the functions of the enterprise. This has been simplified greatly for explanatory purposes. In a real enterprise there might well be several hundred low level functional activities – called ‘primitive functions’ in some methodologies. These are defined in such a way as to be non-redundant, i.e. the same primitive function does not repeat in different legs of the hierarchy, nor do the primitive functions overlap in their scope. [Note the contrast with Business Processes in which the lowest level activities or tasks – sometimes called ‘elementary processes’ – are usually repeated, perhaps many times, in different processes. Incidentally, the lowest level in Business Function (Primitive Function) is the same ‘object’ as the lowest level of Business Process (Elementary Process), the differentiator is the absence or presence of redundancy within the hierarchy]
Figure 4 - Sample Business Function Structure
The generic model for Business Function might look like Figure 5.
The upper groupings – The Enterprise and its Functional Groups – contain information common to items at a lower level in the hierarchy. The middle layer contains information about individual business functions – description, purpose, operating parameters, etc. The lowest level – functional activities – would contain information about the particular tasks carried out within a business function. These represent the basic, indivisible units of work within the enterprise.
**Relationships**
An organizational unit could be said to have degrees of responsibility for, and involvement in, a business function. Therefore we could say that an organizational unit ‘is responsible for’, or ‘involved in’ or ‘interested in’ a particular function. Since the relationships are two-way, the reverse relationship may be expressed thus – a business function is the ‘responsibility of’, ‘involves’ or ‘is an interest of’ a particular organizational unit.
We can record these relationships using a spreadsheet too. Firstly, form a skeleton matrix. Take the Organization sheet and copy the organization hierarchy to the y-axis of the new spreadsheet and then add the Business function hierarchy to the x-axis, as in Table 1.
Table 1. Skeleton Matrix - Organizational Structure vs. Business Function
<table>
<thead>
<tr>
<th>My Current Organisation</th>
<th>Planning</th>
<th>Marketing</th>
<th>Research</th>
<th>R & D</th>
<th>Design & Development</th>
<th>Manufacturing</th>
<th>Operations</th>
<th>Quality Assurance</th>
<th>Finance</th>
<th>Accounting</th>
<th>Human Resources</th>
<th>Legal</th>
<th>Review and Control</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>President</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Vice President Finance</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Controller</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Vice President Sales</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Order Control Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Electronic Sales Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Electrical Sales Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Vice President Engineering</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>R & D Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Vice President Production</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Plant Operations Director</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Production Planning Director</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Facilities Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Purchasing Manager</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Division Lawyer</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Planning Director</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Relating Organization and Business Functions
Now let’s populate this matrix with the relationships between organization and business functions. However before we do this we have to think about the levels in the hierarchies at which we express the relationships. It is important to maintain consistency in expressing relationships and this is done by mapping at consistent levels between the two hierarchies in the matrix e.g. from level 3 in hierarchy A to level 3 in hierarchy B (or perhaps from 2>3 or 3>2). Which levels to choose is a question of granularity. Can I accurately express the set of relationships between the members of each level? Map too high, say at 2>2, and everything is related to everything else and there is no differentiation. Map too low, say at 4>4, and there is a very large number of relationships; perhaps too many to comprehend fully or at least define consistently in a short period of time. In our example, let’s try to map from level 3 of Organization to level 3 of Function. The result is shown in Table 2.
Table 2. Populated Matrix - Organizational Structure vs Business Function with Relationships at Level 3
Some ‘workarounds’ were needed to build this matrix:
- Dealing with a missing level in the hierarchy:
- It will be noticed that ‘Personnel Director’, ‘Division Lawyer’ and ‘Planning Director’ report to the President but are not on the same level as Vice Presidents. We introduced a ‘dummy’ level-2 entry ‘Direct Presidential Reports’ to cope with this and keep relationships at the 3 to 3 level.
- Maintaining granularity:
- We mapped relationships at level 3 to level 3. From this you can deduce the relationships at level 2 to level 2 and from them, level 1 to level 1. The higher level relationship is the same as the most senior relationship at the subordinate level.
This also makes it easier to fill in relationships at a lower level, say at level 4 to level 4, since we have determined the relationships between the level 3 parents and thus can complete the lower level mapping in small pieces, perhaps over a period of time.
- Assigning seniority to relationships:
- We have a choice of relationship value for each cell – R for ‘Responsible’, I for ‘Involved’, or T for ‘Interested’. These represent degrees of responsibility – R is the most senior, then I, and T is the most junior. There is also a blank or ‘null’ value which indicates no relationship. You only need to put one value in the cell; the senior ones encompass the juniors. Every column, at each level, must have at least one instance of the senior relationship.
- Someone must be responsible for each function, thus there must be an R in each column. Every row must have at least one value, i.e. must not be completely null – otherwise it has no relevance in the analysis (these checks are sketched below).
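A minimal sketch of these integrity checks, together with the roll-up rule described above (the parent-level cell takes the most senior value of its children's cells), assuming the matrix is held as a simple mapping; all names and values are illustrative.

```python
SENIORITY = {"R": 3, "I": 2, "T": 1, "": 0}   # R is the most senior value

def check_matrix(matrix):
    """matrix: dict[(org_unit, function)] -> 'R' | 'I' | 'T' | ''."""
    orgs = {o for o, _ in matrix}
    funcs = {f for _, f in matrix}
    problems = []
    for f in funcs:                      # every column needs an 'R'
        if not any(matrix.get((o, f), "") == "R" for o in orgs):
            problems.append(f"No unit is Responsible for '{f}'")
    for o in orgs:                       # every row needs at least one value
        if not any(matrix.get((o, f), "") for f in funcs):
            problems.append(f"'{o}' has no relationships at all")
    return problems

def roll_up(matrix, org_parent, func_parent):
    """Parent-level cell takes the most senior value of its children's cells."""
    parent = {}
    for (o, f), value in matrix.items():
        key = (org_parent[o], func_parent[f])
        if SENIORITY[value] > SENIORITY[parent.get(key, "")]:
            parent[key] = value
    return parent

matrix = {("Controller", "Finance"): "R", ("Controller", "Planning"): "I",
          ("Planning Director", "Planning"): "R"}
print(check_matrix(matrix))   # -> [] (no problems in this tiny example)
```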
Formalizing Concepts
So far, we have constructed two simple structures and shown how they may be related. The information in each of the structures was organized into a hierarchy, or tree structure, consisting of several levels of increasing detail. Each level contains items, or members, of a similar degree of detail or granularity. Each member has only one ‘parent’ on the next higher level and may have more than one ‘child’ on the next lower level.
Thus the basic organization of information within a structure is a tree structure. This is a very common approach in real life. However other kinds of structure are to be found, for example where one member has more than one parent on the next higher level. This is called a ‘network’ structure and may be found, for example, in a structure concerned with ‘Business Processes’. The levels within Business Processes will include one called Process and the lower one called Task.
However a common Task may be carried out in more than one Process, thus there is a ‘many to many’ parent/child structure between members on contiguous levels.
The relationships were formed between members of one level of the Organization tree to members of one level of the Business Function tree of similar granularity. Each relationship has a set of values which might be as simple as ‘yes or no’ or be more meaningful. In our example, an Organizational Unit was ‘responsible for’, ‘involved in’ or ‘interested in’ a particular Business Function. Relationships are bi-directional or commutative – a Business Function is the ‘responsibility of’, ‘has the involvement of’ or ‘has the interest of’ a particular Organizational Unit. A handy shorthand notation for a commutative relationship might be ‘responsible for<>responsibility of’. Each of these relationship values could have associated attributes such as effective dates.
We have been thinking about structures, their members and the relationships between them. This may be visualized in a very simple way in the shape of a “Dumbbell” - Figure 6.
We can develop this idea to illustrate the various artifacts we use in SAM – see Figure 7.
Table 3 offers definitions of each of these artifacts and we briefly discuss each below.
There may be multiple kinds of relationships between two structures. A bundle of relationships between two structures is called a ‘link’. Each kind of relationship should be semantically separate from the other relationships and describe a different notion. For example, if we had structures Cars and People, the link might include the relationships ‘owns<>owned by’ and ‘drives<>driven by’. Clearly each of these relationships could apply to a pair of members in their respective structures.
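The link/relationship distinction can be sketched very simply; the representation below is an assumption made for illustration, reusing the Cars and People example and its phrase pairs.

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    forward: str                               # e.g. "owns"
    inverse: str                               # e.g. "owned by"
    pairs: set = field(default_factory=set)    # {(member_a, member_b), ...}

@dataclass
class Link:
    structure_a: str
    structure_b: str
    relationships: dict = field(default_factory=dict)

# The link between People and Cars bundles two commutative relationships.
people_cars = Link("People", "Cars", {
    "ownership": Relationship("owns", "owned by"),
    "driving": Relationship("drives", "driven by"),
})
people_cars.relationships["ownership"].pairs.add(("PERSON-007", "CAR-042"))
```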
Sometimes members of a structure on a particular level have relationships between themselves. These are called ‘recursive’ relationships. An example may be Tasks within Business Process which
have a ‘sequence’ within which they are executed. A recursive relationship of ‘follows<>followed by’ might be used to show this.
An important concept is the notion of versions within a structure. These may be regarded as parallel trees or hierarchies which describe differing aspects of the structure. These might for example represent the ‘as-is’ and ‘to-be’ aspects of the structure. This might be appropriate for a structure such as Organization when change is expected. Other trees might describe conceptual, logical and physical aspects of a structure such as Data or Application. A collection of related trees is called a ‘forest’.
Typically a user of an Enterprise Architecture is not interested in all of the structures in a model; they have a particular purpose in mind when exploring the EA which is very often linked to a job role and its specific informational needs.
Thus a useful aspect of an Enterprise Architecture is the capability of navigating selectively through a number of structures and links, tracing through a set of meaningful relationships. We call this a ‘view’. In SAM architectures concerned with IS/IT, we particularly address Business, Application, Information and Technology views. Other frameworks have different views.
Table 3. SAM Structure Definitions
<table>
<thead>
<tr>
<th>Term</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Structure</strong></td>
<td>A structure is a body of information about a particular subject or topic of interest to an enterprise – structured, filed and organised for easy access. Typical structures in an IS/IT context might include organisational structure, business processes, locations, objectives and goals, data, applications and so on. An enterprise will have many such structures and on a wider non-IS/IT basis these might include markets, products, competitors, etc.</td>
</tr>
<tr>
<td><strong>Members</strong></td>
<td>A discrete piece of information belonging to a structure. A structure about ‘Locations’ might have the members ‘Head Office’, ‘London Sales Office’, ‘Birmingham Plant’, and so on.</td>
</tr>
<tr>
<td><strong>Level</strong></td>
<td>The members within a structure may be divided into groups which recognise different summarisations of the information. The criteria for a particular kind of structural group may include increasing levels of detail about the members. Each of these groupings of members can be represented as a level within a hierarchy of information. Typically such hierarchies are formed using a single parent/multiple child relationships – a tree structure (decomposition) or a multiple parent/multiple child relationships – a network structure.</td>
</tr>
<tr>
<td><strong>Links</strong></td>
<td>Structures are related to each other. Between two structures there may be many connections with different meanings. For example a structure ‘Vehicles’ and a structure ‘Party’ might have connections like ‘ownership’, ‘driver’, ‘knocked down by’, ‘insured by’ etc.</td>
</tr>
<tr>
<td><strong>Relationship</strong></td>
<td>A connection between two members of different structures that expresses a link. There may also be relationships between members of the same structure (recursive relationships).</td>
</tr>
<tr>
<td><strong>Trees</strong></td>
<td>A tree is a hierarchical organisation of members of a structure into groups with a different focus such as conceptual, logical or physical characteristics or temporal characteristics (past, present and future). A tree contains the full hierarchy of levels of the structure, i.e. it may be viewed as a vertical ‘slice’ through the structure.</td>
</tr>
<tr>
<td><strong>Forest</strong></td>
<td>A collection of Trees, the set of which makes a coherent ‘family’, e.g. conceptual, logical and physical trees.</td>
</tr>
<tr>
<td><strong>View</strong></td>
<td>1. A representation of a whole system from the perspective of a related set of concerns (IEEE Std 1471-2000). 2. A meaningful collection of information composed from corresponding trees of levels from different structures and their relevant relationships. (SAM)</td>
</tr>
<tr>
<td><strong>Viewpoint</strong></td>
<td>1. A specification of the conventions for constructing and using a view. A pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis. (IEEE Std 1471-2000). 2. A template for the construction of a specific view from a set of structures, their levels, trees and the associated relationships. (SAM)</td>
</tr>
</tbody>
</table>
In formulating SAM some years ago, we were impressed by the form of the Atomium “building” in Brussels, Belgium (see Figure 8). Our impression, quite unfairly on the
architect, was that it resembled an assembly of “dumbbells” – just like our overall EA model. Since then we have used the analogy to explain the model and its navigation.
Figure 8 - The Atomium
The analogy with Enterprise Architecture may not be immediately obvious. As we have said, in system architecture work we are concerned with discovering facts about the enterprise, and the problem domain, and understanding how these facts relate to each other. From this understanding, we deduce various strategies and initiatives to modify and extend the facts, hopefully to the benefit of the enterprise.
The Atomium building is a useful illustration of the idea, which is the basis for SAM, in which we use the notion of structures, or “spheres of interest”, to represent coherent groups of facts, and the notion of the connecting tubes to represent the relationships between the groups of facts.
Placing ourselves, metaphorically, in one of the spheres, we can examine the facts in our sphere of interest, and then, look down the tubes and see related facts contained in another sphere of interest. In SAM, we might even travel along a tube (the Atomium building actually has elevators and escalators within some of the tubes) and go on a voyage of discovery from one set of facts to another, and even onwards to further related sets of facts.
1 Photo: R J A Jarvis
© Systems Advisers Ltd, 2014. All Rights Reserved.
Members and Identification
The “member” is the fundamental object in our architectural model. Each structure has a population of “base members” that form the lowest level of the structure hierarchy or tree and are indivisible objects that do not decompose any further. Members populate all levels, of course, those at higher levels being the summarization members for the level below.
The member population does not need to be complete, and indeed, rarely is. It is sufficient to stop the top-down decomposition at the level at which meaningful relationships between groups of members in different structures can be formed. This is usually at about the third or fourth level from the peak of the hierarchy. Further, having reached the useful level, it is often only necessary to “drill down” further for those members in the immediate area of interest.
We record the information for each member in a common format irrespective of the structure and level to which it belongs. In summary, this is:
- Member ID and Revision Level
- Structure Code, Level Set and Level
- Member Short Name
- Parent Member ID and Revision Level
- Member Description
- Member Attributes (up to five, user configurable by structure)
In the repository we hold all member records in one table. This, and the common format, enables a highly flexible approach to analysis.
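A minimal sketch of that common record format follows, with plausible field types; the Member ID formatting shown in Figure 9 is not reproduced here, so the example IDs and names are made up.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Member:
    member_id: str
    revision: int
    structure_code: str                 # e.g. "ORG"
    level_set: str                      # e.g. "Departmental"
    level: int                          # 1..6
    short_name: str
    parent_id: Optional[str] = None
    parent_revision: Optional[int] = None
    description: str = ""
    attributes: dict = field(default_factory=dict)   # up to five, per structure

# One row of the single repository table (IDs and names are invented).
finance_dept = Member("ORG-04-0012", 1, "ORG", "Departmental", 4,
                      "Finance Department",
                      parent_id="ORG-03-0003", parent_revision=1)
```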
Figure 9 shows a snapshot of the Structure Members table in the Toolkit Repository. The formatting of the Member ID should be noted.
Links and Relationships
Structures are, of course, associated with each other in many ways. For example, Applications use Technology, provide Business Functionality and manage Data. Business Processes use Applications and are carried out by Organizational units. Objectives and Goals are realized by Projects and Programs and so on. A link between two structures can have many semantic meanings each of which can be represented by a carefully defined relationship. Each of these relationships is expressed using a descriptive phrase such as those above.
Relationships may be declared between the members of one level of structure A and the members of a level of structure B. Thus we could record that a department (Level 4 in the Organization structure) is located in a certain Building (Level 3 in the Infrastructure & Locations structure).
It is the mapping, analysis and interpretation of relationships that engenders knowledge and understanding of the enterprise. Many times users have a sudden revelation as they map out a set of relationships. “Ah, that explains it!” is commonly heard as the explanation of a business issue is revealed.
A particularly important relationship is that a Business Function “creates, reads, updates or deletes” Data. The CRUD relationship, as it is known,
describes which business activities operate on the enterprise’s data resource. The construction and analysis of the CRUD relationship is a key activity in data-centric analyses such as in Service-oriented Architecture.
The best way to map relationships is to form a matrix with the structure members on the axes and the relationship values in the cells. A spreadsheet is a good vehicle for the construction of the matrix and Figure 13 shows a portion of such a matrix in Excel.
Cluster Analysis
Although many insights emerge as the basic matrix is constructed and studied, it is the subsequent manipulation and remapping of relationships that brings most return. A particularly useful technique is that of “Commutative Clustering” - a major feature of the SAM method.
The technique reveals patterns of relationships in the matrix which can be of considerable value in discovering and understanding the underlying truths of the enterprise. In simple terms, Commutative Clustering groups together pairs of members which are related by common ‘clusterable’ relationships such as Create and Update (CU) in the CRUD matrix. Slightly inaccurately, this may be described as a “double sort” of the matrix, firstly by a column and then by a row, progressively moving the origin down a column and along a row and sorting again.
We have defined a procedure for manual Commutative Clustering. We call this the “North West” method and it is detailed in Figure 10.
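The following is a minimal sketch of one reading of the “double sort” idea; the authoritative North West procedure is the one detailed in Figure 10, which is not reproduced here, so treat the ordering rules below as assumptions.

```python
def cluster_pass(rows, cols, cell, clusterable=("R",)):
    """One 'double sort' pass: pull members joined by clusterable
    relationships towards the top-left (north-west) corner."""
    rows, cols = list(rows), list(cols)
    origin = 0
    while origin < min(len(rows), len(cols)):
        head_row = rows[origin]
        # Columns related to the current row come first among the rest.
        cols[origin:] = sorted(
            cols[origin:],
            key=lambda c: cell.get((head_row, c)) not in clusterable)
        head_col = cols[origin]
        # Rows related to the current column come first among the rest.
        rows[origin:] = sorted(
            rows[origin:],
            key=lambda r: cell.get((r, head_col)) not in clusterable)
        origin += 1
    return rows, cols

# Tiny, invented matrix: clustering on 'R' pulls related pairs together.
rows = ["Controller", "R & D Manager", "Purchasing Manager"]
cols = ["Research", "Finance", "Accounting"]
cell = {("Controller", "Finance"): "R", ("Controller", "Accounting"): "R",
        ("Purchasing Manager", "Accounting"): "I",
        ("R & D Manager", "Research"): "R"}
print(cluster_pass(rows, cols, cell))
```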
As an example, let us cluster the matrix of Business Functions and Organisation that we built earlier using the North West method. If we cluster on the relationship value 'R' – Responsible for – at level 3, the resulting clusters will contain all organisational units responsible for particular business functions and also all business functions that are the responsibility of the particular organisational units. The clusters may be called ‘Responsibility Groups’. See Figure 11.
This result, one big cluster and five small ones, is a bit disappointing but not untypical of a first pass. A picture like this is caused by dubious relationships that actually ‘join’ clusters together. Is the Purchasing Manager really responsible for cost planning? Probably not, in which case we can downgrade the relationship to ‘involved in’. Similarly, is the Engineering Design Manager only involved in Design and Development? We should probably upgrade the relationship to ‘responsible for’. Adjusting the dubious relationships gives the result in Figure 12.
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Controller</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Purchasing Manager</td>
<td>R</td>
<td>I</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Division Lawyer</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td></td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Planning Director</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>R</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Treasurer</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td></td>
<td>R</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Plant Operations Director</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Production Planning Director</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>R</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Facilities Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Materials Control Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>R</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Order Control Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Marketing Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Electronic Sales Manager</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Electrical Sales Manager</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>R & D Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Engineering Design Manager</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>I</td>
<td>I</td>
<td>R</td>
<td>I</td>
<td>I</td>
<td>T</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
<tr>
<td>Personnel Director</td>
<td>T</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>T</td>
<td>R</td>
<td>T</td>
<td></td>
<td>R</td>
<td>R</td>
<td>I</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>R</td>
</tr>
</tbody>
</table>
**Figure 11 - 1st Pass Clusters**
We used this clustering technique to good effect in our Healthcare Business Pattern to define the business components and services. *Figure 13* and *Figure 14* show a portion of the Healthcare Business Pattern clustered matrix in both “raw” and “processed” form. Candidate business components are highlighted in yellow. The original matrices can be viewed in the Solutions Toolkit.
Figure 13 - CRUD Matrix (portion - unprocessed)
Figure 14 - CRUD Matrix (clustered - portion)
Granularity and Volatility
The population of structures and their associated relationships can be very large. A typical structure may well have over 10,000 members in total over all levels, and a set of relationships formed at level 6 of the hierarchy could well exceed 100,000 entries. This clearly is a problem in so far as the effort required for data capture and validation is equally large. However, at level 4 a typical structure may have fewer than 350 members, and relationships to another structure at this level might total between 1,000 and 2,000 entries. This (or indeed level 3) would clearly be a more appropriate level at which to define relationships, provided that the summarization scheme is such that the relationships declared are meaningful and truly encapsulate their subordinate levels. It usually is!
Context and Timeframe
The data we record and analyze in an Enterprise Architecture needs to be clearly identified and consistent in terms of its context and the timeframe to which it refers. With regard to context, architectural data may have a conceptual, logical or physical semantic meaning. With regard to timeframe, the data may refer to the present state (“as-is”), the desired end state (“vision”) and a variable number of intermediate, in-progress states (“to-be” stages). Each model, object, relationship and definition in the architecture should have a clear context and timeframe e.g. “logical, to-be” or “physical, as-is”. We call this “Architectural State”.
Forests and Trees
We have described how we can organize structure members into a hierarchy or tree structure. We have also noted that all data should have context and timeframe. We handle this by forming a tree for each context and timeframe combination (Architectural State) within a structure. Thus for each structure, there is a family of trees that track the evolving population from “as-is to vision” on one hand and progressive development from “conceptual to physical” on the other. We call this collection of trees for a single structure a “Forest”.
Views and Viewpoints
The individual users of an Enterprise Architecture have differing purposes and motivations. Some are interested in the business aspects of the architecture – in Objectives and Goals, Projects and Programs and Business Processes for example. Others have an interest in technical matters – Technology and Infrastructure for example. Each of these requirements can be met from the same Enterprise Architecture. Since the members, structures, forests, trees and relationships are common to all views, there is the assurance that all views are consistent being drawn from the same population in the same level of update.
Four popular views are the Business, Application, Information and Technology views (BAIT for short) and these are fully supported in our architectural model.
The BUSINESS VIEW describes how the business works. It includes broad business strategies along with plans for moving the enterprise from its current to its future state (as-is to to-be).
The APPLICATION VIEW defines the enterprise’s application portfolio. Typically it would be based around the Applications structure and represents the services, information, and functionality that cross organizational boundaries, linking users of different skills and functions to achieve common business objectives.
The INFORMATION VIEW describes the data the enterprise needs to run its business processes and operations. The information view may describe how data is bound into the work flow, including structured data stores such as databases and unstructured data stores such as documents, spreadsheets, and presentations that exist throughout the enterprise.
The TECHNOLOGY VIEW lays out the hardware and software supporting the enterprise and provides a logical description of infrastructure and system components that are necessary to support the application and information views. It defines the set of technology standards and services needed to execute the business mission.
An important point is that these views, and others, are drawn from the same members, relationships, structures, trees and forests. In other words, the views are consistent with each other and data mismatches, and resulting bad decisions, are avoided.
The Minimum Essential Models, described later, are also views drawn from the same, consistent data as are the key architectural scenarios in our Business Patterns.
Environment Definition – Scope and Boundaries
The development of an Enterprise Architecture is not an unconstrained activity without limit to its range and depth. It is normally limited to the boundaries of the enterprise it describes. Nor is it a parochial activity, limited to a couple of departments or even a division of the enterprise.
An Enterprise Architecture must address a coherent and cohesive business environment and the issue is how this should be defined. We suggest the following parameters may be used in setting the scope and boundaries of any EA project:
Domains: The architecture should be completely contained within a business domain such as healthcare or social care. Any overlaps between domains in terms of functionality or data usage should be handled by means of a common domain.
Business Programs and Phases: The architecture should address one or more complete business programs and all their phases. By business program we mean programs of work aimed at business development or improvement and involving multiple business functions.
Business Processes and Data Scope: The scope of the architecture should include complete business processes and all data creating and updating functions within the domain.
These parameters now provide the definition and identification of a “tree”. The Tree ID is composed from the following elements:
- Program Phase (Domain + Program ID + Phase Code)
- Architectural State (Context + Timeframe)
- Structure Code
- Tree Description
- Forest Name
Highlight
There may seem to be a contradiction here, in so far as we seem to be describing large-scale enterprise architecture projects even though our toolkit is aimed at the single architect or analyst. However, even small-scale projects should be governed by these principles.
We allocate a member to a tree. Since members are held in a “pool”, we can assign a member to more than one tree. For example, a member may be unaltered between “as-is” and “to-be” states and thus is included in both trees. It is the analysis of “tree populations” that provides the basis for migration planning in that it shows which members are new, which are unchanged and which are discontinued. The differencing of two trees (the “delta”) forms an important input to change management programs, e.g. it specifies the changes in Organization or Business Processes, Technology, and so on. This is an important usage of Enterprise Architecture and is a highly useful tool even for the single-architect user.
The Myth of Enterprise-wide, Project-deep Projects
Having defined our scope and boundaries, it is not reasonable to expect an Enterprise Architecture to specify all levels of detail from business processes through technology selections and application functionality for individual projects (project-deep) across the entire enterprise (enterprise-wide) in one massive and collective effort.
Yet, this is what many Enterprise Architecture projects attempt. They use armies of architects and consultants who closet themselves away for months at a time and then deliver “the answer.” The problem with this approach is that the answer is usually out of date by the time it is delivered. In attempting to define all things to all people, this approach severely compromises the value of any results.
Our development method recognizes these limitations and takes appropriate measures to build the Enterprise Architecture in successive iterations. This allows the architecture to provide business value quickly, to gather feedback from actual use, and to make adjustments through subsequent iterations. Following the initial iteration you can state that you “have an Enterprise Architecture.” However, the moment this is stated as a fact, the work is just beginning.
The iterative process is supported by the concept of the Minimum Essential Model (MEM). It is only necessary to build the part of the model needed for the immediate problem in hand, and only to the depth required to express meaningful relationships between the Structures of the Enterprise Architecture. Then you can move on to the next MEM.
Controlling the Project
We need to translate the scope and boundary definitions into a clear plan. This need not be a complex affair. We need to establish our priorities before embarking on a project, in either Explore Mode or Enterprise Mode, and establish our deliverables, timeline and resource requirements.
As a single user project, we can use simple project methods – perhaps a basic Gantt chart is enough. Beyond that, when working on a multi-architect project, the standard project management procedures and tools of the enterprise should suffice.
Integration in PVS:
Tables, Types, and Model Checking*
Sam Owre, John Rushby, Natarajan Shankar
Computer Science Laboratory, SRI International,
Menlo Park, CA 94025, USA
Abstract. We have argued previously that the effectiveness of a verification system derives not only from the power of its individual features for expression and deduction, but from the extent to which these capabilities are integrated: the whole is more than the sum of its parts [20, 21]. Here, we illustrate this thesis by describing a simple construct for tabular specifications that was recently added to PVS. Because this construct integrates with other capabilities of PVS, such as typechecker-generated proof obligations, dependent typing, higher-order functions, model checking, and general theorem proving, it can be used for a surprising variety of purposes. We demonstrate this with examples drawn from hardware division algorithms and requirements specifications.
1 Introduction
Persuaded by the advocacy of David Parnas and others [15], we recently added a construct for tabular specification to PVS [12]. The construct generates proof obligations to ensure that the conditions labeling the rows and columns are disjoint and exclusive. This simple capability has been found useful by colleagues at NASA and Lockheed-Martin, who applied it in requirements analysis for Space Shuttle flight software [2, 18]. The capability becomes rather richer in the presence of dependent typing, and in this form it has been used to verify the accessible region in a quotient lookup table for SRT division [19]. When combined with other features of the PVS specification language, the table construct provides some of the attractive attributes of the TableWise [8] and SCR [6] specification methods. Because these constructions are performed in the context of a full verification system, we are able to use its theorem prover and model checker to establish invariant and reachability properties of the specifications concerned, and are able also to compose specifications described by separate tables and to establish refinement and equivalence relations between state machines specified in this manner.
* This work was supported by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under contract F49620-95-C0844 and by the National Science Foundation under contract CCR-9509931.
2 Basic Tables
Tables can be a convenient way to specify certain kinds of functions. An example is the function \( \text{sign}(x) \), which returns \([-1, 0, 1]\) according to whether its integer argument is negative, zero, or positive. As a table, this can be specified as follows.
\[
\begin{array}{c|c|c}
x < 0 & x = 0 & x > 0 \\
\hline
-1 & 0 & +1
\end{array}
\]
This is an example of a piecewise continuous function that requires definition by cases, and the tabular presentation provides two benefits.
- It makes the cases explicit, thereby allowing checks that none of them overlap and that all possibilities are considered.
- It provides a visually attractive presentation of the definition that eases comprehension.
The first of these benefits is a semantic issue that is handled in PVS by the COND construct; the second is a syntactic issue that is handled in PVS by the TABLE construct, which builds on COND.
Before we introduce these constructs, we should mention that the PVS specification language is a higher-order logic that supports both predicate subtypes and dependent types, and that the system provides strong assurances that definitional constructs (such as recursive function definitions) are conservative [13, 14]. Some of the checks necessary to ensure type-correctness and conservative extension are not algorithmically decidable; in these cases, PVS generates Type Correctness Conditions (TCCs), which are obligations that must be discharged by theorem proving. PVS provides a powerful interactive theorem prover that includes decision procedures for linear arithmetic and other theories, and its default strategies are often able to discharge TCCs automatically; in more difficult cases, the user must guide the theorem prover interactively. Specifications with false TCCs are considered malformed and no meaning is ascribed to them. PVS allows proof obligations to be postponed, but keeps track of all unsatisfied obligations; a specification is not considered fully typechecked, and its theorems are considered provisional, until all TCCs have been proved.
2.1 The PVS COND Construct
Standard PVS language constructions for specification by cases are the traditional IF-THEN-ELSE, and a pattern-matching CASES expression for enumerating over the constructors of an abstract data type. A COND construct has recently been added to these. Its general form is shown in (1) below, where the \( c_i \) are Boolean expressions and the \( e_i \) are values of some type \( t \). (PVS has subtypes and overloading, so the types of the individual \( e_i \) must be “unified” to yield the common supertype \( t \).) The keyword ELSE can be used in place of the final condition \( c_n \). The construct can appear anywhere that a value of type \( t \) is allowed.
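Schematically, the general form (1) is as follows (this reconstructs the display from the description above; it matches the concrete `sign_cond` example below).

```
COND c_1 -> e_1,
     c_2 -> e_2,
     ...
     c_n -> e_n
ENDCOND
```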
Exactly one of the $c_i$ is required to be true; because PVS already supports proof obligations in the form of TCCs, it is easy to enforce this requirement by causing each COND to generate two TCCs as follows.
- **Disjointness** requires that each distinct $c_i$, $c_j$ pair is disjoint.
- **Coverage** requires that the disjunction of all the $c_i$ is true.
The coverage TCC is suppressed if the ELSE keyword is used; also the $c_i$, $c_j$ component of the disjointness TCC is suppressed when $e_i$ and $e_j$ are syntactically identical.
A COND has meaning only if its TCCs are true, in which case the general COND expression of (1) is assigned the same meaning as (and is treated internally as) the IF-THEN-ELSE construction shown in (2) below. Notice that the condition \( c_n \) does not appear in the IF-THEN-ELSE translation: if this condition was given as an explicit ELSE in the COND, then the “fall through” default is exactly what is required; otherwise, the coverage TCC ensures that \( c_n \) is the negation of the disjunction of the other \( c_i \), and the “fall through” is again correct. Because COND is treated internally as an IF-THEN-ELSE, reasoning involving COND requires no extensions to the PVS theorem prover.
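The translation (2) therefore has the following shape, with the final condition dropped in favor of the fall-through default:

```
IF    c_1     THEN e_1
ELSIF c_2     THEN e_2
...
ELSIF c_{n-1} THEN e_{n-1}
ELSE  e_n
ENDIF
```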
Using COND, we can specify the sign function as follows.
```plaintext
signs: TYPE = { x: int | x >= -1 & x <= 1 }
x: VAR int
sign_cond(x): signs = COND
x < 0 -> -1,
x = 0 -> 0,
x > 0 -> 1
ENDCOND
```
This generates the following TCCs, both of which are discharged by PVS’s default strategy for TCCs in fractions of a second.
% Disjointness TCC generated (line 10) for
% COND x < 0 -> -1, x = 0 -> 0, x > 0 -> 1 ENDCOND
sign_cond_TCC2: OBLIGATION (FORALL (x: int):
NOT (x < 0 AND x = 0)
AND NOT (x < 0 AND x > 0)
AND NOT (x = 0 AND x > 0));
% Coverage TCC generated (line 10) for
% COND x < 0 -> -1, x = 0 -> 0, x > 0 -> 1 ENDCOND
sign_cond_TCC3: OBLIGATION (FORALL (x: int): x < 0 OR x = 0 OR x > 0);
The variant specification that uses an ELSE in place of the condition \( x > 0 \) generates a simpler disjointness TCC (just the first of the three conjuncts in sign_cond_TCC2), and no coverage TCC.
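For concreteness, the ELSE variant looks like this (the name `sign_cond_else` is ours, not from the paper):

```
sign_cond_else(x): signs = COND
  x < 0 -> -1,
  x = 0 -> 0,
  ELSE  -> 1
ENDCOND
```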
2.2 The PVS TABLE Construct
PVS has TABLE constructs that provide a fairly attractive input syntax for one- and two-dimensional tables and that are LaTeX-printed as true tables (the Parnas_Fig1 example that appears later illustrates this). Their semantic treatment derives directly from the COND construct.
2.2.1 One-Dimensional Tables. The simplest tables in PVS are one-dimensional. In their vertical format, they simply replace the -> and , of COND cases by | and ||, respectively, and introduce each case with |; they also add a final || and change the keyword from COND to TABLE. The sign example is therefore transformed from a COND to the TABLE shown below. Note that the horizontal lines are simply comments (comments in PVS are introduced by %).
```
sign_vtable(x): signs = TABLE
%------------------%
| x < 0 | -1 ||
%------------------%
| x = 0 | 0 ||
%------------------%
| x > 0 | 1 ||
ENDTABLE %------------------%
```
One-dimensional horizontal tables present the information in a different order, and use |[...]| to alert the parser to this fact, as illustrated below.
```
sign_htable(x): signs = TABLE
%------------------%
| [ x<0 | x=0 | x>0 ]
%------------------%
| -1 | 0 | 1 ||
ENDTABLE %------------------%
```
Both these tabular specifications are equivalent to sign_cond, generate exactly the same TCCs, and are treated the same in proofs. Notice that tables require no extensions to the PVS theorem prover, and the full repertoire of proof commands may be applied to constructions involving tables—for example, it is possible to rewrite with an expression whose right hand side is a table. Note, however, that PVS remembers the syntactic form used in a specification and always prints it out the same way it was typed in; thus, the prover will print a table as a table, even though it is treated semantically as a COND (which is itself treated as an IF-THEN-ELSE). Of course, the special syntactic treatment is lost once a proof step (e.g., one that “lifts” IF-THEN-ELSE constructs to the top level) has transformed the structures appearing in a sequent.
2.2.2 Blank Entries. Suppose we reformulated our sign example to take a natural number, rather than an integer, as its argument. The \( x < 0 \) case can no longer arise and can be omitted from the table. In some circumstances, however, we may wish to make it patently clear that this case should not occur and we can do this by including the case, but with a blank entry for the value of the expression.
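For instance, the horizontal form of the sign table over nat, with a blank entry for the impossible \( x < 0 \) case, is:

```
sign_htable(x: nat): signs = TABLE
%---------------------%
  |[ x<0 | x=0 | x>0 ]|
%---------------------%
  |      |  0  |  1  ||
ENDTABLE %------------%
```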
The presence of blank entries changes the coverage TCC: this must now ensure that the disjunction of all the conditions with non-blank entries is true. Notice this requires a TCC to be generated even when an ELSE case is present.
In one-dimensional tables, blank entries can always be removed by simply deleting the entire case; this is not so with two-dimensional tables, however, where the accessibility of an entry may depend on the conditions labeling both its row and column. We describe an example later.
2.2.3 Enumeration Tables. These are a syntactic variation that provides a more succinct representation when the conditions of a table are all of the form \( x = \text{expression} \), for some single identifier \( x \). In an enumeration table, the identifier concerned follows the TABLE keyword, and the conditions of the table simply list the expressions; a two-dimensional example appears below in Section 2.2.4.
Enumeration tables are an important special case because their TCCs are often easily decidable, and this allows some important optimizations. Observe that the number of conjuncts in a disjointness TCC grows as the square of the number of conditions; when enumerating over the values of an enumeration type, it is not uncommon to have tens or hundreds of conditions, and thus thousands of conjuncts in the disjointness TCC. It is unwieldy and slow to display such massive TCCs to the user. PVS therefore recognizes this case and treats it specially: when the expressions in an enumeration table are all constructors of a single datatype (and the values of an enumeration type are exactly these), the disjointness and coverage conditions are trivially decidable and are checked internally by the typechecker, which also translates such tables into a datatype CASES expression, rather than a COND. (The prover can provide greater automation for the CASES expression; the user could use a CASES construct directly in the one-dimensional case, so the main benefit of providing the translation automatically is with two-dimensional tables.) Another special case arises when the expressions of an enumeration table are all literal values of some type (the usual case is values from some range of integers); again, the disjointness TCC is easily decidable and can be checked internally by the typechecker (the coverage TCC can require theorem proving and is generated normally). A table is immediately flagged as illegal if such internal checks reveal a false TCC.
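As a small illustration (the `colour` type and `next` function are our own, not from the paper), a one-dimensional enumeration table over the constructors of an enumeration type might be written:

```
colour: TYPE = {red, amber, green}

next(c: colour): colour = TABLE c
  | red   | green ||
  | amber | red   ||
  | green | amber ||
ENDTABLE
```

Because red, amber, and green are exactly the constructors of the enumeration type, the disjointness and coverage checks are discharged internally by the typechecker and no TCCs are displayed.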
2.2.4 Two-Dimensional Tables. Two-dimensional tables are treated as nested COND (or CASES) constructs; more particularly, the columns are nested within the rows. Here is a trivial example of a two-dimensional enumeration table in which the rows enumerate the values of a type state and the columns enumerate the values of a type input.
```
example(state, input): some_type = TABLE state, input
%--------------------%
       |[  x  |  y  ]|
%--------------------%
|  a   |  p   |  q  ||
%--------------------%
|  b   |  q   |  q  ||
ENDTABLE %-----------%
```

This translates internally to the following.

```
COND state = a -> COND input = x -> p, input = y -> q ENDCOND,
     state = b -> COND input = x -> q, input = y -> q ENDCOND
ENDCOND
```
Notice that this translation causes disjointness and coverage TCCs for the columns to be generated several times—once for each row. For example, the coverage TCCs generated for the two inner CONDs above have the following form.
- `coverage_a: OBLIGATION state = a IMPLIES input = x OR input = y`
- `coverage_b: OBLIGATION state = b IMPLIES input = x OR input = y`
These appear redundant, so we might be tempted to use the following, apparently equivalent, translation.
```plaintext
LET x1 = COND input = x -> p, input = y -> q ENDCOND,
    x2 = COND input = x -> q, input = y -> q ENDCOND
IN  COND state = a -> x1, state = b -> x2 ENDCOND
```
This generates the following single, simple coverage TCC for the columns.
- `coverage_TCC: OBLIGATION input = x OR input = y`
The problem with this translation is that there may be subtype TCCs generated from the terms corresponding to \( p \) and \( q \) that must be conditioned on the expressions corresponding to \( a \) and \( b \) in order to be provable. Here is an example due to Parnas [15, Figure 1] that illustrates this. We exhibit this example in the form output by the PVS \LaTeX-printer.
\[
\texttt{Parnas\_Fig1}(y, x: \text{real}): \text{real} =
\begin{array}{c|c|c|c}
 & y = 27 & y > 27 & y < 27 \\
\hline
x = 3 & 27 + \sqrt{27} & 54 + \sqrt{27} & y^2 + 3 \\
\hline
x < 3 & 27 + \sqrt{-(x-3)} & y + \sqrt{-(x-3)} & y^2 + (x-3)^2 \\
\hline
x > 3 & 27 + \sqrt{x-3} & 2y + \sqrt{x-3} & y^2 + (3-x)^2 \\
\hline
\end{array}
\]
The subtype constraint on the argument to the square root function (namely, that it be nonnegative) generates TCCs in the second and third rows that are true only when the corresponding row constraints are taken into account. The LET form of the translation loses this information. The advantage of the simple translation, which is the one used in PVS, is that it provides more precise (i.e., weaker but still adequate) TCCs, and therefore admits more specifications.
2.3 Applications
The PVS table constructs described above have been used in several applications performed by ourselves and others—indeed, some elements in the PVS treatment of tables (notably, blank entries, and the optimizations for enumeration tables) evolved in response to these applications.
In one application, PVS is being employed in analysis of new requirements documented in “Change Requests” (CRs) for the flight software of the Space Shuttle. This work is undertaken as part of a project involving staff from several NASA Centers (Langley, Johnson, and JPL) and Requirements Analysts (RAs) from the team at Lockheed Martin (formerly IBM) that develops this software. Running alongside what is generally considered an exemplary (though manual) process for requirements review, this experiment provides useful data on the effectiveness of automated formal analyses [2, 18].
One of the CRs focused on improving the display of flight information to Shuttle pilots guiding the critical initial bank onto the “Heading Alignment Cylinder” (HAC) during descent. The CR documented key portions of the required control logic in tabular form, and was readily formalized using PVS tables; a small representative example is reproduced in Appendix A. Attempts to discharge the TCCs generated by these tables immediately indicated the need to document implicit “domain knowledge,” including constraints such as “Major Mode = 305 or 603 implies phase <= 3,” and “wowlon can be true only if Major Mode = 305 or 603.” Such domain knowledge was incorporated into the specification using dependent predicate subtyping and was gradually extended and refined through an iterative process that relied on the automated strategies for proving TCCs that are built into PVS.
Observe that proofs of the HAC TCCs could be automated because necessary domain knowledge was supplied through the type system, using predicate and dependent subtyping. For example, the constraints mentioned above were specified as follows (iphase and wowlon are record fields; notice that the latter has a type that is a subtype of bool).
```
iphase: {p: iphase | (mode = mm602 => p >= 4) AND
                     ((mode = mm305 OR mode = mm603) => p <= 3)},
wowlon: {b: bool | b => (mode = mm305 OR mode = mm603)}
```
The PVS prover can make very effective and automated use of information supplied in this way; a system lacking such a rich type system would probably require an interactive proof to provide the domain knowledge in the form of axioms. (Of course, PVS’s decision procedures for linear arithmetic also contributed to the automation of these proofs.)
After incorporating all constraints identified by the RAs, it was found that the conditions for several rows in one table still overlapped, and this led to identification of a missing conjunct in some of the conditions. In addition to the discovery of this error, the requirements analysts felt that explicit identification and documentation of the domain knowledge was a valuable product of the analysis [18].
Another application for PVS tables has been in verification of fast hardware division algorithms. The notorious Pentium Fdiv bug, which is reported to have cost Intel $475 million, was due to bad entries in the quotient lookup table for an SRT divider. Triangular-shaped regions at top and bottom of these tables are never referenced by the algorithm; the Pentium error was that certain entries believed to be in this inaccessible region, and containing arbitrary data, were, in fact, sometimes referenced during execution [16].
An SRT division algorithm similar to that used in the Pentium has been specified and verified in PVS [19]. The quotient lookup table for this algorithm was specified as a PVS table (reproduced in Appendix B) which uses blank entries to indicate those regions of the table that are believed to be inaccessible. PVS generates 23 coverage TCCs to ensure that these entries will never be encountered; verification of the algorithm (which can be done largely automatically in PVS) then ensures that all the nonblank table entries are correct. Injection of an error similar to that in the Pentium leads to a failed TCC proof whose final sequent is a counterexample that highlights the error [19]. Miner and Leathrum have used this capability of PVS to develop several new SRT tables [11], each in less than three hours.
3 Decision Tables
Decision tables associate Boolean expressions with the “decision” or output to be generated when a particular expression is true. There are many kinds of decision tables; the ones considered here are from a requirements engineering methodology developed for avionics systems by Lance Sherry of Honeywell [22], and given mechanized support in TableWise, developed by Hoover and Chen at ORA [8]. The following is a simple decision table (taken from [8, Table 2]).
| Input Variables                       | Takeoff | Climb | Climb_Int_Level | Cruise |
|---------------------------------------|---------|-------|-----------------|--------|
| Flight_phase                          | climb   | climb | climb           | climb  |
| AC_Alt > 400                          | true    | true  | true            | *      |
| compare(AC_Alt, Acc_Alt)              | LT      | LT    | GE              | GE     |
| Alt_Capt_Hold                         | false   | true  | true            | true   |
| compare(Alt_Target, prev_Alt_Target)  | *       | GT    | *               | GT     |
This table describes the conditions under which each of the four “operational procedures” Takeoff, Climb, Climb_Int_Level, and Cruise should be selected. Each of the columns beneath the name of an operational procedure gives a conjunction of conditions under which that procedure should be selected, where * indicates “don’t care.” For example, the third and fourth columns in the body of the table indicate that the operational procedure Climb should be used if the Flightphase is climb, AC_Alt is greater than or equal to Acc_Alt, and either Alt_Capt_Hold is false, or it is true and Alt_Target is greater than prev_Alt_Target. The columns forming a subtable beneath each operational procedure are similar to the AND/OR tables used in the RSML notation of Leveson and colleagues [10].
The PVS TABLE construct cannot represent this type of decision table directly: we need some additional mechanism to represent a conjunction such as

\[
(\mathit{Flightphase} = \mathit{climb}) \land (\mathit{AC\_Alt} \geq \mathit{Acc\_Alt}) \land \lnot \mathit{Alt\_Capt\_Hold}
\]

by the compact list given in the third column of the table.
Now the list (climb, *, GE, false, *) from that column can be interpreted as the argument list to a function \(X\) that treats the first element as a function to be applied to Flightphase, the second as a function to be applied to the expression AC_Alt > 400 and so on, as follows.
```plaintext
X(a, b, c, d, e): bool =
  a(Flightphase) & b(AC_Alt > 400) & c(AC_Alt, Acc_Alt)
  & d(Alt_Capt_Hold) & e(Alt_Target, prev_Alt_Target)
```
We can then use this construction to specify the third column of the decision table as the following row from a vertical one-dimensional PVS table; the complete table is shown in Appendix C (taken from [12], where full details may be found).
```
%--------------------------------------------%
| X(climb?, * , GE , false , * ) | Climb    ||
%--------------------------------------------%
```
The functions appearing in the argument list to \(X\) are defined as follows (note that * is overloaded and that climb? is a recognizer for an enumerated type).
```plaintext
q: VAR bool
false(q): bool = NOT q
GE(x, y): bool = x >= y
*(q): bool = TRUE
```
The disjointness TCC from this table immediately identifies two overlapping cases, while the coverage TCC identifies four that are missing; the four unproved sequents\(^3\) from the coverage TCC pinpoint the missing cases.
\(^3\) PVS uses a sequent calculus presentation whose interpretation is that the conjunction of formulas above the turnstile line (|-------) should imply the disjunction of the formulas below the line. The appearance of a formula on one side of the line is equivalent to its negation on the other, and this structural rule is used to eliminate top-level negations. Names with embedded ! characters are Skolem constants derived from variables with the same root name.
Unproven sequents such as these, with no formulas above the line, indicate the failure to select an operational procedure when all the formulas below the line are false. One of them, for example, identifies the failure to consider the case when AC_Alt is not greater than 400, Alt_Capt_Hold is false, and AC_Alt is less than Acc_Alt. The six flaws identified in this way are identical to those found in this example by the special-purpose tool TableWise [8].
Unlike PVS, TableWise presents the anomalies that it discovers in a tabular form similar to that of the original decision table; TableWise can also generate executable Ada code and English language documentation from decision tables. These benefits are representative of those that can be achieved with a special-purpose tool. On the other hand, PVS's more powerful deductive capabilities also provide benefits. For example, PVS can settle disjointness and coverage TCCs that depend on properties more general than the simple Boolean and arithmetic relations built into TableWise and similar tools. The limitations of these tools are illustrated by Heimdahl [3], who describes spurious error reports when a completeness and consistency checking tool for the AND/OR tables of RSML (developed with Leveson [5]) was applied to TCAS II. These spurious reports were due to the presence of arithmetic and defined functions whose properties are beyond the reach of the BDD-based tautology checker incorporated in the tool. As Heimdahl notes [3, page 81], a theorem prover is needed to settle such properties; he and Czerny are now experimenting with PVS for this purpose [4].
A theorem prover such as PVS can also examine questions beyond simple completeness and consistency. For example, the incompleteness and inconsistencies detected in the example decision table can be remedied by adding an ELSE clause and by replacing the second and third “don’t care” entries under Climb_Int_Level by false and LT, respectively. The TCC generated by this modified specification is proved automatically by PVS, so we may proceed to examine general properties of the decision table. To check that the specification matches our intent, we can use conjectures that we believe to be true as “challenges.” For example, we may believe that when AC_Alt = Acc_Alt, the operational procedure selected should match the Flightphase. We can check this in the case that the Flightphase is cruise using the following challenge.
```plaintext
test: THEOREM AC_Alt = Acc_Alt =>
  decision_table(cruise, AC_Alt, Acc_Alt,
                 Alt_Target, prev_Alt_Target, Alt_Capt_Hold) = Cruise
```
This is easily proved by PVS’s standard (grind) strategy. However, when we try the corresponding challenge for the case where Flightphase is climb, we discover that the conjecture is not proved, and actually is false in the case where Alt_Capt_Hold is true and Alt_Target <= prev_Alt_Target, thereby exposing a flaw in either our expectations or our formalization of the specification. Mechanically supported challenges of this kind illustrate the utility of undertaking the analysis of tabular specifications in a context that provides theorem proving. Special-purpose tools for tabular specifications generally provide only completeness and consistency checking, and perhaps some form of simulation. Such tools would help identify the anomaly just described only if we happened to choose to simulate a case where Alt_Capt_Hold is true and Alt_Target <= prev_Alt_Target.
4 Transition Relations and Model Checking
Decision tables provide a way to specify the selection of operational procedures to be executed at each step. However, the model of computation that repeatedly performs these selection and execution steps is understood informally and is not explicit in the PVS specifications. Consequently, it is not possible to pose and examine overall system properties (such as whether a certain property is invariant, or another is reachable) without formalizing more of the underlying model of computation. Transition relations provide a way to do this, and the SCR method is a way to present such relations in a tabular manner [7].
The following is a typical SCR “mode transition table” (taken from Atlee and Gannon [1, Table 2]). This system, a simplified automobile cruise control, has four modes (off, inactive, cruise, and override), and the table describes the conditions under which it makes transitions from one mode to another.
| Current Mode | Ignited | Running | Toofast | Brake | Activate | Deactivate | Resume | Next Mode |
|--------------|---------|---------|---------|-------|----------|------------|--------|-----------|
| Off          | @T      | -       | -       | -     | -        | -          | -      | Inactive  |
| Inactive     | @F      | -       | -       | -     | -        | -          | -      | Off       |
|              | T       | T       | -       | F     | @T       | -          | -      | Cruise    |
| Cruise       | @F      | -       | -       | -     | -        | -          | -      | Off       |
|              | -       | @F      | -       | -     | -        | -          | -      | Inactive  |
|              | -       | -       | @T      | -     | -        | -          | -      | Inactive  |
|              | -       | -       | -       | @T    | -        | -          | -      | Override  |
|              | -       | -       | -       | -     | -        | @T         | -      | Override  |
| Override     | @F      | -       | -       | -     | -        | -          | -      | Off       |
|              | -       | @F      | -       | -     | -        | -          | -      | Inactive  |
|              | T       | T       | -       | F     | @T       | -          | -      | Cruise    |
|              | T       | T       | -       | F     | -        | -          | @T     | Cruise    |
An @T entry indicates the case where the condition labeling that column changes from false to true, while @F indicates the opposite transition; a T entry indicates the case where the condition labeling that column remains true through the transition, F indicates the case where it remains false, and a dash indicates “don’t care.” Thus the third row indicates that the system transitions from the Inactive mode to the Cruise mode if Activate goes true, while Ignited and Running remain true and Brake remains false.
To model this type of specification in PVS, we specify a condition as a predicate on inputs to the system; then atT (which represents @T) is a higher-order function that takes a condition and returns a relation on pairs of inputs (namely, one that is true when the condition is false when applied to the first and true when applied to the second). The constructions for atF (representing @F), T, F, and dc (representing “don’t care”) are specified similarly.
```plaintext
scr[input, mode, output: TYPE]: THEORY
BEGIN
  condition: TYPE = pred[input]

  p, q: VAR input
  P: VAR condition

  atT(P)(p, q): bool = NOT P(p) & P(q)    % @T(P)
  atF(P)(p, q): bool = P(p) & NOT P(q)    % @F(P)
  T(P)(p, q): bool = P(p) & P(q)
  F(P)(p, q): bool = NOT P(p) & NOT P(q)
  dc(P)(p, q): bool = true                % don't care
  ...
```
With these constructions, the mode transition table shown earlier can be represented in PVS as follows (for brevity, we show only the transitions from the **Inactive** mode, corresponding to the second and third rows of the table; the complete table is shown in Appendix D, and full details are given in [12]).
```plaintext
event_constructor: TYPE = [condition -> event]
EC: TYPE = event_constructor

PC(A, B, C, D, E, F, G)(a, b, c, d, e, f, g)(p, q): bool =
  A(a)(p, q) & B(b)(p, q) & C(c)(p, q) & D(d)(p, q)
  & E(e)(p, q) & F(f)(p, q) & G(g)(p, q)
% Note: PC stands for "pairwise conjunction"

original(s: modes, (p, q: monitored_vars)): modes =
  LET
    x = (ignited, running, toofast, brake, activate, deactivate, resume),
    X = (LAMBDA (a, b, c, d, e, f, g: EC): PC(a, b, c, d, e, f, g)(x)(p, q))
  IN TABLE s
  ...
  | inactive | TABLE
  %---------------------------------------------%
      | X(atF, dc, dc, dc, dc, dc, dc) | off      ||
  %---------------------------------------------%
      | X( T,  T, dc,  F, atT, dc, dc) | cruise   ||
  %---------------------------------------------%
      | ELSE                           | inactive ||
      ENDTABLE ||
  ...
```
Typechecking this specification generates several TCCs; those for the transitions from mode `inactive` are proved automatically, but those from modes `cruise` and `override` are not. These unproved TCCs yield subgoals that pinpoint problems in the specification, rather in the way that TCCs identified problems in the decision table earlier. For example, the successor to `cruise` mode is ambiguous in the case where `toofast` and `deactivate` both go from `false` to `true`; the first of these causes a transition to `inactive` mode, while the second causes a transition to `override` mode. Repairing these flaws requires several changes to the table and, as with the Space Shuttle example, adding some “domain knowledge” (such as that `toofast` implies `running`).
Because a mode transition table specifies how the system proceeds from one mode to another, we can examine properties of the computations that this induces. To do this, we first need to derive the transition relation on states that is implicit in a mode table. We identify the instantaneous state of the system with its current mode and the current values of its input variables. We specify this as a record in PVS; a transition relation is a predicate on pairs of such states.
```
state: TYPE = [# mode: mode, vars: input #]
transition_relation: TYPE = pred[[state, state]]
```
Recall that a mode transition table has the following signature.
```
mode_table: TYPE = [mode, input, input -> mode]
```
We can therefore define a function `trans` that takes a mode table and returns the corresponding state transition relation.
```
trans(mt: mode_table): transition_relation =
  (LAMBDA (s, t: state): mode(t) = mt(mode(s), vars(s), vars(t)))
```
The branching time temporal logic CTL provides a convenient way to specify certain properties of the computations induced by a transition relation, and PVS can automatically verify CTL formulas for transition relations over finite types by using a decision procedure for Park’s µ-calculus to provide CTL model checking [17]. An example of a property about this specification that can be specified in CTL is the following invariant.
In `cruise` mode, the engine is `running`, the vehicle is not going `toofast`, the `brake` is not on, and `deactivate` is not selected.
We can examine this property with PVS in the following manner.
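A minimal sketch of such a check follows; the theory name `cruise_check`, the `init` predicate, and the exact formula shapes are our assumptions, while `cruise_tab`, `ctlops`, `trans`, `deterministic`, and `safe4` are named in the surrounding text.

```
cruise_check: THEORY
BEGIN
  IMPORTING cruise_tab, ctlops[state]

  % initial states: mode off, engine not ignited (assumed form)
  init(s: state): bool = mode(s) = off AND NOT ignited(vars(s))

  % safe4: in every reachable cruise-mode state, running holds while
  % toofast, brake, and deactivate are all false
  safe4: THEOREM FORALL (s: state): init(s) IMPLIES
    AG(trans(deterministic),
       LAMBDA (t: state): mode(t) = cruise IMPLIES
         (running(vars(t)) AND NOT toofast(vars(t)) AND
          NOT brake(vars(t)) AND NOT deactivate(vars(t))))(s)
END cruise_check
```

A theorem of this shape is then submitted to the model-check proof command.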
Here, `cruise_tab` is the PVS theory that defines the mode table `deterministic` (formed by correcting the errors found in the table `original` discussed above), and `ctlops` is the PVS theory (from the library MU) that defines the CTL operators. The function `trans` introduced above is applied to the mode table `deterministic` to construct a transition relation (also called `trans`). We characterize the initial state as one whose mode is `off` and in which the engine is not `ignited`, and specify (as `safe4`) the invariant mentioned above (where `AG` is the CTL operator meaning “in every reachable state”). Another plausible invariant property is specified by the formula `safe5`. The PVS `model-check` command verifies formula `safe5` but fails on `safe4`. This prompts closer examination of the specification and reveals that, although `cruise` mode is exited when `toofast` goes `true`, the transitions into `cruise` mode neglect to check that `toofast` is `false` before making the transition. The correction is to add the condition `F(toofast)` to the three transitions into `cruise` mode, and PVS is able to verify the formula `safe4` for the corrected specification.
Similar to the TableWise tool for decision tables, Heitmeyer and colleagues have developed the SCR* tool for checking consistency of SCR tabular specifications [6], while Atlee and colleagues have developed a translator that turns SCR tables into a form acceptable to the SMV model checker [23]. These special-purpose tools have the advantage of being closely tailored to their intended uses and are scalable to larger examples than is convenient for the PVS treatment. On the other hand, the PVS treatment required no customized development: it simply builds on capabilities such as tables, higher-order logic, theorem proving, and model checking that are already present in PVS.
Furthermore, the PVS treatment can draw on the full resources of the language and system to combine methods in novel ways, or to conduct customized analyses. For example, we have used a variant of PVS’s treatment of SCR tables to specify the nondeterministic mode transitions of interacting “climb” and “level” components in the requirements for a simple “autopilot” [12, section 4.3]. The transitions of the components were specified as separate tables and combined by disjunction (representing interleaving concurrency). The combined specification was then tested against a number of challenge properties using model checking. A deterministic “implementation” specification of the autopilot was constructed from two “phases” using relational composition to specify sequential execution. This specification was also tested against the challenge properties using model checking. Finally, model checking was used to show that the behaviors induced by the requirements and the implementation specifications are equivalent (this property can be expressed as a CTL formula).
5 Conclusion
We have described PVS's capabilities for representing tabular specifications, illustrated how these interact synergistically with other capabilities such as typechecker-generated proof obligations, dependent typing, higher-order functions, model checking, and general theorem proving, and described some applications. We demonstrated how these capabilities of the PVS language and verification system can be used in combination to provide customized support for existing methodologies for documenting and analyzing requirements. Because they use only the standard capabilities of PVS, users can adapt and extend these customizations to suit their own needs.
The generic support provided for tables and for model checking in PVS may be compared with the more specialized support provided in tools such as ORA's TableWise [8], NRL's SCR* [6, 7], and Leveson and Heimdahl's consistency checker for RSML [5]. Dedicated, lightweight tools such as these are likely to be superior to a heavyweight, generic system such as PVS for their chosen purposes. Our goal in applying PVS to these problems is not to compete with specialized tools but to complement them. The generic capabilities of PVS can be used to prototype some of the capabilities of specialized tools (this was done in the development of TableWise), and can also be used to supplement their capabilities when comprehensive theorem proving and model checking power is needed.
Acknowledgments
Examples undertaken by Ricky Butler, Ben Di Vito, and Paul Miner of NASA Langley Research Center, Steve Miller of Collins Commercial Avionics and Harald Rueß of Universität Ulm, and suggestions by Connie Heitmeyer of the Naval Research Laboratory, were instrumental in shaping the PVS table constructs. Comments by the anonymous referees improved the presentation of this paper.
References
Papers by SRI authors are generally available from http://www.csl.sri.com/fm.html.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government.
Appendix
A HAC Requirements Table Expressed in PVS
```plaintext
switch_position: TYPE = {low, medium, high}
major_mode: TYPE = {mm301, mm302, mm303, mm304, mm305, mm602, mm603}
iphase: TYPE = {n: nat | n <= 6} CONTAINING 0

ADI_error_inputs: TYPE =
  [# mode: major_mode,
     switch_position: switch_position,
     iphase: {p: iphase | (mode = mm602 => p >= 4) AND
                          ((mode = mm305 OR mode = mm603) => p <= 3)},
     wowlon: {b: bool | b => (mode = mm305 OR mode = mm603)} #]

ADI_error_scale_deflection(A: ADI_error_inputs): [real, real, real] =
  LET mode = mode(A), switch_position = switch_position(A),
      iphase = iphase(A), wowlon = wowlon(A) IN
  TABLE % Result is of form: [roll error, pitch error, yaw error]
        %                          switch_position
  %--------------------------------------------------------------------%
                     |[ high           | medium          | low          ]|
  %--------------------------------------------------------------------%
  | mode = mm301 OR
    mode = mm302 OR
    mode = mm303     | (10, 10, 10)    | (5, 5, 5)       | (1, 1, 1)      ||
  %--------------------------------------------------------------------%
  | mode = mm304 OR
    (mode = mm602 AND
     (iphase = 4 OR
      iphase = 6))   | (25, 5, 5/2)    | (25, 2, 5/2)    | (10, 1, 5/2)   ||
  %--------------------------------------------------------------------%
  | mode = mm602 AND
    iphase = 5       | (25, 5/4, 5/2)  | (25, 5/4, 5/2)  | (10, 1/2, 5/2) ||
  %--------------------------------------------------------------------%
  | (mode = mm305 OR
     mode = mm603) AND
    NOT wowlon       | (25, 5/4, 5/2)  | (25, 5/4, 5/2)  | (10, 1/2, 5/2) ||
  %--------------------------------------------------------------------%
  | wowlon           | (20, 10, 5/2)   | (5, 5, 5/2)     | (1, 1, 5/2)    ||
  %--------------------------------------------------------------------%
  ENDTABLE
```
B Quotient Lookup Table for SRT Divider
```plaintext
c(D: bvec[3], (P: bvec[7] | estimation_bound?(valD(D), valP(P)))): subrange(-2, 2) =
  LET a = -(2 - P(1) * P(0)),
      b = -(2 - P(1)),
      c = 1 + P(1),
      d = -(1 - P(1)),
      e = P(1),
      Dp: nat = bv2pattern(D),
      Ptruncp: nat = bv2pattern(P^(6, 2))
  IN TABLE
  % [Table body not recovered: quotient digits indexed by the divisor
  %  pattern Dp (rows 01010, 01001, ..., 10101) and the truncated
  %  partial-remainder pattern Ptruncp (columns); blank entries mark
  %  the region believed to be inaccessible.]
  ENDTABLE
```
C Example Decision Table

```plaintext
q: VAR bool
true(q): bool = q
false(q): bool = NOT q
*(q): bool = TRUE

x, y: VAR nat
GT(x, y): bool = x > y
GE(x, y): bool = x >= y
EQ(x, y): bool = x = y
LT(x, y): bool = x < y

operational_procedures: TYPE = {Takeoff, Climb, Climb_Int_Level, Cruise}
flight_phases: TYPE = {climb, cruise}

FlightPhase: VAR flight_phases
AC_Alt, Acc_Alt, Alt_Target, prev_Alt_Target: VAR nat
Alt_Capt_Hold: VAR bool

decision_table(FlightPhase, AC_Alt, Acc_Alt, Alt_Target,
               prev_Alt_Target, Alt_Capt_Hold): operational_procedures =
  LET X = (LAMBDA (a: pred[flight_phases], b: pred[bool],
                   c: pred[[nat, nat]], d: pred[bool],
                   e: pred[[nat, nat]]):
             a(FlightPhase) & b(AC_Alt > 400) & c(AC_Alt, Acc_Alt)
             & d(Alt_Capt_Hold) & e(Alt_Target, prev_Alt_Target))
  IN TABLE
  %                                     Operational Procedure
  %------------------------------------|------------------%
  | X(climb?,  true, LT, false, * )    | Takeoff          ||
  %------------------------------------|------------------%
  | X(climb?,  true, LT, true , GT)    | Takeoff          ||
  %------------------------------------|------------------%
  | X(climb?,  *   , GE, false, * )    | Climb            ||
  %------------------------------------|------------------%
  | X(climb?,  *   , GE, true , GT)    | Climb            ||
  %------------------------------------|------------------%
  | X(cruise?, *   , GT, true , EQ)    | Cruise           ||
  %------------------------------------|------------------%
  ENDTABLE
```
D Example SCR Table
```plaintext
event_constructor: TYPE = [condition -> event]
EC: TYPE = event_constructor

PC(A, B, C, D, E, F, G)(a, b, c, d, e, f, g)(p, q): bool =
  A(a)(p, q) & B(b)(p, q) & C(c)(p, q) & D(d)(p, q)
  & E(e)(p, q) & F(f)(p, q) & G(g)(p, q)
% Note: PC stands for "pairwise conjunction"

original(s: modes, (p, q: monitored_vars)): modes =
  LET
    x: conds7 = (ignited, running, toofast, brake, activate, deactivate, resume),
    X = (LAMBDA (a, b, c, d, e, f, g: EC): PC(a, b, c, d, e, f, g)(x)(p, q))
  IN TABLE s
  | off      | TABLE
                 | X(atT, dc, dc, dc, dc, dc, dc) | inactive ||
                 | ELSE                           | off      ||
               ENDTABLE ||
  | inactive | TABLE
                 | X(atF, dc, dc, dc, dc, dc, dc) | off      ||
                 | X( T,  T, dc,  F, atT, dc, dc) | cruise   ||
                 | ELSE                           | inactive ||
               ENDTABLE ||
  | cruise   | TABLE
                 % [transition rows not recovered; see the mode
                 %  transition table in Section 4]
                 | ELSE                           | cruise   ||
               ENDTABLE ||
  | override | TABLE
                 % [transition rows not recovered; see the mode
                 %  transition table in Section 4]
                 | ELSE                           | override ||
               ENDTABLE ||
  ENDTABLE
```
Schema matching and integration for data sharing among collaborating organizations
Ünal-Karakas, Ö.; Afsarmanesh, H.
Published in: Journal of Software
DOI: 10.4304/jsw.4.3.248-261
Abstract—Schema matching and schema integration are important components of the data sharing infrastructure in Collaborative Networks. In order to achieve more accurate matching and integration results and to enhance efficiency, mechanisms are required that carry out these processes as automatically as possible. This paper addresses the problems and challenges related to schema matching and schema integration and introduces the Semi-Automatic Schema Matching and INTegration (SASMINT) system to automate these processes. Other systems aiming at database interoperability typically focus either on schema matching or on schema integration. The SASMINT system, on the other hand, combines them and uses the results of schema matching for semi-automatic schema integration. SASMINT follows a composite approach to schema matching, meaning that it combines the results of a variety of algorithms, which makes it a generic tool applicable to different types of schemas. It also proposes a Sampler component for helping the user assign weights to these algorithms. Furthermore, SASMINT uses an XML-based derivation language to save the results of schema matching and schema integration, and to define the components of integrated schemas, in order to further support automated query processing against the integrated sources.
Index Terms—Schema matching, schema integration, collaborative networks
I. INTRODUCTION
With the advance of the Internet, the number of information sources accessible through the Web is increasing. However, these advances create new challenges. For example, a huge amount of related data is made available by distributed providers. Rather than accessing and manipulating single database systems in isolation, research is needed to make it possible to simultaneously access and manipulate different remote databases. In addition to being distributed, the voluminous data are exposed by various data providers (e.g. institutions, organizations, companies, etc.), each with its own proprietary data model, resulting in heterogeneity among databases. In order to provide transparent access to such remote data and enable the sharing of information among heterogeneous and autonomous databases, their schema heterogeneity needs to be identified and resolved. Proposing a solution to such problems is even more challenging in environments whose members need to collaborate while exhibiting a number of heterogeneities that must be addressed by the infrastructure. For example, when a number of organizations are members of a collaborative network, the infrastructure must support the sharing and exchange of their information.
More and more organizations understand the need to work together in order to better achieve their common goals. The importance of collaboration has been well understood in different domains, resulting in a rise in the number of collaborating organizations. A Collaborative Network (CN) is formed by a variety of autonomous, geographically distributed, and heterogeneous organizations that collaborate to better achieve common or compatible goals [1]. Several forms of collaborative networks are evolving in parallel. Among the promising types of CNs, one can mention Virtual Organizations or Virtual Enterprises, Virtual Communities, and Virtual Laboratories.
It is important to provide an infrastructure enabling database interoperability, especially considering that collaborative networks need to be formed quickly [2]. Heterogeneity is the most important obstacle facing collaboration. Since data sharing constitutes the main type of collaboration, the collaboration infrastructure has to consider such differences in order to provide effective mechanisms to integrate or inter-link heterogeneous databases and access them homogeneously. However, automatic resolution of schema heterogeneity still remains a major bottleneck for the provision of integrated data access/sharing among autonomous, heterogeneous, and distributed databases. In order to provide transparent access to such remote data and enable the sharing of information among databases, their schema heterogeneity needs to be identified and resolved, and the correspondences among the schemas need to be identified. This process is called schema matching. After schema matching, schemas might also need to be integrated, depending on the needs of the CN. Clearly, schema matching and schema integration constitute key processes in the information and communication technology (ICT) infrastructures supporting collaboration. Tools that enable semi-automatic matching and integration are among the most important components of such infrastructures.
Both schema matching and schema integration are challenging, especially considering the naming and structural differences among schemas. Most previous approaches reported in the literature involve a great amount of manual work in schema matching and integration. Although there is some research focusing on semi-automatic schema matching (as addressed later in the related research section), it is not interlinked with the automation of schema integration. There is still a need for clever and flexible user interfaces to display match results. Another limitation of previous approaches is that they typically do not combine different match algorithms in a flexible way. Taking these limitations into account, we propose the SASMINT (Semi-Automatic Schema Matching and INTegration) system and approach [3-5]. SASMINT proposes a solution to automate the processes related to the interlinking of heterogeneous relational databases, focused particularly on schema matching and schema integration in collaborative environments, including different forms of collaborative networks. Compared to other approaches in the literature, SASMINT combines a number of algorithms for semi-automatic schema matching and uses the result of matching for semi-automatic schema integration, as needed for providing access to distributed, heterogeneous, and autonomous databases.
The rest of this paper is organized as follows: Section II introduces different types of information management systems aiming at providing access to distributed and heterogeneous databases, and summarizes different types of information-related heterogeneity. Section III provides a background review of schema matching and schema integration. Section IV addresses the related work and open issues. Section V introduces the SASMINT system. Sections VI, VII, and VIII describe the Configuration, Schema Matching, and Schema Integration steps of SASMINT, respectively. Section IX provides some discussion of the application of SASMINT through a small example. Finally, Section X summarizes the main conclusions of the paper.
II. INTEGRATED INFORMATION MANAGEMENT AND HETEROGENEITY
Enabling interoperability among distributed and heterogeneous databases has been a significant issue in different domains, including CNs. Different architectures have been proposed in the literature concerning the management and sharing of data provided by distributed and possibly heterogeneous and autonomous databases. Many terms have been used to describe these architectures, such as multidatabase systems and federated and non-federated database systems, although there is no consensus on terminology in the database community. In order to convey our understanding of the terms, we provide a classification of such systems, which we call Integrated Information Management Systems, as shown in Fig. 1.
Following the definition of [6], we mention two types of integrated information management systems: distributed database systems and multidatabase systems. Based on the classification of [7], we divide multidatabase systems into federated information management systems and non-federated information management systems.
Federated information management systems consist of nodes, which autonomously decide which part of their data to share with others. These systems can follow either a fully federated schema or a global federated schema approach. As illustrated in Fig. 2, the fully federated schema approach constructs an integrated schema at each node by merging the local schema of that node with the schemas imported from other nodes; import schemas represent the information that other nodes make available to this node. The global federated schema approach, on the other hand, generates a global schema by integrating the export schemas (representing the shared part of the information) from the nodes into a single schema, as shown in Fig. 3.
Nodes of non-federated information management systems are not autonomous. Two approaches can be mentioned here: 1-to-1 schema mapping and common schema adaptation mapping. In the 1-to-1 schema mapping approach, mappings between the schemas of nodes are identified in a pair-wise manner. For instance, as represented in Fig. 4, mappings between the schema of Node A and the schemas of each of the other nodes are defined. In the common schema adaptation mapping approach, on the other hand, mappings between the common schema and the local schema of each node are specified, as depicted in Fig. 5.
No matter which Integrated Information Management System approach is used in a network of collaborating organizations, heterogeneity is the main challenge to deal with. Heterogeneity exists at different levels: there might be differences in the operating systems and database management systems used, as well as in the data definitions.
A number of classifications of heterogeneity have been proposed in the literature, with many overlaps and discrepancies among them. Considering the goals of SASMINT, introduced in this paper, we focus only on information-related heterogeneity. Especially considering the differences in database schemas, we can mention the following types of heterogeneity:
1. **Structural Heterogeneity**: Different structural primitives are provided by different data models. For example, object-oriented data models support inheritance while relational data models do not (data model heterogeneity). Even if the data model is the same, similar information content may be represented differently in different schemas (schematic heterogeneity).
The following types of structural conflicts are mentioned by [8]:
- **Type Conflicts**: These conflicts arise from using different modeling constructs (for example entity vs. attribute) for representing the same concept.
- **Dependency Conflicts**: These types of conflicts arise when concepts are related among themselves with different dependencies in different schemas, such as with 1-to-1 relationship in one schema, while 1-to-m relationship in another schema.
- **Key Conflicts**: This case arises when different keys are assigned to the same concept.
2. **Syntactic Heterogeneity**: This type of heterogeneity is related to the different formats used in the names of the same concepts, such as abbreviated vs. extended names.
3. **Semantic Heterogeneity**: This type is related to differences in meaning, dependent on the vocabulary and terminology used to express the information and the contexts in which it is interpreted. There are two types of semantic relationships among the names used:
   - **Homonyms**: The same name is used for two different concepts.
   - **Synonyms**: The same concept is described by different names.
As is clear from the existence of a large number of classifications, heterogeneity has been one of the fundamental problems in information systems. Among the different types of heterogeneity mentioned in the literature, the SASMINT system considers those listed above. Combining syntactic and semantic heterogeneity under the name linguistic heterogeneity, the research explained in this paper focuses on structural and linguistic schema conflicts. Structural conflicts in particular are complex cases and cause difficulties for schema matching and integration algorithms. Since it is difficult to handle these cases automatically, user input is required.
III. SCHEMA MATCHING AND SCHEMA INTEGRATION
Integrated information management systems, introduced in the previous section, need to tackle different types of heterogeneity in order to identify the correspondences among schemas, which is the aim of schema matching and integration. As a result, schema matching and schema integration have become two main processes in such systems.

Schema matching can be defined as finding correspondences between the elements of two schemas. It plays an important role in several application domains, such as schema integration, data warehouses, query processing, the Semantic Web, and e-business [9][10]. The simplest type of matching is 1-1 matching: for two schemas A and B, it identifies for each element of A the most similar element of schema B. In addition to 1-1 matches, complex matches also frequently occur among schemas. Complex matching finds mappings between an element or a group of elements of schema A and a group of elements of schema B, where groups of elements are combined with a formula.

Schema matching takes a variety of inputs and produces outputs depending on the matching approach it follows. Typical inputs are the schema information, a linguistic dictionary, a number of linguistic and structural similarity measures, and user input. The output of matching is a similarity score for each mapping identified.
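To make the "most similar element" selection of 1-1 matching concrete, here is a minimal Python sketch (the function name and data layout are ours, not SASMINT's), mirroring the "select max above threshold" strategy used in the example of Section IX:

```python
# Sketch: 1-1 match selection from pairwise similarity scores.
# `scores` maps (element_of_A, element_of_B) -> similarity in [0, 1].

def select_matches(scores, threshold=0.5):
    best = {}  # element_of_A -> (element_of_B, score)
    for (a, b), s in scores.items():
        if s >= threshold and (a not in best or s > best[a][1]):
            best[a] = (b, s)
    return best

scores = {
    ("course", "academic_course"): 0.82,
    ("course", "department"): 0.21,
    ("dept_name", "dept_name"): 1.00,
}
print(select_matches(scores))
# {'course': ('academic_course', 0.82), 'dept_name': ('dept_name', 1.0)}
```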
The problem of schema integration in the context of distributed information systems is a relatively old one. Different approaches for enabling access to distributed and heterogeneous data require different levels of integration. In database research, schema integration typically refers to both view integration and database integration [8]. View integration aims at producing an integrated schema of users' views and is performed during the database design process, whereas database integration derives a new schema from existing specifications. As identified in [11], view integration methodologies work with views based on the same data model, but database integration technologies work with schemas that are usually defined using heterogeneous data models. Considering the goals of the research explained in this paper, database integration is the one focused on, and whenever schema integration is mentioned, database integration is meant.

Three steps are involved in schema integration: 1) Pre-integration, 2) Matching, and 3) Integration. The Pre-integration step consists of a number of preparations before the integration, such as identifying the schemas to be integrated, the preferences to be considered in the integration process, and the amount of user input, as mentioned in [8]. The Matching step, also called the Investigation step in [11], identifies the correspondences among schemas by resolving the conflicts. The Integration step is responsible for integrating the schemas based on the correspondences identified in the matching step.
IV. RELATED WORK AND OPEN ISSUES
A variety of approaches for providing integrated data access/sharing among distributed, heterogeneous, and autonomous databases have been proposed in the literature. For example, PEER is a generic object-oriented federated information management system enabling information sharing among autonomous and heterogeneous nodes [12]. There is an integrated schema for each node, generated by integrating the local schema of the node and the schemas representing data that other nodes make available to this node. However, no automation is provided for generating this schema. In another project called SIMS (Services and Information Management for decision Systems) [13], in order to provide access to heterogeneous and distributed databases, a common domain model is first created using the Loom knowledge representation language. When an information source joins the SIMS system, its contents are first modeled, and then the concepts in the information source model are related to the corresponding concepts of the domain model. Again, no automation is provided for this process. Similar to the PEER and SIMS systems, other efforts in this area typically involve a large amount of manual work and usually ignore the step of semi-automatic schema matching.

While interoperability has been an important topic in database research, schema matching has usually been considered a separate problem. A great deal of effort has been put into increasing the degree of automation of schema matching. One such schema matching approach is proposed in the SEMINT (SEMantic INTegrator) system [14], which utilizes both schema and instance information; however, no Graphical User Interface (GUI) is provided. The Cupid system [15] exploits a combination of a name matcher and a structure matcher, but the name matcher uses only one string similarity metric and no GUI is provided. Similarity Flooding [16] converts diverse models into directed labeled graphs and then identifies initial maps between the elements of two graphs using only a simple string matcher; these initial maps are then used by a structure matcher. However, Similarity Flooding (SF) neither exploits edge and node semantics nor provides a GUI. Clio [17] generates alternative mappings as SQL view definitions based on value correspondences that are defined by the user; no linguistic matching techniques are used and much manual work is required. S-Match [18] exploits a number of element- and structure-level match techniques. The result of schema matching is represented using the terms equivalence, more general, less general, mismatch, and overlapping, and no GUI is provided. COMA++ [19], a successor of COMA [20], provides a library of different types of matchers and a sophisticated GUI, making it more comprehensive than the other systems. However, it is sometimes difficult for users to decide on the best combination of matchers.
As for schema integration, a number of systems or approaches have been introduced in the database literature. MOMIS (Mediator envirOnment for Multiple Information Sources) [21] has a component responsible for schema integration; however, it requires a database specialist to assist the integration process at each phase. For example, all schema elements must be annotated manually by the database designer with the appropriate meanings in the WordNet lexical database. COMA++, introduced above among the schema matching systems, provides functionality for schema merging, but since schema matching is the main focus of COMA++, schema merging is primitive: it is not possible to see how the elements of the merged schema are derived from the local schemas, and no mappings are defined between the merged schema and the local schemas. PORSCHE (Performance ORiented SCHEma mediation) [22] aims at creating a mediated schema from a set of large XML Schemas and at identifying mappings from the source schemas to the mediated schema. It accepts a set of schema trees. PORSCHE has a linguistic matcher component, which uses tokenization, abbreviations, and synonyms; the abbreviation and synonym tables are generated by users. There is no GUI provided by PORSCHE, and it is not clear how the results of integration are stored.
To summarize, although schema matching and schema integration have been the focus of a large number of efforts in the literature, several issues are not yet sufficiently addressed and thus require further investigation:
- **Using a Combination of Match Algorithms:** Efforts in schema matching research typically use a limited number of algorithms. However, in order to achieve high match accuracy, it is necessary to combine different types of algorithms, considering syntactic, semantic, as well as structural differences among schemas. Furthermore, combining different algorithms requires identifying an appropriate weight for each of them, which is itself an essential part of such a system.
- **Graphical User Interface:** Developing algorithms for automatic schema matching is not sufficient on its own. User interaction is another important topic to be considered when developing a schema matching and schema integration system. Especially considering that it is not possible to identify all matches automatically, a simple but effective user interface is required both for setting parameters, such as the threshold and the weights of the metrics, and for correcting and validating the match and integration results. Unfortunately, most prototypes developed so far offer no or only a rudimentary user interface, except for the COMA [20], COMA++ [19], and Clio [17] systems. However, COMA, COMA++, and Clio have limitations of their own, as addressed above.
- **Use of Match Results for Schema Integration and Providing a Comprehensive System:** Efforts in the literature are typically about algorithms and do not consider developing complete systems for enabling interoperability. These algorithms are useful as the basis for schema matching and integration systems, but they require a large amount of manual input. Furthermore, none of these efforts considers how to use the results of schema matching for semi-automatic schema integration. Providing a system with only schema matching capabilities, without considering schema integration, is not enough and limits the applicability of the system to specific cases.
V. THE SASMINT SYSTEM
Considering the limitations of the previous work addressed in Section IV, a system called Semi-Automatic Schema Matching and INTegration (SASMINT) is proposed, capable of automatically resolving naming, structural, and semantic conflicts and of semi-automatically integrating relational database schemas [3]. Since user input is required after schema matching and schema integration, SASMINT is intended for database administrators or users who have sufficient knowledge of the domain as well as of the database schemas.
The main components of SASMINT are shown in Fig. 6. The Sampler Component helps the user identify an appropriate weight for each metric and algorithm used in schema matching. The Graph Representation Component of SASMINT is responsible for representing schemas in graph format; it uses JGraph [23] for graph visualization and JGraphT [24] for its Java graph libraries. Users interact with the system by means of the GUI Component. The Schema Matching Component matches the input schemas, called the recipient and donor schemas, using a combination of Linguistic and Structural Matching techniques.
SASMINT has three main processing steps: Configuration, Schema Matching, and Schema Integration. Details of these steps are provided in the next three sections. The main flow of information in the system is as follows. First, the user assigns a weight to each metric or algorithm, either manually or with the help of the Sampler component; if nothing is set by the user, a default value of 0.5 is used. Second, the user specifies a threshold value and the selection strategy for the results of schema matching, as explained in Section VIII.
VI. CONFIGURATION STEP OF SASMINT
As for the second responsibility of the configuration step, which is the assignment of weights, there are currently three ways supported by SASMINT:
1) Users can manually assign the weights.
2) Users can have the weights assigned automatically, with the help of the Sampler component.
3) If neither (1) nor (2) is opted for, SASMINT assumes an equal weight distribution. Needless to say, this may lead to imprecise mapping results.
Accurate matching is important in order to reduce the amount of user input, and we consider an appropriate distribution of weights a prerequisite for accurate matching. However, assigning these weights manually is not an easy task, and assistance to the user is required. For this reason, SASMINT provides a component called Sampler, whose function is to guide the user in assigning weights to the metrics used in Linguistic Matching. The operation of the Sampler component is illustrated in Fig. 7.
The Sampler component can work with up to five known sample pairs. Through the GUI provided by the Sampler component, shown in Fig. 8, the user has the freedom to enter a) syntactically similar pairs, in case he/she would like the system to compute the weights of the syntactic matching metrics, or b) semantically similar pairs, in case the weights of the semantic matching metrics are to be computed.
The user is expected to input these pairs into the Sampler component from his or her schema domain. For instance, the user might want to see how the syntactic similarity metrics perform for the pair P: ["course_credit", "credit_of_course"], or how the semantic similarity metrics perform for the pair P: ["person", "individual"].
For a given set of pairs S: {P1, P2, ..., PN}, the Sampler runs the syntactic or semantic metrics for each pair P in S and determines their calculated similarities. The calculated similarity for a pair P is a value between 0 and 1. After computing the similarity values, the Sampler measures the accuracy of each metric using the F-measure. The F-measure is a combination of precision and recall from the information retrieval domain [27] and is used in different areas for calculating accuracy. Using the following formula, the Sampler calculates the weight for each metric, where \( \sum F \) represents the sum of the F-measure values of all metrics used, and \( F_m \) represents the F-measure value calculated for metric m:
\[
w_m = \frac{F_m}{\sum F}
\]
As the last step of the weight computation and assignment process, the calculated weights of the metrics are presented to the user. The user has the option of accepting and directly using the proposed weights, or of modifying them and feeding them back to the system. An example of the usage of the Sampler component is given in Fig. 8.
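A minimal Python sketch of this weighting scheme follows, assuming each metric is scored on the user-supplied sample pairs and that a pair is predicted as a match when its similarity reaches a 0.5 cut-off; both the metric functions and the cut-off are illustrative assumptions, not SASMINT's exact procedure:

```python
# Sketch: derive per-metric weights from F-measures on labeled sample pairs.
# `metrics` maps a metric name to a similarity function; `samples` is a list
# of ((string_a, string_b), is_match) pairs supplied by the user.

def f_measure(metric, samples, cutoff=0.5):
    tp = fp = fn = 0
    for (a, b), is_match in samples:
        predicted = metric(a, b) >= cutoff  # illustrative decision rule
        if predicted and is_match:
            tp += 1
        elif predicted and not is_match:
            fp += 1
        elif not predicted and is_match:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def sampler_weights(metrics, samples):
    # w_m = F_m / sum(F), as in the formula above.
    f = {name: f_measure(m, samples) for name, m in metrics.items()}
    total = sum(f.values())
    return {name: (fm / total if total else 1 / len(f))
            for name, fm in f.items()}
```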
VII. SCHEMA MATCHING STEP OF SASMINT
Schema matching aims at finding all correspondences between the elements of two schemas. SASMINT focuses on schema-level matching, utilizing element- and structure-level information. Furthermore, SASMINT exploits a combination of automatic schema matching techniques for resolving syntactic, semantic, and structural heterogeneities. Using a single criterion (e.g., name matching) is unlikely to achieve high match accuracy for a large variety of schemas. Consequently, it is necessary to combine and utilize multiple techniques at the same time. For this purpose, SASMINT combines the results of several independently executed linguistic and structure match algorithms.
Schema matching in SASMINT consists of the preparation, comparison, and result generation and validation steps, as detailed below. Fig. 9 shows an overall view of the steps of schema matching.
A. Preparation Step of Schema Matching
The Preparation step deals with the translation of source schemas, defined in the Data Definition Language (DDL) of their respective Database Management Systems (DBMSs), into a common representation format. The Directed Acyclic Graph (DAG) format with labeled edges has been chosen for this purpose, since among the alternatives it provides a balanced format that supports representing a relational schema, an object-oriented schema, etc., as a graph.

The Preparation step of SASMINT, shown in Fig. 10, works as follows. The user can load the recipient schema from a database or from previously persisted XML files, in which case the schemas are already in graph format, and the donor schema from a database. When the user chooses to load the schemas from a database, the system connects to the database using the related Java Database Connectivity (JDBC) driver, gets the metadata information (e.g. tables and columns in relational databases), represents the metadata in graph format by means of JGraphT, and finally visualizes and displays the graphs corresponding to the schemas using JGraph.
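The paper does not show the graph construction itself; the following Python sketch illustrates the idea under simplifying assumptions, with a plain dictionary standing in for the JDBC metadata and labeled edge tuples standing in for the JGraphT graph:

```python
# Sketch: represent a relational schema as a DAG with labeled edges.
# `metadata` stands in for what a JDBC connection would report.

metadata = {
    "person": ["person_id", "fname", "lname"],
    "department": ["department_id", "dept_name"],
}

def schema_to_dag(schema_name, metadata):
    # Each edge is (source_node, label, target_node).
    edges = []
    for table, columns in metadata.items():
        edges.append((schema_name, "hasTable", table))
        for column in columns:
            edges.append((table, "hasColumn", column))
    return edges

for edge in schema_to_dag("Schema-1", metadata):
    print(edge)
```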
B. Comparison Step of Schema Matching
A key step of SASMINT in the schema matching process is the Comparison step, which identifies the likely matches between two schemas by resolving syntactic, semantic, and structural heterogeneities. SASMINT uses a number of algorithms from Natural Language Processing (NLP) and Graph Theory. The Comparison step consists of two types of matching, Linguistic and Structure, detailed below.
Most of the time, element names are represented differently in different schemas, and thus they need to be brought into a common representation before the matching process. This sub-step of SASMINT, called pre-processing, involves the operations shown in Fig. 11. In the tokenization and word separation operation, strings containing multiple words are split into lists of words; for instance, "First Name" is split into "First" and "Name". Stop words, such as "of" and "the", as well as some special characters, such as "/" and "-", are removed from names. Furthermore, abbreviations are expanded, and lemmatization is used to bring different forms of the same word into a common form.
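As a rough Python sketch of this pre-processing pipeline (the stop-word and abbreviation tables are illustrative, and a real implementation would use a proper lemmatizer rather than the simple normalization here):

```python
import re

# Sketch of pre-processing: tokenization/word separation, stop-word and
# special-character removal, and abbreviation expansion.

STOP_WORDS = {"of", "the"}
ABBREVIATIONS = {"dept": "department", "no": "number"}  # illustrative table

def preprocess(name):
    # Split on separators and camelCase: "First Name" -> ["First", "Name"].
    tokens = re.findall(r"[A-Za-z]+",
                        re.sub(r"([a-z])([A-Z])", r"\1 \2", name))
    out = []
    for tok in tokens:
        tok = tok.lower()
        if tok in STOP_WORDS:
            continue
        out.append(ABBREVIATIONS.get(tok, tok))
    return out

print(preprocess("dept_name"))   # ['department', 'name']
print(preprocess("First Name"))  # ['first', 'name']
```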
1) Linguistic Matching
Linguistic matching considers only the names of schema elements and produces a value between 0 and 1 for each pair of element names from the two schemas. A variety of algorithms and metrics from the NLP research field are applied to identify syntactic and semantic similarities. In order to compare element names from the two schemas, node names from the graph representations of these schemas are put into two separate lists. After pre-processing the names, syntactic and semantic match algorithms are applied to each pair, and the results are then combined into the final value of Linguistic Matching.
Syntax Similarity
There is a large number of string distance and similarity algorithms (also called metrics here) in the Natural Language Processing community.
Unlike other approaches to schema matching, which use only one metric for syntactic similarity, SASMINT uses a combination of several main syntactic similarity metrics for comparing two character strings syntactically. These metrics can be classified as string-based and token-based. String-based metrics consider strings as adjacent sequences and do not divide multi-word strings into sets of single strings, whereas token-based metrics view strings as unordered sets of tokens. As string-based metrics, SASMINT uses the Levenshtein Distance (Edit Distance) [28], Monge-Elkan Distance [29], Jaro [30], and Longest Common Substring (LCS) metrics. As token-based metrics, it utilizes the TF*IDF (Term Frequency*Inverse Document Frequency) [31] and Jaccard Similarity [32] metrics. Since each metric is suitable for a different type of string, SASMINT is applicable to more types of strings than previous approaches.
SASMINT combines these metrics to obtain more accurate results. The metrics are combined by means of a weighted sum, using the following formula:
$$\text{sim}_{\text{Syntactic}}(a,b) = w_{lv} \cdot \text{sim}_{lv}(a,b) + w_{me} \cdot \text{sim}_{me}(a,b) + w_{jr} \cdot \text{sim}_{jr}(a,b) + w_{lcs} \cdot \text{sim}_{lcs}(a,b) + w_{tf} \cdot \text{sim}_{tf}(a,b) + w_{jc} \cdot \text{sim}_{jc}(a,b)$$
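The following Python sketch illustrates the weighted combination using two of the six metrics, Levenshtein similarity and Jaccard over character bigrams, as stand-ins; in SASMINT the weights come from the Sampler, whereas here they are fixed by hand:

```python
# Sketch: weighted sum of syntactic similarity metrics.

def levenshtein_sim(a, b):
    # Similarity from edit distance, normalized to [0, 1].
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return 1 - d[n] / max(m, n, 1)

def jaccard_bigrams(a, b):
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(A & B) / len(A | B) if A | B else 1.0

def syntactic_sim(a, b, w_lv=0.5, w_jc=0.5):
    # Weighted sum, as in the formula above (weights come from the Sampler).
    return w_lv * levenshtein_sim(a, b) + w_jc * jaccard_bigrams(a, b)

print(round(syntactic_sim("course_credit", "credit_of_course"), 2))
```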
Another contribution of SASMINT is its recursive weighted metric, aimed at element names containing more than one token. Depending on whether the names contain one or more tokens, the user can choose between the weighted and the recursive weighted metric. Given two strings $a$ and $b$ that are tokenized into $a = s_1, s_2, ..., s_l$ and $b = t_1, t_2, ..., t_m$, the recursive weighted metric is calculated as follows:
$$\text{sim}(a,b) = \frac{1}{2l} \sum_{i=1}^{l} \max_{1 \le j \le m} \text{sim}(s_i, t_j) + \frac{1}{2m} \sum_{j=1}^{m} \max_{1 \le i \le l} \text{sim}(s_i, t_j)$$
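A direct Python transcription of this formula, parameterized by a base token similarity (for example the levenshtein_sim of the previous sketch); the function name is ours:

```python
# Sketch: recursive weighted metric over tokenized names. Each token of one
# name is scored against the best-matching token of the other name, and the
# two directions are averaged as in the formula above.

def recursive_weighted_sim(a_tokens, b_tokens, base_sim):
    if not a_tokens or not b_tokens:
        return 0.0
    forward = sum(max(base_sim(s, t) for t in b_tokens) for s in a_tokens)
    backward = sum(max(base_sim(s, t) for s in a_tokens) for t in b_tokens)
    return forward / (2 * len(a_tokens)) + backward / (2 * len(b_tokens))

# e.g. recursive_weighted_sim(["course", "credit"],
#                             ["credit", "of", "course"], levenshtein_sim)
# is close to 1 because every token has a near-identical counterpart.
```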
Figure 11. Pre-processing operations
Figure 10. Preparation Step
Semantic Similarity
Identifying the semantic similarity between two words or concepts has been the subject of many applications in NLP, information retrieval, and other areas. Semantic similarity measures use a variety of knowledge resources, such as WordNet [25]. WordNet is partitioned into nouns, verbs, adjectives, and adverbs, which are organized into synonym sets, each representing one underlying lexical concept. Synonym sets, also called synsets, are interlinked by different relations, such as hyponymy, hypernymy, antonymy, meronymy, and holonymy.
The semantic similarity algorithms from the NLP domain that SASMINT uses can be classified as path-based and gloss-based measures. Path-based measures use the path between the concepts in a taxonomy of concepts. SASMINT exploits the measure of Wu and Palmer [33], which is based on calculating the shortest path between the concepts in the IS-A hierarchy of WordNet.
As the base for its gloss-based measure, SASMINT uses the measure of Lesk [34]. SASMINT benefits from the gloss information provided in WordNet for calculating the gloss-based similarity.
The result of semantic similarity in SASMINT is the weighted sum of the two semantic similarity measures addressed above, computed with the following formula:
\[ \text{sim}_\text{Semantic}(a,b) = w_{\text{wup}} \cdot \text{sim}_{\text{wup}}(a,b) + w_{\text{gloss}} \cdot \text{sim}_{\text{gloss}}(a,b) \]
where ‘wup’ stands for Wu and Palmer’s measure and ‘gloss’ for the gloss-based similarity.
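Using NLTK's WordNet interface, the two measures can be sketched as follows; taking the first synset of each word and the equal weights are simplifying assumptions, and the gloss overlap here is only a crude stand-in for the Lesk-based measure:

```python
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def wup_sim(w1, w2):
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    if not s1 or not s2:
        return 0.0
    return s1[0].wup_similarity(s2[0]) or 0.0  # first synset: an assumption

def gloss_overlap_sim(w1, w2):
    # Lesk-style overlap of definition (gloss) words, crudely normalized.
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    if not s1 or not s2:
        return 0.0
    g1 = set(s1[0].definition().split())
    g2 = set(s2[0].definition().split())
    return len(g1 & g2) / min(len(g1), len(g2))

def semantic_sim(w1, w2, w_wup=0.5, w_gloss=0.5):
    return w_wup * wup_sim(w1, w2) + w_gloss * gloss_overlap_sim(w1, w2)

print(semantic_sim("person", "individual"))
```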
2) Structure Matching
In addition to linguistic differences, structural differences frequently occur among database schema definitions. Structural differences are more difficult to resolve than linguistic differences, typically requiring user input. The second activity of comparison in SASMINT is structure matching, which uses the result of linguistic matching to identify the structural similarity of two schemas represented as graphs. For the purpose of structure matching in SASMINT, a variety of graph similarity and matching algorithms from Graph Theory and other areas, such as Web searching and schema matching, were considered.
The first approach that structure matching in SASMINT uses is the one proposed by [35]. It is an iterative algorithm from the graph similarity research field. This algorithm is based on the idea that nodes of two graphs are similar if the neighbors of these nodes are also similar.
As the second algorithm for structure matching, SASMINT uses the structure similarity algorithm of Similarity Flooding [16]. Similarity Flooding calculates structural similarity with a fixpoint computation: it uses an iterative algorithm in which the similarity of two elements is propagated to their adjacent elements at each iteration.
Similar to the method followed in linguistic matching, structure matching uses the weighted sum of these two structural similarity algorithms, as shown in the formula below:
\[ \text{sim}_\text{Structure}(a,b) = w_{\text{blonde}} \cdot \text{sim}_{\text{blonde}}(a,b) + w_{\text{sf}} \cdot \text{sim}_{\text{sf}}(a,b) \]
where ‘blonde’ stands for the algorithm of [35] and ‘sf’ for the algorithm of Similarity Flooding.
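A much simplified Python sketch of the underlying intuition follows; it is not the exact update rule of [35] or of Similarity Flooding. Node-pair similarities start from the linguistic scores and are repeatedly blended with the average similarity of their neighbor pairs:

```python
# Sketch: iterative structural similarity on two graphs given as
# adjacency lists; `init` holds the linguistic scores per node pair.

def structural_sim(g1, g2, init, rounds=10, alpha=0.5):
    sim = dict(init)  # (node1, node2) -> score
    for _ in range(rounds):
        new = {}
        for (u, v), s0 in init.items():
            nbrs = [(x, y) for x in g1.get(u, []) for y in g2.get(v, [])]
            # Pairs without neighbors keep their initial score.
            boost = (sum(sim.get(p, 0.0) for p in nbrs) / len(nbrs)
                     if nbrs else s0)
            new[(u, v)] = (1 - alpha) * s0 + alpha * boost
        sim = new
    return sim

g1 = {"person": ["fname", "lname"]}
g2 = {"employee": ["name"]}
init = {("person", "employee"): 0.6, ("fname", "name"): 0.7,
        ("lname", "name"): 0.6}
print(structural_sim(g1, g2, init))
```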
C. Result Generation and Validation Step of Schema Matching
The results of the comparison step are displayed to the user by means of a GUI, so that the user can modify and save them. Previous systems typically provide either a primitive GUI or none at all. However, a clever and flexible GUI is an indispensable part of a matching system, both because it is not possible to determine all possible matches automatically and because not all identified matches may be correct, especially considering the large amount of semantics involved in schema descriptions. An example case for which user input is essential occurs with complex matches, such as 1-to-n (one column in one schema matches one or more columns in the other schema). In this case, it is not possible to automatically decide whether a column in the first schema is a combination of n columns in the second schema, and if so, it may not be known how to combine these n columns (e.g., using concatenation, sum, etc.).
To be more specific, suppose that the schema matching system has identified a match between the "rNum" element in one schema and the "roomNo" and "telNum" elements in the second schema. In this case, the user is expected to delete the match between "rNum" and "telNum", as one refers to the room number and the other to the telephone number. As another example, suppose that the system has identified a match between the "name" element in one schema and the "fname" and "lname" elements in the second schema. In this case, the user is expected to specify that "name" is the concatenation of "fname" and "lname".
Considering the requirements addressed above, a GUI has been implemented as part of SASMINT; a screenshot is shown in Fig. 12. Using this GUI, the user can load the recipient schema from a database or from a file (in an XML-based graph format, called the SASMINT Derivation Markup Language (SDML), introduced below) and the donor schema from a database, as shown in the two windows titled "Recipient Schema" and "Donor Schema" in Fig. 12. After loading the schemas, the user can run the match operation. The results of matching are displayed in graph format, as shown in the window titled "Schema Match". The window titled "Metric Results" shows the similarity results that each metric has identified for all matching pairs. The user can delete incorrect matches, introduce new ones, and specify which kind of operation to use for combining n columns in 1-to-n or n-to-1 matches, using the window titled "Integration Customizations". The user can then either store the schema matching results or continue with the schema integration. If the user chooses to save the results, the XML-based SDML format is used to persist them in a file. SDML is designed as a part of the SASMINT system, and further explanations about it will be given in a forthcoming paper. This format is similar to other existing XML-based graph formats, such as the Graph eXchange Language (GXL) [36] and GraphML [37], but it is extended to store the results of both matching and integration. SDML uses a number of derivation elements, whose base constructs are explained in the following section. These derivation elements consist of tableRenameDerivation, tableUnionDerivation, tableSubtractDerivation, columnRenameDerivation, columnUnionDerivation, and columnStringAdditionDerivation, for storing the derivation results of schema integration. The columnStringAdditionDerivation element is also used at the end of schema matching to define the special mapping rule stating that a column in one schema is represented by the concatenation of some columns in the second schema. The content of an example XML file produced as a result of schema matching is shown in Fig. 13: a match is identified between the "fname" column of the "person" table in the first schema and the "name" column of the "employee" table in the second schema.
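Since the exact SDML syntax is deferred to a forthcoming paper, the following Python sketch only suggests the flavor of such a file for the Fig. 13 match, using one of the element names listed above; the choice of element and all attribute names are our assumptions:

```python
# Sketch: emit an SDML-flavored fragment for the fname/name match of Fig. 13.
# The element name follows the list above; attribute names are hypothetical.
import xml.etree.ElementTree as ET

match = ET.Element("columnRenameDerivation")
ET.SubElement(match, "source", schema="S1", table="person", column="fname")
ET.SubElement(match, "target", schema="S2", table="employee", column="name")

print(ET.tostring(match, encoding="unicode"))
```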
VIII. SCHEMA INTEGRATION STEP OF SASMINT
Schema integration is a key process in many database applications. It is required in different types of integrated information management system approaches, introduced in Section II.
SASMINT facilitates schema integration by providing semi-automatic means. After the schema matching step, users can continue with schema integration to integrate the two schemas that have been matched. SASMINT automatically generates an integrated schema, which requires final user validation, as it is not possible to resolve all types of structural conflicts. Among the different possible results of schema matching, the following cases are handled automatically by the schema integration component of SASMINT:
- **ColumnX (1 → 1) ColumnY**: ColumnX in the first schema matches ColumnY in the second schema.
- **ColumnX (1 → n) Column**: ColumnX in the first schema matches n columns of the second schema.
- **ColumnX (1 → 1) TableA**: ColumnX in the first schema matches TableA in the second schema.
- **Column (m → 1) ColumnY**: m columns of the first schema match ColumnY in the second schema.
- **Column (m → 1) TableB**: m columns of the first schema match TableB in the second schema.
- **TableA (1 → 1) TableB**: TableA in the first schema matches TableB in the second schema.
- **TableA (1 → n) Table**: TableA in the first schema matches n tables of the second schema.
- **TableA (1 → 1) ColumnY**: TableA in the first schema matches ColumnY in the second schema.
- **TableA (1 → n) Column**: TableA in the first schema matches n columns of the second schema.
- **Table (m → 1) TableB**: m tables of the first schema match TableB in the second schema.
- **Table (m → n) Table**: m tables of the first schema match n tables of the second schema.
Considering the different conflicts to be resolved, a number of rules for integrating relational schemas have been defined for SASMINT. In order to detect integration points automatically, these rules operate on the types of match results listed above. The rules identify which tables and columns need to be inserted into the resulting schema and how they need to be combined in order to generate an integrated schema that can represent all the elements of the participating schemas. If SASMINT is extended to work with types of schemas other than relational ones, similar rules can be defined for those types as well. The details of these rules are the subject of a forthcoming paper, and thus we do not give further information about them here.
The Schema Integration component of SASMINT uses a derivation language for representing integrated schemas. A formal representation of the derivation language constructs, a variation of PEER derivation language [38], is given in [3]. There are two types of derivation for relational schemas: Table and Column Derivation. Table derivation consists of derivations of type “Table Rename”, “Table Union”, “Table Subtract”, and “Table Restrict”. On the other hand, column derivation comprises the derivations of type “Column Rename”, “Column Union”, and “Column Extraction”. Table Rename, Table Union, Column Rename, Column Union, and Column Extraction are the ones typically used by SASMINT. Brief explanations about all derivation types are provided below:
- **Table Rename** derivation is used when a new table is generated in the integrated schema by renaming a table in one of the input schemas (recipient and donor schemas).
Example: FacultyMember@IntSchema = Faculty@S1.
- **Table Union** derivation is used to state that a newly generated table in the integrated schema is the union of two or more tables from the input schemas.
Example: Department@IntSchema = union (Department@S1, Department@S2).
- **Table Subtract** derivation is used to specify that a table in the integrated schema is constructed by subtracting a table from another table in one of the input schemas.
Example: EngineeringDepartments@IntSchema = subtract(Departments@S1, NonEngineeringDepartments@S1).
- **Table Restrict** derivation is used to specify that a table in the integrated schema is generated by applying a restriction to a table in one of the input schemas.
Example: SuccessfulStudents@IntSchema = restrict (Students@S1, [gpa > 2.0]).
- **Column Rename** derivation is used when a new column is generated in the integrated schema by renaming a column in one of the input schemas.
Example: startTime@IntSchema = startTime@S2.
- **Column Union** derivation is used to specify that a newly generated column of the integrated schema is the union of two or more columns of the input schemas.
Example: dept_name@Department@IntSchema = union(dept_name@Department@S1, dept_name@Department@S2).
- **Column Extraction** derivation is used to specify that a column in one of the input schemas equals two or more columns of the other input schema, combined by an operator, such as an arithmetic or string operator. Currently, columnStringAdditionDerivation is supported, which specifies that a column in one schema equals the concatenation of two or more columns in the other schema.
Example: name@Student@IntSchema = fname@Student@S1 + lname@Student@S1.
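To illustrate how these derivation constructs can be modeled programmatically, the examples above can be encoded as tagged tuples; this is a hypothetical encoding for illustration, not SASMINT's internal representation:

```python
# Sketch: derivation constructs as tagged tuples mirroring the examples
# above (hypothetical encoding for illustration only).

derivations = [
    ("tableRename", "FacultyMember@IntSchema", "Faculty@S1"),
    ("tableUnion", "Department@IntSchema",
     ["Department@S1", "Department@S2"]),
    ("tableRestrict", "SuccessfulStudents@IntSchema",
     ("Students@S1", "gpa > 2.0")),
    ("columnStringAddition", "name@Student@IntSchema",
     ["fname@Student@S1", "lname@Student@S1"]),
]

for kind, target, source in derivations:
    print(f"{target} <- {kind}({source})")
```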
Using the automatic schema integration rules, an integrated schema is proposed to the user. The user can modify and save the result, which is stored in XML format using SDML. SDML uses the derivation constructs defined above as its base.
IX. EXAMPLE CASE OF SASMINT
SASMINT is a generic system and can be applied to different types of Integrated Information Management Systems, as introduced in Section II, for the purpose of semi-automatic schema matching and/or schema integration. In this section, we discuss the application of SASMINT through a small example.
Fig. 14 shows parts of two university schemas, which also include some foreign keys (FK). In order to match and then integrate these two schemas, we ran SASMINT with a threshold value of 0.5 and the selection strategy set to "select max above threshold".
SASMINT correctly identified the following similar pairs between Schema-1 and Schema-2: (course, academic_course), (course_id, academic_course_id), (course_provider, academic_course_provider), (department, department), (department_id, department_id), and (dept_name, dept_name). The exceptions, shown in Fig. 15, are element pairs that were not identified correctly, but that a human database expert, for example, could discover through investigation of the two schemas. The element pairs shown in Fig. 15 fall into two categories: those that SASMINT failed to identify (the false negatives), and those that SASMINT found to be similar while they actually were not (the false positives). These cases could not be identified correctly by SASMINT, mostly because the system currently lacks some semantic relationships. For example, the semantic similarity of "university" and "academic institution" could not be identified through WordNet in the current processing done by SASMINT. Furthermore, although they have different meanings, "university" and "university student" were identified as similar, due to the partial overlap in their names as well as in their structure, while they are not correct matches.
Especially considering such semantic issues, this example indicates that a fully automatic schema matching system is not the right approach for the integration of heterogeneous schemas; rather, the semi-automated approach of SASMINT is suitable, accompanied by a sophisticated GUI that supports users in modifying the match results. After saving the modified match results of the two schemas, the user can start the schema integration process of SASMINT.
---
**False negatives**
- (university, academic_institution)
- (university_id@university, academic_institution_id@academic_institution)
- (university_name@university, academic_institution_name@academic_institution)

**False positives**
- (university, university_student)
- (university_name@university, name@university_student)
---
Without showing the details of the derivation (for simplicity), Fig. 16 presents the integrated schema generated by SASMINT for this example case. This integrated schema is complete and almost minimal. In other words, it covers all elements of the two schemas while containing no redundancy, except for the "university_ref" column of the "department" table, which is not incorrect but is not required in a minimal integrated schema.
We have carried out many similar experiments using different schemas in order to evaluate the performance of both the schema matching and the schema integration processes of SASMINT; the results are the subject of a forthcoming paper. The results of all these experiments have shown that SASMINT can achieve high accuracy (about 75 to 85%) with its schema matching process, and can generate complete and about 99% minimal schemas with its schema integration process. At present, the most important difficulty lies in identifying some of the semantics involved in each schema. Currently, as a generic tool, SASMINT uses the domain-independent WordNet for identifying semantic similarities. However, WordNet does not contain domain-specific semantic relationships. As future work, domain-specific ontologies will be integrated into the SASMINT system in addition to WordNet, so that more types of semantic relationships can be identified and the automated processing of SASMINT can generate more accurate results.
Current experiments have also shown that SASMINT's GUI is very useful and makes the interaction of domain experts with the system straightforward and effective. Furthermore, the SDML format used for saving the results of both matching and integration is valuable: it enables the results to be interpreted and used by other systems for further processing, for example for federated query processing. Moreover, its human-readable format makes it very easy for the user to understand and modify the results.
X. CONCLUSION
With the increasing number of Collaborative Networks, the need for an infrastructure supporting data sharing in such networks has become clear. Schema matching and integration are key components of this infrastructure. Since carrying out these tasks manually is error-prone and time-consuming, automatic mechanisms are required. This paper introduced the SASMINT system, which enables semi-automatic schema matching and integration by combining a number of syntactic, semantic, and structural similarity algorithms from the NLP and Graph Theory domains. SASMINT uses a weighted sum of different metrics and algorithms in order to be applicable to different types of strings. The appropriate weight for each metric can be identified semi-automatically by means of SASMINT's Sampler component. SASMINT provides an effective GUI for users to modify and accept match and integration results. Furthermore, utilizing the result of schema matching for schema integration, defining a set of rules for automatic integration, and providing a derivation language for representing the results of both matching and integration are other contributions of the SASMINT system.
REFERENCES
Mrs. Özgül Ünal received her Bachelor's degree from the Department of Computer Engineering at Middle East Technical University in Turkey and a Master's degree from the Department of Information Systems at the same university. Since September 2002, she has been a PhD student at the Computer Science Department of the Faculty of Science of the University of Amsterdam, in the Netherlands. She has been involved in several European and Dutch national research projects, focusing on the analysis, design, and implementation of Federated Information Management Systems in the domain of Bio-Sciences. Her current research areas include the resolution of syntactic, semantic, and structural heterogeneities among database schemas in order to support (semi-)automatic schema matching and integration.
Dr. Hamideh Afsarmanesh is an associate professor at the Computer Science Department of the Faculty of Science of the University of Amsterdam in the Netherlands. At this faculty, she is also the director of the COLNET (Collaborative Network) group. She received her PhD in Computer Science from the University of Southern California (USC) in 1985, and her MSc degree, also in Computer Science, from the University of California, Los Angeles (UCLA) in 1980. Her current research focuses on the areas of Federated/Distributed Cooperative Databases, Virtual Organizations/Virtual Laboratories/Virtual Communities, Integration of Autonomous and Heterogeneous Databases, and the design and development of specialized Web-based applications for a wide variety of domains, such as Biodiversity, Manufacturing, Tele-assistance, and Distributed Control Engineering. She has directed research in more than fifteen national, European, and international projects. She has initiated and chaired several international conferences and workshops. She has published more than 150 articles in journals, books, and refereed conference proceedings in computer science research. She has co-edited more than ten books and various issues of international journals. She is the Dutch representative at the IFIP TC5, and a member of the IFIP WG5.3 and WG5.5.
BlockJoin: Efficient Matrix Partitioning Through Joins
Kunft, Andreas; Katsifodimos, Asterios; Schelter, Sebastian; Rabl, Tilmann; Markl, Volker
Publication date: 2017
Document version: Final published version
Published in: Proceedings of the VLDB Endowment
## ABSTRACT
Linear algebra operations are at the core of many Machine Learning (ML) programs. At the same time, a considerable amount of the effort for solving data analytics problems is spent in data preparation. As a result, end-to-end ML pipelines often consist of (i) relational operators used for joining the input data, (ii) user defined functions used for feature extraction and vectorization, and (iii) linear algebra operators used for model training and cross-validation. Often, these pipelines need to scale out to large datasets. In this case, these pipelines are usually implemented on top of dataflow engines like Hadoop, Spark, or Flink. These dataflow engines implement relational operators on row-partitioned datasets. However, efficient linear algebra operators use block-partitioned matrices. As a result, pipelines combining both kinds of operators require rather expensive changes to the physical representation, in particular re-partitioning steps. In this paper, we investigate the potential of reducing shuffling costs by fusing relational and linear algebra operations into specialized physical operators. We present BlockJoin, a distributed join algorithm which directly produces block-partitioned results. To minimize shuffling costs, BlockJoin applies database techniques known from columnar processing, such as index-joins and late materialization, in the context of parallel dataflow engines. Our experimental evaluation shows speedups up to 6X and the skew resistance of BlockJoin compared to state-of-the-art pipelines implemented in Spark.
## 1. INTRODUCTION
Requirements for data analytics applications based on machine learning techniques have changed over the last years. End-to-end ML pipelines nowadays go beyond pure linear algebra and often also include data preparation and transformation steps (ETL) that are best defined using relational algebra operators. Data scientists construct feature-vector representations for training ML models by filtering, joining, and transforming datasets from diverse data sources [39] on a daily basis. This process is often repeated many times in an ad-hoc fashion, as a variety of features are explored and selected for optimal predictive performance.
Such pipelines are most conveniently expressed in languages with rich support for both ETL and ML tasks, such as Python or R, but these implementations do not scale. In enterprise setups, the source data usually resides in a data warehouse. One possible strategy in such situations is to run the ETL part of the pipeline in situ, and the ML part in a specialized engine such as SciDB [11] or RasDaMan [6]. This approach has two drawbacks. First, moving data between engines is an expensive operation that is frequently repeated as the pipeline is refined. Second, it does not allow warehouse and external data sources to be joined easily.
Parallel dataflow engines such as Spark [38] or Hadoop [5] offer a more flexible execution infrastructure that does not suffer from the problems outlined above. Initially developed for ETL-like workloads, these systems have been increasingly used by practitioners to implement ML algorithms [26, 8, 33]. To support scalable execution of ML workloads, the functionality of established libraries for scalable linear algebra, such as ScaLAPACK [13], is being implemented on top of parallel dataflow systems by projects like SystemML [17], MLlib [26], Apache Mahout Samsara [33] and Pegasus [19].
A common runtime engine avoids data transfer, but the mismatch in data representation still manifests itself when executing mixed analytics pipelines. While dataflow engines typically row-partition large datasets, scalable linear algebra operators are implemented on top of block-partitioned, or blocked, matrices. The difference in the partitioning assumptions results in a re-partitioning barrier whenever a linear algebra operator follows a relational one. The dataflow engine has to re-partition the entire row-partitioned dataset into a block-partitioned matrix. One possible solution would be to execute linear algebra operators on row-partitioned matrices. Although this performs well for operations such as row sums (shown in Figure 1), superlinear operations such as matrix multiplication that consume multiple rows and/or columns become very inefficient [17]. For computational and storage efficiency, the majority of scalable linear algebra frameworks perform matrix multiplications on blocked matrices (e.g., [17, 18, 19]).
In this paper, we demonstrate the optimization potential of fusing relational and linear algebra operators. As a first step, we focus on a common pattern – a relational join, followed by a per-element transformation for feature extraction and vectorization, and a subsequent matrix conversion. To reduce the total shuffling costs of this operator chain, we propose BlockJoin, a specialized distributed join algorithm that consumes row-partitioned relational data and directly produces a block-partitioned matrix. We focus on the major drawback posed by an independent operator chain: the intermediate result of the join, row-wise partitioned by the join key, is discarded immediately to form a block-partitioned matrix. This materialization implies the risk of running out of memory when the join result becomes large, and more importantly results in an unnecessary shuffle operation for the join. BlockJoin avoids the materialization of the intermediate join result by applying the vectorization function and the successive block partitioning independently to both relations. Analogous to joins that have been proposed for columnar databases [24, 9, 1], BlockJoin builds on two main concepts: index joins and late materialization. More specifically, we first identify the matching tuple pairs and their corresponding row indexes in the matrix by performing a join on the keys and tuple-ids of the two relations (analogous to TID-Joins [25]). Based on the gathered metadata, we apply the vectorization function separately to the matching tuples of both relations, and repeat this for the block partitioning, without having to materialize the intermediate join result. Therefore, we can apply different materialization strategies for the matrix blocks based on the shape of the input relations, namely Early and Late materialization. Our experiments show that BlockJoin performs up to 6× faster than the state-of-the-art approach of conducting a row-wise join followed by a block-partitioning step.
Overall, we make the following contributions:
- We demonstrate the need for implementing relational operators producing block-partitioned datasets (Section 2.2).
- We propose BlockJoin, a distributed join algorithm which produces block-partitioned results for workloads mixing linear and relational algebra operations. To the best of our knowledge, this is the first work proposing a relational operator for block-partitioned results (Section 3).
- We provide a reference implementation of BlockJoin based on Apache Spark [38] with two different block materialization strategies (Section 4).
- We provide a cost model to select the best suited materialization strategy based on the shape of the input tables (Section 3.4).
- We experimentally show that BlockJoin outperforms the baseline approach in all scenarios and, depending on the size and shape of the input relations, is up to 6× faster. Moreover, we show that BlockJoin is skew resistant and scales gracefully in situations when the state-of-the-art approach fails (Section 5).
## 2. BACKGROUND
In this section, we introduce the blocked matrix representation. We also present a running example used throughout the paper and discuss the state-of-the-art implementation for dataflow systems.
### 2.1 Block-Partitioned Matrix Representation
Distributed dataflow systems use an element-at-a-time processing model in which an element typically represents a line in a text file or a tuple of a relation. Systems that implement matrices in this model can choose among a variety of partitioning schemes (e.g., cell-, row-, or column-wise) for the matrix. For common operations such as matrix multiplications, all of these representations incur huge performance overheads [17]. Block-partitioning the matrix provides significant performance benefits. This includes a reduction in the number of tuples required to represent and process a matrix, block-level compression, and the optimization of operations like multiplication on a block-level basis. These benefits have led to the widespread adoption of block-partitioned matrices in parallel data processing platforms [17, 18, 19]. A blocked representation splits the matrix into submatrices of fixed size, called blocks. These blocks become the processing elements in the dataflow system.
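To make this representation concrete, the sketch below shows the minimal bookkeeping behind such a layout in plain Scala. The class and function names are ours for illustration, not SystemML's actual API.

```scala
// A block-partitioned matrix as a collection of fixed-size submatrices.
// Each block is addressed by its (blockRow, blockCol) grid position.
case class BlockId(blockRow: Int, blockCol: Int)
case class Block(id: BlockId, values: Array[Double]) // b*b cells, row-major, zero-padded at borders

// Grid dimensions needed to cover an n x m matrix with square blocks of size b.
def gridSize(n: Long, m: Long, b: Int): (Long, Long) =
  ((n + b - 1) / b, (m + b - 1) / b)

// The block and in-block position that matrix cell (i, j) maps to.
def cellToBlock(i: Long, j: Long, b: Int): (BlockId, (Int, Int)) =
  (BlockId((i / b).toInt, (j / b).toInt), ((i % b).toInt, (j % b).toInt))
```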
### 2.2 Motivating Example
Our running example is learning a spam detection model, a common use case in e-commerce applications. Assume that customers write reviews for products, some of which are spam, and we want to train a classifier to automatically detect the spam reviews. The data for products and reviews are stored in different files in a distributed filesystem. We need the attributes from both relations to build the features for the model in our ML algorithm. Therefore, we first need to join the records from these tables to obtain reviews with their corresponding products. Next, we need to transform these product-review pairs into a suitable representation for an ML algorithm. To this end, we apply a user defined function (UDF) that transforms the attributes into a vector representation. Finally, we aggregate these vectors into a distributed, blocked feature matrix to feed them into an ML system (such as SystemML).
Figure 2 illustrates how to execute such a workload. Listing 1 shows how it can be implemented in a distributed dataflow system like Spark, expressing a mixed linear- and relational-algebra pipeline. We will refer to this as baseline implementation in the rest of the paper. The input data resides in the tables Products (product_no, name, price, category) and Reviews (product_no, text, num_stars, is_spam). Step 1 (in Figure 2 and Listing 1) performs a foreign-key join on the product_no attribute. Step 2 applies user-defined vectorization functions to each row of the join result, to transform it into vector-based features, using techniques like feature hashing and “one-hot-encoding”.
We assume that the vector resulting from a row is a concatenation of the vectorization of the input tuples of the participating relations. Step 3 is split into three sub-steps that are necessary to form a block-partitioned matrix: (a) creates a sequential index for the join result that is used as the row index for the matrix. This is necessary because dataflow engines, in contrast to database systems, do not provide a unique tuple identifier. (b) builds the initial matrix blocks by splitting the rows at block boundaries. (c) merges partially filled blocks (which span multiple data partitions) in a final aggregation step.
```
val Products: Dataset[Product] = // read csv ...
val Reviews:  Dataset[Review]  = // read csv ...

// Step 1: foreign-key join on product_no
val JoinResult = Products.joinWith(Reviews,
  Products("product_no") === Reviews("product_no"))

// Step 2: vectorize each tuple in the join result
val Vectorized = JoinResult.map { case (p, r) =>
  val pv = vectorizeProduct(p)
  val rv = vectorizeReview(r)
  pv ++ rv
}

// Step 3: convert 'Vectorized' into blocked matrix 'M'
val M = toMatrix(Vectorized)
// Train the ML model with matrix 'M' ...
```
Listing 1: Code snippet for the running example.
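The toMatrix call in the listing hides the three sub-steps (a)–(c) described above. The following RDD-based sketch makes them explicit; it emits one record per cell for readability, whereas the actual baseline builds full blocks per partition via mapPartitions (Section 4). The function name and signature are ours, not the paper's code.

```scala
import org.apache.spark.rdd.RDD

// Baseline conversion of row-partitioned feature vectors into a blocked matrix.
// 'b' is the block size; all vectors are assumed to have the same width.
def toMatrixBaseline(vectorized: RDD[Array[Double]],
                     b: Int): RDD[((Long, Long), Array[Double])] =
  vectorized
    .zipWithIndex()                      // (a) assign a sequential row index
    .flatMap { case (row, rowIdx) =>     // (b) split rows at block boundaries
      row.zipWithIndex.map { case (value, colIdx) =>
        ((rowIdx / b, colIdx.toLong / b),                 // destination block
         (((rowIdx % b) * b + colIdx % b).toInt, value))  // offset within block
      }
    }
    .groupByKey()                        // (c) merge partially filled blocks
    .mapValues { cells =>
      val block = new Array[Double](b * b)
      cells.foreach { case (offset, value) => block(offset) = value }
      block
    }
```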
## 3. BLOCKING THROUGH JOINS
In this section, we present BlockJoin, our chained, context-aware operator, leveraging the example of Figure 2. We first introduce a baseline implementation of independent operators for that example, which cannot leverage join metadata for the blocking phase. We then detail BlockJoin in Sections 3.1 and 3.2, and discuss how BlockJoin improves upon the baseline.
#### Drawbacks of an independent operator chain.
The baseline implementation, which uses independent operators, is illustrated in Figure 2 and proceeds as follows: We first partition Products `p` by its primary key `p.product_no` and Reviews `r` by its foreign key `r.product_no` to execute the distributed join. After vectorizing the join result Vectorized `v`, we introduce a consecutive index (e.g., via the zipWithIndex method in Spark), called row-idx, to uniquely identify each tuple. Then, we split each `v` of Vectorized into its components based on the col-idx, and re-partition by the block index of the resulting matrix. The block index is obtained by the function:
\[
\text{block-idx}(v, \text{col-idx}) = \left\langle \left\lfloor \frac{v.\text{row-idx}}{\text{block size}} \right\rfloor,\; \left\lfloor \frac{\text{col-idx}}{\text{block size}} \right\rfloor \right\rangle
\]
The block size represents the number of rows and columns in a block. Although matrix blocks can have arbitrary row or column sizes, we use square blocks for the sake of simplicity. One can easily derive the function for non-square blocks by substituting the block size with the number of rows and columns per block.
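In code, this is just a pair of integer divisions. A small sketch (with hypothetical helper names), including the non-square variant described above:

```scala
// Square blocks: cell (rowIdx, colIdx) belongs to block
// (rowIdx / blockSize, colIdx / blockSize).
def blockIdx(rowIdx: Long, colIdx: Long, blockSize: Int): (Long, Long) =
  (rowIdx / blockSize, colIdx / blockSize)

// Non-square blocks: substitute the block size with the number of
// rows and columns per block.
def blockIdxRect(rowIdx: Long, colIdx: Long,
                 rowsPerBlock: Int, colsPerBlock: Int): (Long, Long) =
  (rowIdx / rowsPerBlock, colIdx / colsPerBlock)
```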
We observe that an independent operator chain has to re-partition the data twice and materializes the join result, even though this result is split according to block boundaries immediately after the index assignment in Step 3a, as described before. Thus, the costly join is only executed to create a sequential index for the rows of the matching tuples in the matrix. Another danger during materialization of the join result is that the two input tables can be very wide, so we risk running out of memory when executing the join.
In the following, we introduce BlockJoin and explain how it avoids materializing the intermediate join result by introducing information exchange between the operators. We start by discussing a simplified case in Section 3.1, and extend our solution to the general case in Section 3.2.
### 3.1 BlockJoin under Simplifying Assumptions
We introduce two simplifying assumptions to explain how to independently block two relations\(^1\): (i) the join keys on both relations are consecutive integers and the relations are ordered by their keys; (ii) there is a strict 1:1 relation between the tables, that is, they have the same cardinality and the same values in their primary key. Joining two relations which fulfill these conditions is equivalent to concatenating the relations. Moreover, the cardinality of the join result will be the same as the cardinality of the two joined relations. Now, suppose that we want to block-partition the join result of the two relations. The question we are going to answer throughout the rest of this section is: Can we achieve joined and block-partitioned results without first materializing the join result in a row-partitioned representation?

\(^1\)Note that we introduce these assumptions solely for the purpose of discussing the blocking; we drop them in the next section and describe how to apply BlockJoin to general equi-join cases.
**Blocking without materializing the join result.** Given our simplifying assumptions, we can safely treat the key `product_no` as the unique, sequential identifier of each tuple. Hence, we can not only use it as the join key, but can also define `v.row-idx = v.product_no` to uniquely identify the rows in the resulting matrix. As we no longer need to materialize the join result to obtain the row-idx, we discuss how to apply the blocking function on both relations independently after the vectorization. The first component of the block-idx function, \(\lfloor v.\text{row-idx} / \text{block size} \rfloor\), assigns the row index block-row-idx of the block which the cells in a row belong to. Due to our assumptions, matching tuples already share the same row-idx. The second component, \(\lfloor \text{col-idx} / \text{block size} \rfloor\), defines the column index block-col-idx of the block which the cells of a row are split across. We can use this part of the equation on the individual tables without joining, after we apply a small change: the function has to account for the fact that the block-col-idx of the second relation has to be offset by the number of columns in the first relation (because the result concatenates the two relations). Thus, we add the offset cols(pv)\(^2\) (i.e., the number of columns of the vectorized version of the first relation p) to the column index of the second relation. Equation 1 shows the modified block-idx function that is applied on the vectorized tuples of the individual input relations.
\[
\begin{aligned}
\text{block-idx}_p(pv, \text{col-idx}) &= \left\langle \left\lfloor \frac{pv.\text{row-idx}}{b} \right\rfloor,\; \left\lfloor \frac{\text{col-idx}}{b} \right\rfloor \right\rangle \\
\text{block-idx}_r(rv, \text{col-idx}) &= \left\langle \left\lfloor \frac{rv.\text{row-idx}}{b} \right\rfloor,\; \left\lfloor \frac{\text{cols}(pv) + \text{col-idx}}{b} \right\rfloor \right\rangle
\end{aligned}
\tag{1}
\]

where \(b\) denotes the block size.
\(^2\)Section 4 details how we determine this value at runtime.
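A direct transcription of Equation 1 (with hypothetical names) makes the point that the only difference between the two relations is the column offset cols(pv):

```scala
// Blocking the first relation's vectorized tuples: no offset.
def blockIdxP(rowIdx: Long, colIdx: Long, b: Int): (Long, Long) =
  (rowIdx / b, colIdx / b)

// Blocking the second relation's tuples: columns are shifted right by the
// width of the first relation's vectors, cols(pv), before dividing by b.
def blockIdxR(rowIdx: Long, colIdx: Long, colsPv: Long, b: Int): (Long, Long) =
  (rowIdx / b, (colsPv + colIdx) / b)
```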
### 3.2 BlockJoin for the General Case
The simplifying assumption of an ordered, consecutive index on both relations from the previous section obviously does not hold in reality. In real-world scenarios, we observe primary-key (PK) – foreign-key (FK) or 1:N relationships, such as users and items, or items and reviews, or even N:N relations, as well as normalized database schemas [21]. Therefore, we cannot use the keys of the individual relations to determine the corresponding blocks of the tuples. Moreover, the size of the input relations may differ from that of the join result. For instance, a Product can match arbitrarily many Reviews. In the subsequent paragraphs, we show how BlockJoin determines which tuple pairs are part of the join result and assigns a unique row-idx to each matching tuple pair under general conditions, without materializing the join result.
**Assigning indexes to tuple pairs in the join result.**
BlockJoin first obtains a unique surrogate key `TID` for each tuple of both relations independently. The `TID` is a `<relation-id, partition-id, index-within-partition>` triple, as depicted in the bottom left part of Figure 3 (b); it uniquely identifies each row of the relations. In the next step, we generate the unique identifier `row-idx` for the rows in the resulting matrix M. In order to assign the identifier to the matching tuples of both relations, we design a variant of the index-join [15, 25]. The main idea of the index-join is to project the key and `TID` columns of the two relations to determine matching tuples without materializing the payload columns. As depicted in Figure 3, Step 1 projects and collects the `<key, TID>` pairs from both relations on the driver. The driver thus holds all keys of the two relations and executes an `index-join`. Based on the result, we assign the `row-idx` to the matching tuples. We call this phase the `join-kernel`, following the nomenclature of [36]. In Step 3, we make the block metadata, which contains the matched `<key, TID>` pairs and `row-idx`s, available on all nodes for the subsequent `fetch-kernel` phase. Based on the information in the metadata, we prune all non-matching tuples and apply the vectorization function to the remaining tuples of each relation separately. While we can use the very same `block-idx` function described in Section 3.1, Equation 1, we elaborate on two different strategies for efficient blocking, enabled by applying the `row-idx` separately, in the next section.
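The join-kernel can be sketched as follows, assuming the `<key, TID>` projections are small enough to collect on the driver and flattening the TID triple into a single Long (as produced by, e.g., Spark's zipWithUniqueId). The function and its types are our illustration:

```scala
// Join the <key, TID> projections on the driver and assign a consecutive
// row-idx to every matching tuple pair. The resulting map is the block
// metadata that is broadcast to all nodes for the fetch-kernel.
def joinKernel(keysP: Array[(Long, Long)],  // (join key, TID) of the PK relation
               keysR: Array[(Long, Long)]   // (join key, TID) of the FK relation
              ): Map[(Long, Long), Long] = {
  val byKey = keysP.groupBy(_._1)           // index the (usually smaller) PK side
  val matches = for {
    (key, tidR) <- keysR
    (_, tidP)   <- byKey.getOrElse(key, Array.empty[(Long, Long)])
  } yield (tidP, tidR)
  matches.zipWithIndex
    .map { case (pair, rowIdx) => pair -> rowIdx.toLong }
    .toMap
}
```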
### 3.3 Block Materialization Strategies
Figure 4 (a) sketches the two materialization strategies for BlockJoin. Both approaches share the initial Steps 1 to 3 from Figure 3, explained in the previous section. The main difference stems from the block materialization strategy we use for the values emitted in Step 4, the fetch-kernel.
Our goal now is to shuffle the row-splits\(^3\) of each row to the nodes responsible for the splits’ destination blocks. A very important consideration is that one row-split may need to fill multiple rows in the same block and might be part of multiple blocks. For instance, consider a row-split of a product which matches multiple reviews. If there are 10 matches and the block size is 5, that product’s row-split will have to be duplicated 10 times and, therefore, contribute to at least 2 different blocks. Duplicates can have a huge impact on the runtime of the block materialization phase. For this reason, we devise two materialization strategies which are detailed below.
**Late Materialization.** The left side of Figure 4 (b) depicts the execution flow of late materialization. The key idea behind late materialization is to reduce the number of row-splits emitted by sending each split only once per destination block, even if the row-split occurs multiple times in the respective block. The duplicates of each split are materialized on the receiver side for each block. We can apply receiver-side materialization because, unlike the baseline, we are not forced to materialize the join result to obtain the sequential row-idx. More specifically, each row emitted from the fetch-kernel is split into multiple `<blk-idx, row-offset, duplicates, row-split>` tuples. Since there might be multiple matches for a key, we store the number of duplicates per block instead of materializing them early. The row-offset defines the first row index of the row-split in the destination block. On the destination node, we merge the row-splits of the same blk-idx and create the complete blocks by materializing their duplicates. Note that we create complete blocks in one pass, even when they contain data from both relations (as can be seen for the green cells from the Reviews table).
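As an illustration, the record emitted per row-split and the receiver-side expansion of duplicates could look as follows (a simplified sketch that assumes each split is aligned to the first column of its destination block):

```scala
// One record per row-split and destination block; duplicates are counted,
// not materialized, until the block is assembled on the receiver.
case class RowSplit(blkIdx: (Long, Long), // destination block
                    rowOffset: Int,       // first row this split fills in the block
                    duplicates: Int,      // how many consecutive rows it fills
                    values: Array[Double])

// Receiver side: merge all splits of one block, expanding duplicates.
def assembleBlock(splits: Seq[RowSplit], b: Int): Array[Double] = {
  val block = new Array[Double](b * b)
  for (s <- splits; d <- 0 until s.duplicates; j <- s.values.indices)
    block((s.rowOffset + d) * b + j) = s.values(j)
  block
}
```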
**Early Materialization.** The right side of Figure 4 (b) depicts the execution flow of early materialization. Instead of separating the rows from the fetch-kernel into row-splits immediately, we emit a single `<row-idx, duplicates, row>` tuple per row. Rows matching multiple times are not yet materialized; again, we emit one tuple for all duplicates within a block. In the next step, we range-partition the tuples by their row-idx and sort within each partition. A custom partitioner ensures that tuples belonging to the same block end up in the same partition. Next, we create the blocks and materialize the duplicates for each relation separately. Note that we do not have to shuffle, but potentially create partial blocks (as can be seen for the blocks with column index 1). In the last step, we union the relations and merge the partial blocks.
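The invariant the custom partitioner must guarantee is that a block-row never spans two partitions. A simplified sketch of such a partitioner is shown below; it assigns whole block-rows to partitions by modulo instead of range-partitioning them, but preserves the same invariant:

```scala
import org.apache.spark.Partitioner

// All rows of block-row (rowIdx / b) land in the same partition, so no
// partition crosses a block boundary. Keys are the Long row indexes.
class BlockAlignedPartitioner(override val numPartitions: Int, b: Int)
    extends Partitioner {
  override def getPartition(key: Any): Int = {
    val rowIdx = key.asInstanceOf[Long]
    ((rowIdx / b) % numPartitions).toInt
  }
}
```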
**Applicability to the baseline.** While we could apply the presented materialization strategies in the baseline as well, we would not gain any advantage. The main benefit of late materialization is the receiver-side materialization of duplicates (e.g., a PK matching multiple FKs). In the baseline, though, we materialize all duplicates during the distributed join phase. As a result, we would shuffle the same amount of data as in the baseline, but with a much larger number of tuples, as late materialization splits the rows. The advantage of early materialization stems from the custom partitioner, which ensures partitions that do not span block boundaries. In BlockJoin, we can afford the shuffle needed for this partitioner because we do not shuffle for the distributed join that is required in the baseline. Applying the partitioner in the baseline would therefore introduce an additional shuffle step, making it worse than the baseline.
### 3.4 Choosing a Materialization Strategy
To make these trade-offs between late and early materialization more concrete, we compare the two materialization strategies against the baseline implementation described in Section 2.2. We base our comparison on the cost model shown below, using the symbols from Table 1. For brevity and simplicity, we focus only on the amount of data exchanged and the number of tuples shuffled during the shuffling phases, and make the simplifying assumption that all tuples of the two input relations survive the join, which also reflects the worst case for our materialization strategies.
Table 1: Symbols used in the cost model.

| Symbol | Meaning |
| --- | --- |
| $P, R$ | Input tables of the join |
| $J$ | Join result |
| $\lvert T \rvert$ | Number of tuples in relation $T$ |
| $\text{cols}(T)$ | Number of columns in relation $T$ |
| $\text{bytes}(T)$ | Size (bytes) of a tuple in relation $T$ |
| $b$ | Number of rows/columns per square block |
Late materialization emits multiple row-splits per row and thus increases the number of tuples to be shuffled. On the other hand, early materialization emits full (and materialized) blocks at the expense of an extra range partitioning on complete rows and a local sorting step. Since the blocks under early materialization are complete, apart from blocks containing columns from both relations (whose number equals the number of row-wise blocks), only the latter have to be considered during the merging process.
**Size of Shuffled Data.**

\[
\begin{aligned}
\text{baseline} &\rightarrow \underbrace{|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)}_{\text{join}} + \underbrace{|J| \cdot \text{bytes}(J)}_{\text{block partitioning}} \\
\text{early} &\rightarrow \underbrace{|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)}_{\text{range partitioning}} + \underbrace{|J| \cdot \text{bytes}(J)}_{\text{merge blocks}} \\
\text{late} &\rightarrow \underbrace{|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)}_{\text{merge blocks}}
\end{aligned}
\]
Deriving the size of shuffled data for the baseline implementation is straightforward: we execute a shuffle in order to perform the join $|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)$ and another shuffle of the join results for block partitioning $|J| \cdot \text{bytes}(J)$. The early materialization strategy has to shuffle the input data in order to range partition it $|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)$ and shuffle the join result in order to merge the blocks $|J| \cdot \text{bytes}(J)$, as we might have partially filled blocks. Finally, the late materialization strategy only needs to shuffle once to merge all row-splits in their corresponding block $|P| \cdot \text{bytes}(P) + |R| \cdot \text{bytes}(R)$. The late materialization strategy is expected to have the least amount of data shuffling. However, the amount of tuples exchanged differs among the three implementations.
The number of tuples exchanged for the baseline implementation includes the relations themselves $(|P| + |R|)$, plus the total number of blocks that form the final matrix. The number of blocks is defined by the rows in the join result divided by the block size $(\frac{|J|}{b})$ and the number of columns, divided by the block size $(\frac{cols(J)}{b})$. The early materialization strategy will require an extra $(\frac{|J|}{b})$ for the partial blocks that span both relations (detailed in the Block Materialization paragraph of Section 4). In the late materialization strategy, we emit each matching row of both relations $(|J|)$ multiplied by the number of splits per row $(\frac{cols(J)}{b})$. Intuitively, late materialization always emits more tuples than early materialization and the baseline, because each row of the result is split while the early materialization creates (partial) blocks before shuffling.
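Reading the paragraph above literally, the tuple counts can be summarized as follows (our transcription, with ceilings added to account for partial blocks):

\[
\begin{aligned}
t_{\text{baseline}} &= |P| + |R| + \left\lceil \frac{|J|}{b} \right\rceil \cdot \left\lceil \frac{\text{cols}(J)}{b} \right\rceil \\
t_{\text{early}} &= |P| + |R| + \left\lceil \frac{|J|}{b} \right\rceil \cdot \left\lceil \frac{\text{cols}(J)}{b} \right\rceil + \left\lceil \frac{|J|}{b} \right\rceil \\
t_{\text{late}} &= |J| \cdot \left\lceil \frac{\text{cols}(J)}{b} \right\rceil
\end{aligned}
\]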
**Estimating Cost.** Estimating the runtime of BlockJoin boils down to estimating a cost function which takes into account the amount of shuffled data as well as the number of shuffled tuples for both materialization strategies. For early materialization, the regression $r_e = [d_e(\theta) \; t_e(\theta) \; 1] \cdot w_e$ predicts the runtime $r_e$. Here, $\theta$ denotes a vector that contains the data statistics from Table 1 for a particular join input, $d_e(\theta)$ and $t_e(\theta)$ refer to the previously presented functions for computing the data size and the number of shuffled tuples for early materialization, and $w_e$ denotes the learned regression coefficients. Analogously, a regression model $r_l = [d_l(\theta) \; t_l(\theta) \; 1] \cdot w_l$ can be trained to predict the runtime $r_l$ for late materialization. The obtained regression coefficients depend on the actual cluster settings. Therefore, a couple of experiments must be executed to obtain a sample of runtimes for different data characteristics before the model can be fitted. Afterwards, the prediction model can be used to select the best suited materialization strategy for subsequent runs. We present such an instance of a trained model in our experiments and showcase its accuracy.
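A minimal sketch of this selection step, assuming the coefficients have already been fitted offline on cluster-specific runtime samples (names are ours):

```scala
// r = [d t 1] . w, where d is the shuffled data size, t the number of
// shuffled tuples, and w the learned regression coefficients.
def predictRuntime(d: Double, t: Double, w: Array[Double]): Double =
  d * w(0) + t * w(1) + w(2)

// Pick the strategy with the lower predicted runtime. As a fallback without
// training data, cols(PK) > cols(FK) favors late materialization (Section 5.1).
def chooseStrategy(dEarly: Double, tEarly: Double, wEarly: Array[Double],
                   dLate: Double, tLate: Double, wLate: Array[Double]): String =
  if (predictRuntime(dEarly, tEarly, wEarly) <=
      predictRuntime(dLate, tLate, wLate)) "early" else "late"
```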
Using this model requires statistics on the input tables and the join result. We can integrate the model into an optimizer, which creates an optimized plan statically before job execution (e.g., Catalyst in Spark), but have to rely on table statistics and estimations for the join result to select the best strategy.
### 3.5 Extensibility
So far, we have only considered equality joins. However, BlockJoin and the general idea of assigning unique identifiers without materializing the intermediate join result are independent of the actual join algorithm that runs locally. Thus, extending BlockJoin to theta and n-ary joins boils down to implementing a variation of the index-join used to determine the matching tuples. Theta joins can be implemented by a projection of the columns required for predicate evaluation and a modified version of the shared metadata, to identify matching tuples and conduct row index assignment in the fetch-kernel. Extending BlockJoin to n-ary joins is also possible once we identify the join results. However, this extension requires further research regarding the choice between multiple binary joins or a solution based on multiway join algorithms, which we leave to future work.
## 4. IMPLEMENTATION ASPECTS
In this section, we present important technical aspects to consider when implementing BlockJoin in distributed dataflow systems.
Row Index Assignment. In order to block partition the join result, we need to assign consecutive row indexes to the join result. In the baseline implementation, we conduct this assignment on the distributed join result. For that, we leverage Spark’s zipWithIndex operation, which counts the number of elements of each partition in the distributed dataset, and uses the result to assign consecutive indexes in a second pass over the data. In BlockJoin, we create the unique row indexes during the join-kernel based on the matching tuples and make them available as part of the metadata. Therefore, the assignment of row indexes to emitted tuples in the fetch-kernel phase can be done on each relation individually, without prior materialization of the join result.
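For reference, the two passes behind such an index assignment can be sketched as follows (a simplified stand-in for zipWithIndex); this is exactly the extra work on the join result that BlockJoin avoids by creating the row indexes in the join-kernel:

```scala
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Pass 1: count the elements per partition; pass 2: assign consecutive
// indexes using the per-partition start offsets.
def assignRowIdx[T: ClassTag](rows: RDD[T]): RDD[(T, Long)] = {
  val counts = rows
    .mapPartitionsWithIndex((pid, it) => Iterator((pid, it.size.toLong)))
    .collect().sortBy(_._1).map(_._2)
  val offsets = counts.scanLeft(0L)(_ + _) // start index of each partition
  rows.mapPartitionsWithIndex { (pid, it) =>
    it.zipWithIndex.map { case (row, i) => (row, offsets(pid) + i) }
  }
}
```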
Block Materialization. In the baseline implementation, we create the blocks after assigning the row index. To reduce the number of emitted partial blocks, the baseline uses a mapPartitions function to create the matrix blocks. This function provides an iterator over the whole partition inside the UDF. Due to the sequential row index, all rows that belong to a certain block come one after the other, which allows us to create full blocks before emitting. Therefore, we only have to combine blocks that are split row-wise between two partitions in the succeeding merge step.
As discussed in Section 3, we create the correct block-idx separately on both tables in BlockJoin. Figure 5 shows the assignment of the block index in detail. We create partial blocks for blk-col-idx 1 in both relations, as the block is split across both relations. In the late materialization approach, we have to merge all individual tuples on the receiver side, which reduces the data that needs to be shuffled but increases the number of tuples in certain scenarios (as discussed in Section 3.3). In the early materialization approach, we also use a mapPartitions function to create full blocks on the sender side. As we cannot guarantee sorted row indexes for at least one of the relations, we would risk emitting partially filled blocks, as consecutive tuples might belong to different blocks. Therefore, we provide a custom partitioner which creates partitions that do not cross block boundaries. Afterwards, we sort by the row index within each partition to create consecutive blocks. Thus, we only have to merge blocks that contain columns from both relations, e.g., blocks with column index blk-col-idx 1 in Figure 5.
Determining Matrix Dimensions. In order to assign the vectorized data to matrix blocks, it is necessary to know the dimensionality of the vectors returned by the user-defined vectorization functions upfront. One can either require the user to specify this in the vectorization functions, or alternatively fetch a single random tuple from each relation once, apply the vectorization function, and record the dimensionality of the resulting vector.
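The second option amounts to a few lines; the sketch below probes the first tuple of each relation (rather than a truly random one) and applies the user-defined vectorization functions on the driver:

```scala
import org.apache.spark.rdd.RDD

// Determine the vector widths -- and thereby the column offset cols(pv) of
// the second relation -- by vectorizing one sample tuple per relation.
def probeDimensions[P, R](products: RDD[P], reviews: RDD[R],
                          vectorizeP: P => Array[Double],
                          vectorizeR: R => Array[Double]): (Int, Int) =
  (vectorizeP(products.first()).length, vectorizeR(reviews.first()).length)
```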
## 5. EXPERIMENTAL EVALUATION
In this section, we present a comprehensive experimental evaluation comparing BlockJoin with late and early materialization against a baseline approach on dense and sparse data. As discussed before, the baseline represents the current state of the art; we use Spark to execute the join of the tables, and then SystemML to create a blocked matrix representation from the join result, without staging the intermediate results on HDFS.
Sparsity mainly affects the data size and runtime, but not the overall performance trend of the algorithms. For this reason, we show the results for sparse and dense data for each experiment in the same plot. Throughout the experiments, sparse data is indicated with hatched bars in the front, whereas dense data is indicated with solid bars.
**Setup.** We used a local cluster with up to 20 worker nodes connected with 1 GBit Ethernet connections. Each machine is equipped with a quad-core Intel Xeon X3450 at 2.67 GHz and 16GB of RAM. We implemented BlockJoin on Spark 1.6.2 (each Spark worker has 4 slots) and store the initial data in HDFS 2.4.1. Every experiment is executed seven times, and we report the median execution time. For the experiments on dense data we use 20 worker nodes, resulting in a degree of parallelism (DOP) of 80, while we use 10 worker nodes (DOP = 40) for sparse data.
**Dataset.** In order to have full control of the shape, size, and content of the input tables, we evaluate BlockJoin on synthetic datasets. The simulated tables, called PK and FK, have the following schema: PK (key, r1, ..., rn) and FK (key, s1, ..., sm). We use a vectorization function that converts r1, ..., rn to an n-dimensional double-precision vector, and analogously s1, ..., sm to an m-dimensional double-precision vector. We conducted the experiments for dense and sparse (10% non-zero values) vectors and vary the number of rows and columns. If not stated otherwise, the tables have a 1:N primary key – foreign key relation. We use square blocks of 1000 × 1000, as this was shown to provide a good trade-off between computational efficiency and network traffic [17]. The corresponding sizes of the tables are given in Table 2.
In addition, we provide experiments on the publicly available Reddit Comments\(^4\) dataset. It consists of line-separated JSON entries that represent comments on the news aggregator website Reddit. Each JSON entry contains a single comment with additional information such as the author, votes, category, etc. We split the raw data into a comment and an author CSV file, introducing a primary key – foreign key relation on author_id, and use these as input to our experiments. The final join input consists of ~30 million comments (5.1 GB) and ~1.5 million authors (29.9 MB).
**Data Distribution.** Many real-world datasets exhibit extreme skew in the distribution of data points per object observed (e.g., reviews per product), and it has been shown that this skew increases over time in many datasets [23]. When joining with such datasets, a small number of tuples from the skewed relation will produce a very large amount of tuples in the join result. For this reason, we conduct experiments with uniform as well as power-law distributed foreign keys (with \(\alpha = 0.01\)).
### 5.1 Effect of Table Shape and Size
In this experiment, we evaluate the scalability of BlockJoin for different numbers of columns. We fix the rows to 100K in the PK and 1M in the FK table. All rows in the FK table match at least one key in the PK table. Therefore, we concentrate on the effects of the block materialization strategies, as BlockJoin cannot gain performance by pruning non-matching tuples (an expected effect of the fetch-kernel phase).
**Scaling PK Columns.** In this experiment we fix the number of columns in the FK table to 5K, while we scale the PK table, from 5K to 50K columns, until it reaches the same data size as the FK table.
Figure 6 (a) depicts the results for uniformly distributed foreign keys. A first observation is that Late Materialization scales much better and is up to 2.5\(\times\) faster than the baseline for sparse and dense data. Late Materialization materializes duplicates (primary keys matching multiple foreign keys) at the receiver side. Thus, it only needs to shuffle data equal to the size of the input tables. In contrast, both Early Materialization and the baseline approach materialize the duplicates (the baseline approach in the join, and Early Materialization before merging partial matrix blocks). Therefore, they shuffle up to 847GB + 84.7GB (for 50K dense columns), roughly 10\(\times\) more data compared to Late Materialization. Even though the baseline and Early Materialization shuffle the same amount of data, Early Materialization outperforms the baseline by 10\%. The faster execution of Early Materialization is due to (i) the independent blocking of the two relations without materializing the join result, and (ii) our custom partitioner (see Section 4), which never splits rows sharing the same blk-row-idx across different partitions.
Figure 6 (b) shows the same experiment for power-law distributed foreign keys. Note that the baseline approach fails to perform the join for more than 5K columns of dense data. We experienced an internal Spark error, while it tried to read partitions to execute the join on the receiver side.
\(^4\)http://files.pushshift.io/reddit/comments/
This is due to the heavily skewed data, which results in almost all of the work ending up on one worker node, which is unable to gather and sort the received partitions. For Late Materialization, we observe that the algorithm is not affected by data skew and outperforms the baseline by up to 4\(\times\) for sparse data. The effect of skewed keys on Early Materialization is not as severe as for the baseline, but the heavily increased number of duplicates still decreases its performance, as the PK table holds the majority of the data.
**Scaling FK Columns.** Figure 7 (a) depicts the inverse experiment, with 5K columns in the PK table and a scaling number of columns in the FK table. This time, Early Materialization outperforms Late Materialization for dense data and performs up to 2\(\times\) better than the baseline. Note that in this experiment, (i) the FK table grows very large, up to 846.7GB for dense data, in comparison to the previous experiment, while (ii) the resulting matrix sizes are exactly the same. Thus, as the PK table accounts for the duplicates, Late Materialization does not save much by late duplicate materialization. However, the number of shuffled FK tuples increases with the number of columns in the FK table. Late Materialization emits up to 50M (1M rows split into 50K columns divided by 1K block size) row-splits, while only 1M rows are exchanged by Early Materialization and the baseline.
Figure 7 (b) shows the experiment with power-law distributed foreign keys. For the two versions of BlockJoin, we observe almost the same runtime as for uniformly distributed keys, as the data size is dominated by the FK table. Therefore, the impact of the skewed keys on Early Materialization is minor, and Late Materialization does not save much data exchange. This time, the baseline approach fails to finish the experiment for more than 25K sparse columns due to the increased size of the FK table.
**Experiment Conclusion.** When the PK table size dominates the data exchange, Late Materialization performs up to 4× better than the baseline and outperforms Early Materialization. However, when the FK table dominates the data exchange and the duplication of row-splits is no longer an issue, Early Materialization can be up to 1.8× faster than Late Materialization and 2× faster than the baseline. Finally, we were unable to conduct all experiments for the baseline in case of skewed data, and the performance of Late Materialization is generally less affected by the data distribution.
**Cost Model Evaluation.** We trained the regression models described in Section 3.4 on the experiment results using dense input data. Figure 8 depicts the estimated runtime in relation to the number of columns in the two input relations. The number of rows is the same as in the experiments (100K for PK and 1M for FK). We can observe that the model reflects the measured runtimes. While the model can serve as a binary classifier to select the best suited strategy for other experiments, we are aware that we need more data to fit the model thoroughly. Another interesting observation is that we can use the column distribution as a simplified measure to select the strategy (cols(PK) > cols(FK) favors Late Materialization, and vice versa). This ratio turns out to be a pretty good estimation model and can be used as a fallback in an optimizer as long as not enough training data is available to fit the model.
**Detailed runtimes of the different phases.** In Figures 9 and 10, we show the runtime of each of the phases – vectorize, join, and blocking – for the experiments with dense data in Figures 6 and 7, respectively. Due to operator chaining in Spark, we had to measure the phases in separate jobs to obtain their individual runtimes. Vectorize – We observe roughly equal runtimes, which is expected, as the same vectorization function is performed for both the baseline and BlockJoin.
Join – We observe different behavior depending on whether we scale the PK or FK columns. Scaling the PK columns (Figure 9), we see only a minor speedup for BlockJoin in the case of uniformly distributed keys. For power-law distributed keys, the baseline fails to execute the join beyond 5K columns. As expected, BlockJoin is not sensitive to skewed keys, and the join times are equal to the cases with uniformly distributed keys. Scaling the FK columns (Figure 10), we observe a speedup of up to 3×. Compared to Figure 9, we have to shuffle much more data, as we increase the FK columns. BlockJoin degrades gracefully with an increasing number of columns, as we have to read the data to project the join keys. Again, the baseline fails to execute the join for power-law distributed keys, while BlockJoin is not affected by skew.
Blocking – We observe performance gains of up to 3× for the best suited materialization strategy. This applies mainly to late materialization, as the benefits are rather small in cases where early materialization is better. The performance gains of early materialization are due to the block-size-aware partitioning. Late materialization gains performance due to the receiver-side materialization of duplicates. Thus, we observe a huge performance gain when scaling the PK columns. The behavior reflects the assumptions of our cost model: when scaling the PK columns, Late Materialization is superior, as it avoids the materialization of the duplicates in the PK table and thus shuffles considerably less data. When we scale the FK columns, Late Materialization cannot gain much from receiver-side materialization, as the majority of the data resides in the FK table, but has to shuffle far more tuples. The experiments show that BlockJoin gains performance from both an efficient, skew-resistant join and the right choice of materialization strategy.
Figure 8: Estimated cost of the regression models, trained on the experiment results from Section 5.1. The number of rows corresponds to the experiments. The data points represent the experiment results for Late Materialization and Early Materialization.
Figure 9: Split up execution times for scaling the number of columns in the PK table.
Figure 10: Split up execution times for scaling the number of columns in the FK table.
### 5.2 1:1 and M:N Relations
In this experiment, we analyze the effects of 1:1 and M:N relations between the keys in the two relations. We fix the number of rows in both tables to 100K and use sequential keys in both relations, but vary the range we draw the keys from. Figure 11 (a) depicts a 1:1 relation; each key appears once per table. Late Materialization and Early Materialization gain up to 2× speedup compared to the baseline (both for sparse and dense data). As there are no duplicates, Early Materialization is only slightly slower than Late Materialization. Figure 11 (b)–(d) illustrate M:N relations with 2, 4, and 10 duplicates per key, and therefore 200K, 400K, and 1M rows in the matrix. While the baseline has the worst performance throughout the series, we observe a declining performance of Early Materialization with an increasing number of duplicates for dense data. The runtime of Late Materialization is almost unaffected by the number of duplicates and gains up to 4× speedup compared to the baseline for dense and sparse data.
### 5.3 Effect of Selectivity
In this experiment, we investigate the performance implications of the join selectivity. This lets us observe the impact of the semi-join reduction in the fetch-kernel. We start with the same number of rows in the PK and FK tables as in the previous experiment (Section 5.1), but we restrict the number of tuples in the PK table. As a result, not all foreign keys match. This reflects a common use case where only certain values, e.g., products of a given category, are of interest.
**Scaling PK Columns.** Figure 12 shows the experiment with fixed FK columns (5K) and scaling PK columns. On the x-axis, we increase the selectivity of the filter on the PK table. The selectivity not only defines the number of rows in the PK table (from 100K down to 10K rows), but also the number of matching foreign keys, and thereby the size of the join result/matrix. Again, Late Materialization outperforms Early Materialization, but the benefits of late duplicate materialization decrease with increasing selectivity. Nevertheless, we achieve up to 4× speedups due to pruning non-matching tuples in the fetch-kernel. For power-law distributed keys (Figure 12 (b)), the baseline approach fails for PK tables with more than 5K columns of dense data, and the skew-resistant Late Materialization gains up to 6× speedups for sparse data.
**Scaling FK Columns.** Figure 13 depicts the experiments with a scaling number of columns in the FK table. Again, we can observe the performance degradation of Late Materialization compared to the experiments in Figure 12 as the number of FK columns increases. Note that increasing selectivity mitigates the performance impact of row splitting for Late Materialization, due to pruning in the fetch-kernel, and we see almost equal performance for Early Materialization and Late Materialization at a selectivity of 0.1. The semi-join reduction thereby increases the speedups from 2× at selectivity 1.0 up to 6× at selectivity 0.1. Figure 13 (b) shows the experiment with power-law distributed keys. While Late Materialization can outperform Early Materialization in the smallest configuration, pruning cannot mitigate the exploding number of tuples for larger numbers of columns in the dense case.
**Experiment Conclusion.** Restricting the primary key table to certain categories or values is a common use case. We showed that the impact of pruning in BlockJoin further increases its performance benefits compared to the baseline, up to 6×.
### 5.4 Reddit Comments Dataset
In this experiment, we evaluate our join algorithms on the Reddit Comments dataset described at the beginning of this section. To obtain the full feature set, we join the comment and author relations on author_id and vectorize the attributes of both.

Figure 13: Effect of selectivity for varying number of columns in the FK table. The number of PK columns is fixed to 5K.

Figure 14: Effect of scaling the number of columns for the comment relation.

Figure 14 depicts the results of the experiment. We fix the dimensions of the author name feature vector to 1000 and 5000, and increase the dimensions of the comments vector. The first observation is that the baseline implementation fails after the first scaling factor. This is due to an out-of-memory exception in the blocking phase. The large number of comments (~30 million tuples) exceeds the available memory in the mapPartitions operators that create partial blocks within each partition. While we also create partial blocks in the early materialization approach, we execute the blocking on the two relations separately, without prior joining. This leads to less memory pressure compared to the baseline. Late materialization is not affected by memory pressure. This, in combination with the huge difference in the relation sizes (1:30) and the relatively small sparse feature vectors, leads to an almost equal runtime for Late Materialization and Early Materialization.

## 6. RELATED WORK

**Join Optimization.** Optimized join algorithms have been well studied in the area of distributed database systems [27, 31, 35, 30, 3] and parallel dataflow systems [29, 37, 2, 28, 40] like Hadoop [5] and Spark [38], with the aim of reducing network traffic and dealing with skewed data. Efficient join implementations in main-memory databases are based on TID-joins [25, 15] and late materialization [36, 24] to achieve cache efficiency up to the latest possible point. In BlockJoin, we apply and enhance these techniques in the domain of distributed matrix computation by using index-joins to create the matching tuples without re-partitioning the tables. More specifically, we apply a semi-join reduction to prune tuples before creating the blocks, and we introduce late materialization to avoid sending rows resulting from duplicated join keys.

**Array Databases.** RasDaMan [6] is an array DBMS for multidimensional discrete data with an extended SQL query language. It stores its data as tiles, i.e., possibly non-aligned sub-arrays, as blobs in an external DBMS. While its optimizer provides a rich set of heuristic-based rewrites, to the best of our knowledge, RasDaMan does not perform joint optimization over relational and array-backed data. SciDB [11] is another array database that, in contrast to RasDaMan, provides its own shared-nothing storage layer. This allows SciDB to store and query tiles more efficiently. It provides a variety of optimizations, like overlapping chunks and compression. We see BlockJoin as complementary to the research in array databases, and its ideas could be implemented to enhance their data loading and/or transformation.

**Algebra Unifying Approaches.** Kumar et al. [21] introduce learning generalized linear models over data residing in a relational database. The authors push parts of the computation of the ML model into joins over normalized data, similar to [12]. These works target generalized linear models only, while our approach subsumes a more generic optimization that can be used in arbitrary machine learning pipelines over normalized data. MLBase [20] provides high-level abstractions for ML tasks with basic support for relational operators. Its DSL allows the optimizer to choose different ML algorithm implementations, but does not take the relational operators into account, nor does it optimize the physical representation of the data among different operators. Cohen et al. [14] execute linear algebra operations in a relational database, but do not present optimizations for block-partitioning the operators.

**ML Libraries & Languages.** SystemML's DML [7, 32, 8, 16] and Mahout's Samsara [33] provide R-like linear algebra abstractions. SystemML executes locally or distributed on Hadoop and Spark, while Samsara targets Spark, Flink, and H2O. As there is no dedicated support for relational operators, ETL has to be executed using a different set of abstractions, and both systems lose potential for holistic optimization. MLlib [26, 10], MLI [34], Cumulon [18] and Pegasus [19] employ different strategies to efficiently execute matrix operations on distributed dataflow systems, but again do not target holistic optimization over relational and linear algebra operators. We recently presented the potential for optimizations across relational and linear algebra in the context of the Lara [22] language, based on Emma [4].
7. CONCLUSION & FUTURE WORK
In this paper, we introduce a scalable join algorithm for analytics which mix relational and linear algebra operations. Our technique reduces the re-partitioning overheads which stem from the different physical representations of relations and matrices. To this end, we propose BlockJoin, an optimized join algorithm, which fuses relational joins with blocked matrix partitioning, avoiding costly re-partitioning steps. We discuss different block materialization strategies of this join operator and their cost-model driven application, depending on the shape of the input data. In an extensive experimental evaluation, we show that BlockJoin outperforms the current state of the art implementation for dataflow systems up to a factor of six, and demonstrated that BlockJoin is scalable and robust on highly skewed data.
Future work. We plan to integrate BlockJoin and other physical operators into a common intermediate representation and optimizer which will be able to reason on mixed linear and relational algebra programs [4, 22]. Moreover, we plan to explore extensions of BlockJoin, to generate a variety of block-partitioned matrices for model selection workloads that are commonly employed to find well-working features and hyperparameters for machine learning models [32]. Furthermore, we plan future research to overcome the current limitation of BlockJoin to vectorization functions that can be executed separately on both relations.
Acknowledgments. This work has been supported through grants by the German Science Foundation (DFG) as FOR 1306 Stratosphere, by the German Ministry for Education and Research as Berlin Big Data Center BBDC (funding mark 01IS14013A), and by the European Union as Horizon 2020 projects Streamline (688191) and Proteus (687691).
8. REFERENCES
NOTICE
The following document is the most complete specification of scheduling policies in the PODL language that exists. It is (as will be seen) part of a to-be-completed thesis and therefore we have some trepidation about releasing it to the world.
Therefore, you are granted access to this document under the following conditions:
• it is used solely for education/research purposes.
• it is not reproduced in any form unless this notice is attached. Please keep reproductions to a minimum, though we don’t expect you to avoid photocopying it altogether.
• you may not reference it in any papers.
When the thesis is complete (soon) you will be able to do all the things you can normally do with a thesis. If you would like to be notified when the thesis is published, send email to chris@ie.utoronto.ca.
Please direct comments and questions to Chris Beck (chris@ie.utoronto.ca) and leave Sanket alone.
© Copyright Sanket Agrawal, 1995
Chapter 4 Solving Supply Chain Scheduling Problems with ODO
This chapter presents our approach to solving the supply chain scheduling problem. We begin by briefly describing the problem, followed by a discussion of constraint-based problem solving and constrained heuristic search. We also define the initial version of ODO and our extensions to that model. We encapsulate the declaration of our search mechanism and accompanying algorithm(s) within one structure, called a problem solving policy. A meta-policy defines the control structure of the policy hierarchy, whereas an atomic-policy embodies a particular scheduling heuristic. This chapter also discusses the specification of our policies in PODL (ODO Problem Description Language).
4.1 The Supply Chain Scheduling Problem
The typical supply chain scheduling problem is defined by:
1. A set of generic process-plans (or, activity templates), one for each product. Each process plan is composed of a number of activities, all of which are completely defined within the scope of the process plan. The execution of a single instance of any process plan results in the procurement of a fixed, pre-determined quantity of the corresponding product.
Each process-plan is a conjunctive/disjunctive network of activities. In other words, each process-plan may have one or more OR and XOR decision points within its network. In the case of an XOR decision point, only one process path is to be selected from the alternatives. Thus each activity has a parameter called percentage-demand, which specifies the demand that activity satisfies, as a percentage of the total demand being satisfied by its parent process plan.
2. A set of orders for products, with each order parameterized by one product, required quantity, order due-date, release-date, latest-finish date, and required code-age. The release date specifies the earliest date the order can start being processed; the latest-finish date specifies the time by which an order must be fulfilled. These two time points set the earliest and latest time bounds on the activities corresponding to an order. The code-age specifies the minimum remaining shelf-life for the product to be allocated to an order.
3. Each activity is further defined by start-time, duration, and end-time intervals. It is linked to one or more other activities via precedence constraints. An activity also uses/consumes/produces resources. Each resource-request specifies a specific resource or a pool of resources, and the quantity required of that resource/set. Each activity which produces a resource is also parameterized by a rate of production.
4. Resources are defined as objects, which play certain roles in activities. We model three classes of roles: mechanism, material, and container roles. The mechanism role covers resources that act as mechanisms in the execution of activities, like machines, operators, factories, and so on. The material role refers to roles like raw material and inventory (work-in-process, finished products). The container role is associated with warehouses and storage areas.
Each of these role-classes has its own definition of capacity, and we specify a maximum capacity for all resources. The mechanism role is linked to simultaneity capacity, which defines how many activities a resource can support simultaneously. The material role is linked to amount or discrete quantity in number of units, whereas the container role corresponds to storage capacity.
Resources can also be aggregated into sets, which is especially useful to model inventory. For example, an inventory of pencils can be composed of two sets, each having a set-size of 200 pencils. An activity which requires 10 pencils can take them from either set, leaving it with 190 pencils.
5. We also model the shelf-life of inventory; shelf-life specifies the number of time units after the time of manufacture at which inventory spoils. Inventory which is spoilt can no longer be used for its intended purpose, and thus represents a high cost.
4.2 Constraint-Based Problem Solving
One of the general techniques for representing and solving constrained problems is called Constraint Satisfaction. According to [Mackworth 86], the problem is represented by a constraint graph (Figure 9, “Constraint Graph,” on page 63) in this approach. The nodes represent variables, and the arcs connecting these nodes represent the constraints between them. Thus, solving a constraint satisfaction problem (CSP) is tantamount to assigning values to variables such that all constraints are satisfied [Davis 94].
There are many advantages of selecting a constraint based representation for a problem. First, many problems naturally lend themselves to this representation; they are easily expressed in terms of variables and constraints. Secondly, it is easy to create variations of a problem by adding/deleting variables and/or constraints. This is important in dynamic domains where it may be necessary to solve several versions of a problem [Davis 94]. It is also an important property to solve problems in which not all variables and constraints are known prior to problem solving.
However, the most important reason to choose this representation would be to exploit its constraint graph structure during problem solving. The constraint graph enables a mechanism called consistency enforcement: when a variable’s value/domain is modified, the change can propagate through the graph and alter the actual/possible values of other variables [Davis 94]. Many consistency enforcement and search techniques have been developed for CSP’s [Gaschnig 77] [Haralick 80] [Bitner 75]. The constraint based problem structure can also be utilized by domain specific heuristics during search or propagation [Davis 94]. The following figure presents a small problem, and its corresponding constraint graph.
Problem solving is performed by sequentially selecting a variable and assigning a value to that variable. From the perspective of heuristic search, the initial state of a problem contains all variables, their associated domains, and the constraints acting over these variables. The heuristic operators select a variable at each step, and a value to assign to it. The evaluation function may be composed of some constraints, evaluation metrics, and an objective function [Fox 89].
There exist many incremental search heuristics which focus on generic overall variable and value selection strategies that will lead to an efficient solution [Davis 94]. The two most commonly encountered examples are constructive (backtracking) and repair-based search. In constructive search, each new assignment that is made is consistent with all previous assignments. Thus, at any given step, the partial solution is consistent. Backtracking occurs when there can be no complete solution with the current assignments. In this situation, one or more previously made assignments are revoked. In repair-based search, the initial state has all variables assigned, even if these assignments are inconsistent. Search then involves repeatedly changing assignments on variables, or, making repairs. The goal of this process is to reduce the number of inconsistencies. Just as constructive search can backtrack to a previous state, repair-based methods can also revert to a previous state.
4.2.1 Search as Commitment Transformation
Search is sometimes viewed as the traversal of problem-solving states via state transition operators [Davis 94]. Each search transition transforms one problem solving state into another, as shown below. Search is terminated once an acceptable state is reached. Some systems explicitly represent the search space in this manner. The “belief” is that each decision transforms the problem state to one closer to the desired state.

We can consider a typical backtracking search procedure as a transformational search. The actual transformations being performed are assignment of values to variables as new assertions are made, and retracting previous assignments during backtracking. Upon each assignment, consistency is enforced through propagation, which alters the values or domains of other variables. Thus, consistency enforcement is another way of making new assertions. The search process may also involve posting and retracting of constraints instead of variable/value assignments. Both constraint posting and variable/value assignments create new states that post restrictions of the domains of variables when going forward, and release these restrictions when backtracking.
This leads us to suggest that general search consists of making transformations that create new commitments, and when backtracking occurs, further transformations release these commitments [Davis 94]. Repair-based methods follow these commit-release transformations as well. Each new assignment of a value to a variable releases the previous commitment and makes a new one. More commitments are made in case of propagation.
4.3 ODO: Version 1
This section presents the problem solving model of ODO: Version 1, which provided the initial basis for this work. ODO is founded on the belief that there is an opportunity to discover some of the associations between problem structure and heuristic performance in constraint-based scheduling [Davis 94]. The approach is to view the modeling and solution of a constraint-based scheduling problem from a unified model that combines common components and isolates essential differences.
4.3.1 The Unified Model
By employing a constraint-based representation, all problems are formulated using a constraint graph. It has been demonstrated that graph property measurements can characterize heuristic search performance [Davis 94] [Fox 87]. [Fox 87] introduced textures as properties of a constraint graph that can guide heuristic decision making. This concept clearly distinguishes between the constraint graph information (texture measurement) and heuristic decision making. ODO thus exploits past research into the relationships between graph properties and heuristic performance.
Many constraint-based schedulers perform incremental search of the problem space within the bounds of a generic “template.” At each step, a modification of the problem space is chosen. In general, this modification asserts a commitment or retracts a previous one. After the modification has been asserted, the resulting changes are propagated through the problem space via constraints. The resulting state is accepted or retracted based on certain criteria. This incremental search is performed until the search termination criteria are met. This basic search loop is illustrated in the following figure.
In fact, the above loop is also a generic representation of many heuristic search approaches. Each heuristic is characterized by what type of commitments are allowed, how commitments are chosen, how much constraint propagation is performed, how to release a commitment, and what the acceptance and termination criteria are [Davis 94]. ODO defines a policy as the exact specification of how each step within this loop is performed. Texture measures guide the decisions made at these steps.
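To make the loop concrete, the following Python sketch shows one possible reading of this commit/propagate/accept cycle. All callback names (select_commitment, assert_commitment, propagate, accept, terminate) are hypothetical stand-ins for the policy steps discussed below, not ODO’s actual interface.

```
# A minimal sketch of the generic search loop, under the assumption that the
# caller supplies the policy steps as callbacks. A real implementation would
# also have to record which commitments were already tried at each state.
def search(state, select_commitment, assert_commitment, propagate, accept, terminate):
    trail = []                                 # saved states for chronological backtracking
    while not terminate(state):
        commitment = select_commitment(state)  # choose a commitment (e.g., a var/value pair)
        if commitment is None:                 # nothing left to try here: backtrack
            if not trail:
                return None                    # search space exhausted
            state = trail.pop()                # release: restore a previous state
            continue
        candidate = propagate(assert_commitment(state, commitment))
        if accept(state, candidate):           # state-acceptance criteria
            trail.append(state)                # remember the pre-commitment state
            state = candidate                  # commit: move to the new state
    return state
```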
ODO is a constraint-based scheduling system that implements the above unified model. Search is performed using the above loop, over a constraint graph representation. The problem to be solved and the solution strategy (i.e., the search heuristic) are both input via an input language, which makes ODO an “interpreter”, since heuristics are created at run time. Many well known scheduling heuristics like Min-Conflicts and MicroBOSS have been reconstructed within ODO.
4.3.2 ODO’s Problem Solving
Problem solving within ODO involves the declaration of all policy parameters followed by the initiation of ODO’s search mechanism. Extending the unified model of ODO within the context of search using variable and value selection, a complete policy specification involves:
- How to select candidate variable(s)?
- How to select/generate values for these variables?
- How to evaluate variable/value assignments, i.e., how to evaluate potential commitments?
- How to select one variable/value assignment?
- How to perform propagation?
- How to evaluate and accept/reject resulting state?
- When to terminate the search?
A library of texture measures is provided within ODO to guide the performance of each policy step. Version II follows the same general search loop as described above, so we will briefly review each of these policy steps and the associated options. Further, we will also present ODO’s reconstruction of MicroBOSS [Sadeh 91], which has also influenced the design of our heuristics.
ODO implements a 2-tiered policy hierarchy: a parent meta-policy and its children atomic-policies. Each atomic policy represents an actual problem solving policy. A meta policy is the overall mechanism for executing search on a given problem, using a set of atomic policies. The following are the procedures performed in the execution of an atomic policy. Each procedure’s input is a list of filter functions. Each filter function performs a texture measure of the constraint graph. It takes a list of variables or variable/value pairs as input, and returns a subset of that list as output. Thus, it filters down a list of variables.
---
1. See Appendix X for details on each policy step and filter functions.
4.3.2.1 Variable and Value Selection
Four procedures are sequentially executed (a sketch of this pipeline follows the list):
1. **Variable Selection**: filters an input list of variables down according to the variable selection filters.
2. **Value Generation**: accepts the above filtered list and generates a value for each variable in that list.
3. **Score Var/Val Pairs**: each var/val pair is scored on the basis of a scoring function.
4. **Select Var/Val Pair**: one var/val pair out of all scored pairs is selected using a selection filter.
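The following Python sketch illustrates how this four-step pipeline could be composed from filter functions; the names are illustrative and do not correspond to ODO’s actual filter library.

```
# Sketch of the variable/value selection pipeline. Each var_filter maps a
# list of variables to a sublist; value_gen, score and select_filter are
# hypothetical stand-ins for ODO's value generation and scoring filters.
def select_var_val(variables, var_filters, value_gen, score, select_filter):
    candidates = list(variables)
    for f in var_filters:                                   # 1. variable selection
        candidates = f(candidates)
    pairs = [(v, value_gen(v)) for v in candidates]         # 2. value generation
    scored = [(score(v, val), v, val) for v, val in pairs]  # 3. score var/val pairs
    return select_filter(scored)                            # 4. select one var/val pair
```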
4.3.2.2 Constraint Propagation
Propagation is performed once a variable and a value for it are selected. Both resource and temporal propagation are performed. The propagate procedure accepts as input the selected variable/value pair and a list of propagation filter functions. These functions are executed in sequence on the variable/value pair. A propagation function may enforce consistency on either actual or possible values for a variable. Temporal and resource consistency are performed independently, as is the common practice in scheduling. Full temporal consistency can be achieved more efficiently than resource consistency. The resource propagation functions thus achieve partial resource arc consistency.
4.3.2.3 Accept Criteria
After variable/value assertion and propagation, the resulting state is evaluated against the acceptance criteria in order to determine whether to accept the new state or to restore the previous state.
4.3.2.4 Backtrack
ODO implements only a chronological backtracking mechanism, which means that no declaration is required.
4.3.2.5 Search Termination
Prior to starting a new iteration, the current state is evaluated to determine whether the search termination criteria have been met. Once the criteria are met, search can be terminated. The termination expression allows for all logical relations and predicates, arithmetical negation, and some problem state measurement functions.
4.3.2.6 Cost Function
It is convenient at many steps during the search process to base decisions on the cost of the current state. Cost functions evaluate the current state and return an integer value. For instance, the termination criteria would often compute the cost of the current state as part of the termination decision.
4.3.3 ODO’s Implementation of MicroBOSS
ODO’s implementation of MicroBOSS is based upon the version presented in [Sadeh 91]. Given ODO’s capabilities as a scheduler interpreter, it is possible to reconstruct many heuristics within the unified problem model. MicroBOSS is of particular interest to us since it demonstrates the successful use of demand-based textures in solving job-shop scheduling problems. We have adapted and extended this concept to form the essential component of our heuristics.
MicroBOSS relies upon a constructive approach to generate a satisfying schedule\(^1\). The basic idea is to find the most constrained variable at each step and assign it the least constraining value. This means that the most critical decisions are made early on. This approach has been shown to minimize backtracking and thrashing. The search is micro-opportunistic in that it recalculates all decision making parameters at each step, thus dynamically tracking emerging opportunities in the search space.
For each unscheduled activity and the resource(s) it requires, MicroBOSS generates a demand profile, which is the probability that the activity will require a particular resource at a given time. Demand profiles associated with each resource are algebraically added to give the aggregate demand profile of that resource.
---
1. Sadeh [Sadeh 91] presents MicroBOSS as a constrained optimizer scheduler. However, the version reconstructed within ODO employs only its variable/value selection heuristics.
The resource with the maximum aggregate demand is the most-contended-for resource. The activity contributing the most to this aggregate demand is the most constraining activity, or, is most reliant on this resource. This is the activity selected for being scheduled at this step; the heuristic is called ORR (Operations Resource Reliance).
The next step is to select a value for this activity (variable), one that is least constraining. This is done by determining which value results in a partial schedule with high survivability and compatibility. Survivability is a measure of that particular schedule surviving future assignments. Compatibility gives a measure of how many schedules would be compatible with this one. This heuristic is called FSS (Filtered Survivable Schedule). Both ORR and FSS are implemented as filter functions within the library of ODO. The results of this reconstruction, while not identical to Sadeh’s MicroBOSS, are quite close and consistent with other efforts to duplicate MicroBOSS [Davis 94].
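As a rough Python sketch of the ORR step as described above: pick the resource with the maximal aggregate demand, then the activity contributing most to it. The `demands` mapping is an illustrative stand-in for the demand profiles, not MicroBOSS’s or ODO’s actual data structures.

```
# demands maps (activity, resource) pairs to that activity's demand on the
# resource (e.g., its peak individual demand); all data here is made up.
def orr_select(demands):
    agg = {}
    for (act, res), d in demands.items():     # aggregate demand per resource
        agg[res] = agg.get(res, 0.0) + d
    hot = max(agg, key=agg.get)               # most-contended-for resource
    reliant = max((a for (a, r) in demands if r == hot),
                  key=lambda a: demands[(a, hot)])  # most reliant activity
    return reliant, hot

demands = {("a1", "r1"): 0.8, ("a2", "r1"): 0.5, ("a2", "r2"): 0.3}
print(orr_select(demands))                    # ('a1', 'r1')
```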
4.4 Extensions to the Initial Model
In chapter three we presented the problem representation of ODO version 1, and our extensions to and deviations from that model. Similarly, in this section we present our extensions to and variations from the initial problem solving model. We follow ODO’s proposition of a unified problem solving model (see Figure 11, “General Search Loop,” on page 66). This version also performs search as a commitment-based transformation. However, in terms of decision making heuristics, many of our ideas are founded in demand-based textures, proposed by Sadeh and Fox [Sadeh 91] [Fox 87]. Some of the proposed extensions to Sadeh’s demand-based textures are along the lines of those proposed in KBLPS [Saks 92] [Saks 93]. In the remainder of this section, we describe our problem solving model in detail.
4.4.1 What is a Commitment?
At each problem solving iteration, a commitment is made or retracted, which transforms the current search state into another. The definition of a commitment is completely domain dependent. For example, we could assign a value to a variable, create new variables and/or constraints, and so on. In the first version of ODO, making a commitment was assigning a value to the start-time variable of an activity, or reducing its domain. However, as our problem is much more involved than a basic job-shop scheduling problem, we also have a more complicated notion of what a commitment is. There are three main, distinct types of commitments we are required to make in order to arrive at a solution. These are:
1. Decide how much demand each activity will satisfy in a disjunctive process plan. This can also be viewed as deciding which activities to execute.
2. Assign specific resources to activities, selecting from the set of alternatives available.
3. Sequence the schedule-able activities; instead of assigning start times to activities, we create new temporal constraints between pairs of activities, which has the effect of reducing their domains.
We view these three decisions as different types or “classes” of commitments we have to make in order to solve our problem. Thus, we encapsulate the decision making heuristic(s) for each of these commitments in a separate and distinct set of atomic-policies, all of which in turn are modeled as sub-policies of one meta-policy, in order to form a complete algorithm.
4.4.2 Texture Measures
In order to make “good” commitments, we need to address two points. First, we should be able to discern a “good” decision from a “poor” one. In other words, we need a mechanism to measure the “goodness” of a decision. Second, we require information on the state of the problem, vis-a-vis our objective(s), which we can then use as an input to our “goodness” measurement mechanism. Thus, we both need to know the search state and to be able to process that knowledge in order to make commitment related decisions.
We obtain this knowledge through the texture measures we define on the constraint graph representation of the problem. Different heuristics use these measures differently, in order to make “goodness” calculations, and we present these in Section 1.5.1 on page 63. The two fundamental texture measures which are employed in our work are:
- **Individual demand** of activities on resources (a measure of reservation reliance).
- **Aggregate demand** of all activities on resources (a measure of resource contention).
As we have pointed out earlier, the use of demand-based or contention-reliance based metrics has recently been shown to produce very strong performance in solving job-shop scheduling problems. We especially draw on the work of [Sadeh 91] and extend it towards solving the supply chain scheduling problem.
A *reservation* is defined as the assignment of a specific activity to a specific resource at a specific time. The reservation reliance of an activity on a resource is a function of the number of alternative reservations available for that activity, in that search state. It is obvious that an activity with a small set of possible reservations will rely heavily (pose high demand) on each of those reservations, as compared to an activity with a large set.
Let us first consider the most basic case: each activity requires a single, specific resource of unit capacity. Thus, no two activities can overlap on the same resource. Further, in a given search state, each activity has a set of possible (say, start time) reservations. In that case, assuming no biases, each of these reservations is equally likely. Thus for each possible reservation $\rho$ of an activity $A_i$, we have the following probability $\sigma_i(\rho)$ of it being selected (allocated):
$$\sigma_i(\rho) = \frac{1}{n_{A_i}}$$ \hspace{1cm} (EQ 3)
where $n_{A_i}$ is the number of possible reservations for activity $A_i$ in this state. Clearly, if $n_{A_i}$ is greater, $\sigma_i(\rho)$ is smaller, i.e., more possible reservations mean lesser reliance on any one of them. This probability is used to compute the individual demand *profile* of an activity on a resource. The demand of activity $A_i$ on resource $R_j$ at time $\tau$ is denoted by $D_i(R_j, \tau)$. This demand is computed by adding the probabilities $\sigma_i(\rho)$ of all reservations $\rho$ of activity $A_i$ that require resource $R_j$ at time $\tau$. Mathematically,
$$D_i(R_j, \tau) = \sum_{t = \tau - du_i}^{\tau} \sigma_i(t)$$ \hspace{1cm} (EQ 4)
---
1. For all formulae & discussion related to this basic model, also refer to [Sadeh 91].
where $du_i$ is the duration of activity $A_i$. The plot of demand vs. time is called the **individual demand profile** of that activity on that resource. Thus, we obtain our first texture measure, individual demand of activities on resources.
From this point, it is quite straightforward to obtain the other texture measure, the contention on resources. **Aggregate** demand on resources gives us a measure of resource contention. The aggregate demand on a resource is given by the **algebraic** sum of all the individual demand profiles related to that resource. This is expressed as:
$$D^{agg}_{R_j}(t) = \sum_i D_i(R_j, t)$$ \hspace{1cm} (EQ 5)
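As an illustration of (EQ 3) through (EQ 5), the following Python sketch computes individual and aggregate demand profiles for the basic case, assuming each start time is equally likely; all names and inputs are illustrative.

```
from collections import defaultdict

def individual_demand(start_times, duration):
    """EQ 3 + EQ 4: each start time has probability 1/n; a start at t keeps
    the resource busy over [t, t + duration)."""
    sigma = 1.0 / len(start_times)            # EQ 3
    profile = defaultdict(float)
    for t in start_times:
        for tau in range(t, t + duration):    # activity holds the resource at tau
            profile[tau] += sigma             # EQ 4
    return profile

def aggregate_demand(profiles):
    """EQ 5: algebraic sum of individual demand profiles on one resource."""
    agg = defaultdict(float)
    for p in profiles:
        for tau, d in p.items():
            agg[tau] += d
    return agg

# Example: two activities contending for the same unit-capacity resource.
a1 = individual_demand(start_times=[0, 1, 2], duration=2)
a2 = individual_demand(start_times=[1, 2], duration=3)
contention = aggregate_demand([a1, a2])
```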
Figure 12, “Demand Profiles,” on page 75 shows examples of both individual and aggregate demand profiles. There are three$^1$ extensions to the base case discussed above:
- activities have **non-unit requirements**.
- activities have a set of **alternative** resources to choose one from.
- resources have **non-unit capacity**.
Let us consider these one by one. When activities have non-unit requirements, each activity may require a different amount of a given resource. This implies that even for two activities which are identical in all respects except the amount they require, the demands should be different. Further, it is obvious that an activity which requires a greater amount poses a higher demand. However, the probabilities used in computing the demand profile are unchanged (EQ 3), (EQ 4). Thus, the following equation gives the individual demand in the case of non-unit requirements.
$$D'_i(R_j, t) = D_i(R_j, t) \times \text{amt}(A_i)$$ \hspace{1cm} (EQ 6)
where $D'_i(R_j, t)$ is the modified demand, $D_i(R_j, t)$ is the base demand computed from the initial probabilities, and $\text{amt}(A_i)$ is the amount (units) of resource $R_j$ required by activity $A_i$. This is also consistent with the approach for inventory request allocation to supply routes presented in KBLPS [Saks 92] [Saks 93].
$^1$ The situation that an activity requires multiple resources conjunctively (simultaneously) does **not** change its reliance on any **single** resource.
For the second extension, that activities have a set of alternatives to choose one resource from, we follow the same reasoning as in deriving the probabilities. It is logical to say that if an activity has a large set of alternative resources, it relies on any one of them less. Conversely, if it has a small set to choose from, it relies heavily on each alternative. Thus, the demand of an activity on a single resource is inversely proportional to the number of alternative resources available for that activity. The following equation gives us the modified demand in the case of alternative resources:
$$D'_i(R_j, t) = D_i(R_j, t) \times \frac{1}{alt(A_i)}$$ \hspace{1cm} (EQ 7)
Again, the above extension is consistent with KBLPS’s reasoning on reliance of an inventory request on a supply node. It is also important to re-emphasize that there are no preferences among alternatives. Combining the above two extensions, the individual demand of an activity on a specific resource is given by:
$$D''_i(R_j, t) = D_i(R_j, t) \times \frac{1}{alt(A_i)} \times amt(A_i)$$ \hspace{1cm} (EQ 8)
The third case is of resources having different, non-unit capacities. It is evident that all other factors being equal, a resource having a smaller capacity will be more heavily contended for, since it has less to give. That is, an increase in capacity of a resource lowers the contention on it. Therefore, our measure of contention, aggregate demand on a resource, should also be inversely proportional to resource capacity. The modification is given by:
$$D^{agg'}_{R_j}(t) = D^{agg}_{R_j}(t) \times \frac{1}{capacity(R_j)}$$ \hspace{1cm} (EQ 9)
This, in fact, normalizes the aggregate demand profile on each resource. We also use another ‘normalization’ of the aggregate demand curve, one which compares real demand with real capacity. We use this criterion specifically to terminate search, as it gives an accurate measure of “real” contention on the resource. The normalization function is:
where RD represents “real demand” (as opposed to probabilistic demand). This measure ensures that only those points in time where the resource is actually violated are reflected in the real-demand curve, so it can effectively be used as a termination criterion for the policy whose purpose is to allocate all activities to various resources such that all resources are feasible (Section 4.5.6 on page 95). Resources are considered feasible if their capacity at all points in time equals or exceeds the demand for them at that time. The profile of actual demand (as opposed to probabilistic demand) over time gives us that information. Thus, real demand curves on resources (EQ 10) are used to terminate that policy.
Finally, we should also consider the case of disjunctive process plans; specifically, the percentage-demand associated with activities and how it may affect demand-profile calculations. The percentage-demand parameter represents the demand for the end-product being satisfied through this activity, as a percentage of the demand being satisfied by the parent process-plan. It is quite rational to state that activities with high percentage-demands are more critical within a process-plan, or, require resources more urgently. In other words, reliance is proportional to percentage-demand. This modifies our calculation as:
$$D'_i(R_j, t) = D_i(R_j, t) \times Pdemand(A_i)$$ \hspace{1cm} (EQ 11)
Thus, the final overall individual demand is:
$$D^*_i(R_j, t) = D_i(R_j, t) \times \frac{1}{alt(A_i)} \times amt(A_i) \times Pdemand(A_i)$$ \hspace{1cm} (EQ 12)
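A small Python sketch of (EQ 9) and (EQ 12): the base profile from the earlier sketch is scaled by the required amount, the number of alternative resources, and the percentage-demand, and the aggregate curve is normalized by capacity. The helper names are assumptions for illustration.

```
def overall_demand(base_profile, amt, n_alternatives, pdemand):
    """EQ 12: D*_i(R_j, t) = D_i(R_j, t) * amt(A_i) / alt(A_i) * Pdemand(A_i)."""
    scale = amt * pdemand / n_alternatives
    return {t: d * scale for t, d in base_profile.items()}

def normalized_aggregate(agg_profile, capacity):
    """EQ 9: divide aggregate demand by the (constant) resource capacity."""
    return {t: d / capacity for t, d in agg_profile.items()}
```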
4.4.2.1 Texture Measure on OR Constraints
Revisiting our texture measures, there are two fundamental textures. Reliance of activities on resources is the first one. An aggregation of this gives us contention on resources, which is the second texture being used. This section presents another texture measure, at a yet higher level of aggregation, defined on OR constraints. The context for this is the framework of a disjunctive process plan, described in chapter three in detail. Each such process plan is a network of activities, and one of the decisions to be made is to assign a percentage-demand to each activity. In effect, this is equivalent to assigning a percentage demand to each linear branch at all branching nodes in the process plan. For an illustration, refer to the figure below.
Figure 13. Branching Node in a Process-plan
It is evident from the figure that assigning a percentage to activity $a_3$ has the effect of assigning the same percentage to $a_2$ and $a_4$, since they belong in the same linear path. In fact, this is a flow/conservation constraint, explained in Chapter 3 and in Section 4.4.4.1. In other words, at each OR constraint, we are not assigning percentages to activities, but to entire paths. Thus, this decision cannot be made on the basis of information on a single activity alone. We have to consider all the branches/paths involved in the decision.
In order to do that, we introduce the concept of criticality: activity criticality and path criticality. The criticality of an activity reflects its overall importance in the schedule. It combines the demand effects of all resources with which an activity interacts. Consider activity $a_2$ to have more than one resource request variable ($rrv$). Further, consider each $rrv$ to have a non-unit resource domain. Then, the criticality of an activity, $Cr(A_i)$ is given by:
$$Cr(A_i) = \sum_{k} D^{agg}_{rrv_k}$$ \hspace{1cm} (EQ 13)
where $D^{agg}_{rrv_k}$ is the aggregated demand for a resource request variable, given by:
$$D^{agg}_{rrv_k} = \sum_{j} D^{agg}_{R_j}(t)$$ \hspace{1cm} (EQ 14)
where $D^{agg}_{R_j}$ is the aggregated demand (contention) on a resource, given by (EQ 5). The criticality of an activity will give a true picture of the importance of that activity over the entire schedule. However, even this computation does not give us the entire picture for each path. To obtain that, we need to extend this reasoning further.
We propose that the criticality of a linear path is equal to the criticality of the *most* critical activity on that path. An alternative reasoning here could be to use the sum of all activity criticalities on a path to obtain path criticality. However, we believe that stronger results will be observed by relieving the tightest clique of constraints, i.e., most critical activity. Thus when faced with an OR constraint (percentage-demand) decision, we need to traverse each linear branch and isolate the most critical activity on each path, thus giving us path criticalities. This is a new texture measure proposed in this work. Its use is made in variable selection in our Policy 1, explained shortly.
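A Python sketch of (EQ 13), (EQ 14), and the proposed path criticality; the aggregate demand curves and rrv domains are illustrative inputs.

```
def rrv_demand(agg_curves, rrv_domain):
    """EQ 14: sum the aggregate demand of every resource in the rrv's domain."""
    return sum(sum(agg_curves[r].values()) for r in rrv_domain)

def activity_criticality(agg_curves, rrv_domains):
    """EQ 13: sum over all resource request variables of the activity."""
    return sum(rrv_demand(agg_curves, dom) for dom in rrv_domains)

def path_criticality(path_activities, agg_curves, rrvs_of):
    """Path criticality = criticality of the most critical activity on the path."""
    return max(activity_criticality(agg_curves, rrvs_of(a)) for a in path_activities)
```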
4.4.3 Inventory Reasoning
ODO version 1 had no explicit representation of or reasoning about inventory. We have extended this version to address that shortcoming. The representational aspects have been already presented in chapter three. In this section, we will discuss implementational or problem solving details. In any given process plan, there may be a number of activities that consume or produce inventory. In fact, activities are often linked to each other through inventory; the product of one activity is the raw-material of another. Apart from this, there are two more issues of concern for inventory requests within our model.
- amount/quantity/number of units required
- quality of inventory; specifically, code-age requirements.
Thus each inventory request requires a specific product, a specific quantity, and a specific quality. Linking each inventory request to the correct product is an inherent part of our process plan design. We have divided the remaining functionality between two policies. The first ‘prunes’ out the inventory (object-sets) that do not meet the age criteria. The second allocates the remaining inventory to the requests according to quantity requirements.
4.4.3.1 Inventory Availability Profile
Referring back to (EQ 9), the aggregate demand curve is normalized using resource capacities. For non-inventory resources, capacity is considered constant over the duration of the schedule. Evidently, the same does not hold for inventory, which is periodically consumed and produced. Therefore, we first have to compute the inventory capacity, or availability, curve. This is complicated by the fact that activities are not yet scheduled in time. We only have a window for each activity; thus, the measures used are probabilistic rather than deterministic. This implies that the approach to be used is similar in concept to that of computing demand profiles.
We look at the inventory in an aggregate fashion, and obtain the consumption/production data for products. This is used to build the probabilistic availability profile for each product. To construct this curve for a product we use: the base individual-demand curves (reliance) of activities, the amount of the product required by each activity, the shelf-life of products, and the time windows on activities. Consider the following:
- Activity A1 consumes X of product P, with required shelf life \( t \); activity A2 produces Y of P. P has a shelf life of \( T \) time units.
- Also consider starting inventory \( I' \) for product P. The shelf-life remaining of this is \( T' \).
We build the individual demand curves on activities as discussed earlier. These curves are utilized in the following manner in order to compute the availability curve of product P:
1. Multiply each probabilistic demand value by the amount required by an activity. This gives us the probabilistic amount required.
2. For the activity which produces P, extend the curve rightward by time \( T \) (shelf-life of produced inventory). This means that this inventory is available (if no other activity consumes it) until at most latest-end + \( T \), after which it spoils.
3. For the consume activity, invert the curve to reflect consumption. Also, extend the curve rightward, to reflect the fact that this amount is not returned after the activity has executed.
4. For starting inventory, start the curve with the Y-axis value at \( I' \), which goes to zero at time \( 0+T' \).
5. Add (algebraically) the modified curves for A1 and A2 to the starting inventory curve, in order to get the availability curve of the product P.
The aggregate demand curves on inventory are computed as for other resources. Once we have both the availability curve and the demand curve, we can obtain the normalized demand curve. Each point of the aggregate demand curve is divided by the corresponding point of the availability curve for this normalization.
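The five steps above can be sketched in Python as follows, reusing the probabilistic demand-profile representation from the earlier sketch; all parameter names are assumptions for illustration.

```
def availability_curve(consume_profile, consume_amt,
                       produce_profile, produce_amt, shelf_life,
                       start_inventory, start_shelf_life, horizon):
    avail = {t: 0.0 for t in range(horizon)}
    for t in range(min(start_shelf_life, horizon)):
        avail[t] += start_inventory            # step 4: starting inventory until it spoils
    for t, p in produce_profile.items():       # steps 1-2: produced amount, extended
        for tau in range(t, min(t + shelf_life, horizon)):  # rightward by the shelf life
            avail[tau] += p * produce_amt
    consumed = 0.0
    for t in range(horizon):                   # steps 1+3: consumption, inverted and
        consumed += consume_profile.get(t, 0.0) * consume_amt  # extended rightward
        avail[t] -= consumed
    return avail                               # step 5: algebraic sum of all curves
```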
4.4.3.2 Related Constraints
This section discusses two sets of constraints which are specific to managing inventory. Each of these is encapsulated in a separate policy. The first of these is the code-age constraint, which constrains the quality of products for different orders. Code-age period is the remaining time/life of a product before it spoils. A code-age constraint is a property of an order, and specifies the minimum code-age of allocated products. Policy 2 performs code-age reasoning for all activities which require inventory towards an order. All inventory which does not meet the criteria is removed from the domain. The constraint is implemented as illustrated below, using activity time window and product shelf-life.
Looking at the inventory availability profile, we see time points T1 and T2. T2 represents the time-point at which that product spoils. T1 is a function of an order, and represents T2 minus the required code-age. Thus, with respect to that order, the inventory effectively spoils at time T1. It should be noted that at this point, activities do not have a fixed starting time. Thus, in order for inventory to be acceptable, T1 should always be at least as late on the timeline as the activity’s latest start time (lst). In that case, whatever time the activity starts, the inventory is feasible. Thus in the above illustration, the inventory represented is not acceptable.
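A minimal Python sketch of this feasibility test, with the spoil time T2, the required code-age, and the activity’s latest start time as illustrative inputs.

```
def code_age_feasible(t2_spoil_time, required_code_age, latest_start_time):
    """Inventory is acceptable only if its effective spoil time T1 is no
    earlier than the activity's latest start time (lst)."""
    t1 = t2_spoil_time - required_code_age
    return t1 >= latest_start_time
```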
The second set of constraints drives Policy 3, by relating the total quantity of a product in the ‘available’ domain of an activity to the quantity of that product required by that activity. This is more involved than for non-inventory resources. Unlike a resource requirement, which is satisfied by exactly one resource, an inventory request is often satisfied by a combination. The constraint simply states that the total amount of all inventory (over all object-sets in the domain) an activity has access to is greater than or equal to the amount required by the activity. The constraint can be presented as:
$$Q(a_i) \leq \sum_{j=1}^{n-1} Q(R_j)$$ \hspace{1cm} (EQ 15)
where n-1 is the remaining number of object-sets, after removing the contended one.
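A one-line Python sketch of (EQ 15), assuming the quantities of the remaining object-sets in the activity’s domain are given as a list.

```
def quantity_feasible(required_qty, object_set_qtys):
    """EQ 15: the object-sets in the domain must jointly cover the request."""
    return required_qty <= sum(object_set_qtys)
```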
4.4.4 Disjunctive Process-plans: Further Reasoning
We have already explained much of the reasoning behind our treatment of network-like process plans. Specifically, the heuristic reasoning for variable selection has been explained. This section discusses the next step: value selection and assertion. Variable selection in this case gives us an activity with an unassigned status value. Thus value selection involves assigning a value (percentage) to that variable.
However, since this process is guided by three sets of constraints and the design of the status variable, we discuss these first. The status of an activity is integral, and can assume values from that activity’s min-status to its max-status. The following relations exist among these values:
$$0 \leq \text{minstatus} \leq \text{status} \leq \text{maxstatus} \leq 100$$ \hspace{1cm} (EQ 16)
Two sets of constraints are flow/conservation constraints, which conserve the flow of material through a network of activities. The first is defined over all linear paths in a process-plan, and is called the equality constraint. This constraint states that “the status values for all activities on the same linear path in a process plan are equal.” With reference to Figure 13, “Branching Node in a Process-plan,” on page 77, this constraint can be represented as:
$$\text{status}(a_2) = \text{status}(a_3) = \text{status}(a_4)$$ \hspace{1cm} (EQ 17)
The second set is defined over the branching node, and defines the relationship between the status of a parent path and that of its children branches. This is the sum-equals constraint. The constraint states that “the sum of the status values of all the branching activities is equal to the status value of the branching-node predecessor activity.” Again, with reference to the same figure as above, the mathematical form would be:
$$\text{status}(a_2) + \text{status}(a_5) + \text{status}(a_7) = \text{status}(a_1)$$ \hspace{1cm} (EQ 18)
The use of the above variables and constraints conserves the flow of materials through the supply chain, and also keeps the network consistent with respect to demand. In other words, these concepts are used both in value selection as well as value assertion.
The core of value selection lies in the sum-equals constraint, (EQ 18). At this point, we already have a variable, which is one of the activities on one of the three branches. For the sake of clarity, let us assume it is one of $a_2$, $a_5$ and $a_7$. We have to assign a value to the status variable of that activity, a process which will also change the domains of the status variables of the other two activities. Let us further assume that $a_2$ is the most critical activity (the variable selected) and denote $\text{status}(a_i)$ by $s_i$. (EQ 18) can be rewritten and transformed as:
---
1. The form of this constraint is similar to Kirchoff’s Law of electricity.
$$s_2 + s_5 + s_7 = s_1$$ \hspace{1cm} (EQ 19)
which is a simple linear equality. Referring back to the structure of the status variable set, we can introduce additional constraints through the bounds/domains \((\min\text{-status}, \max\text{-status})\) of the status variables. Thus for each \(s_i\), we get the following constraint:
$$\min(s_i) \leq s_i \leq \max(s_i)$$ \hspace{1cm} (EQ 20)
The objective of this heuristic decision is to reduce the overall contention in the network. Activities with high criticalities represent tight constraint cliques. Thus our aim is to minimize the overall criticality level at a branching node. This gives us an objective function to minimize:
$$\min \sum_{i=1}^{n} Cr(s_i)$$ \hspace{1cm} (EQ 21)
where \(Cr\) is the criticality of activity \(a_i\). Thus, value selection is now reduced to a linear program given by (EQ 19) through (EQ 21). Solving this LP gives us the status value to be assigned to the selected activity. The complete procedure is presented in Appendix X.
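A minimal sketch of this LP using scipy, reading (EQ 21) as a criticality-weighted sum of the status values, which is one plausible interpretation; the criticalities, bounds, and branch (s2, s5, s7 from Figure 13) are made-up example data.

```
from scipy.optimize import linprog

def select_status(criticalities, parent_status, bounds):
    """Minimize sum_i Cr_i * s_i subject to s_2 + s_5 + s_7 = s_1 (EQ 19)
    and the min/max-status bounds (EQ 20)."""
    res = linprog(c=criticalities,
                  A_eq=[[1.0] * len(criticalities)],
                  b_eq=[parent_status],
                  bounds=bounds)
    return res.x                       # status values for the branch activities

statuses = select_status(criticalities=[5.0, 2.0, 1.0],  # Cr for a2, a5, a7
                         parent_status=1.0,              # s_1
                         bounds=[(0.0, 1.0)] * 3)
```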
Value **assertion** involves changes to the selected activity, and this is executed using the third set of constraints, the *production-rate* constraint. This constraint is also explained in Section 3.2.3.1 on page 38. It relates the status value of an activity to its variable duration. Thus as the status value changes, it is reflected in the changed duration of the corresponding activity. An increase in status (or, percentage-demand) results in a corresponding increase in the duration of that activity, through this constraint, in order to satisfy the increased demand. This concept is similar to that of continuous activities.
$$P_{rate}(a_i) \times duration(a_i) = D(a_i) \times D(a_j)$$ \hspace{1cm} (EQ 22)
\(D(a_i)\) is the percentage (demand) of the selected activity (say \(a_2\)) while \(D(a_j)\) is the total demand required through its parent branching node (\(a_1\) in this case).
Value **propagation** is performed after assertion, and the purpose of this process is to make the network consistent after the assignment of status value to one variable. There are two distinct procedures of interest here.
- re-balancing the disjunctive/conjunctive branching node (sum-equals constraint)
- consistency along a linear path (equality constraint)
Both of these are performed recursively from the decision point in the process plan to its “leaf” activities. Once an activity is assigned, the ‘balance’ of status is distributed as before among the remaining activities. Referring to the above example (EQ 19), say we assign status = 0.2 to activity $a_2$. The remaining activities’ status is:
$$s_5 + s_7 = s_1 - s_2 = s_1 - 0.2$$ \hspace{1cm} (EQ 23)
This is distributed between $a_5$ and $a_7$ as before, typically equally or equiprobably. Thus after each value assertion, the corresponding node is balanced.
After this procedure, we re-enforce consistency along each of the three linear paths; this is governed by the equality constraint, (EQ 17). This means that:
$$s_4 = s_3 = s_2 = 0.2$$ \hspace{1cm} (EQ 24)
and likewise for all other linear paths. Recursiveness comes into play when any of these linear paths branches out when moving downward (opposite to the flow of material). Then we again perform both of these procedures, and so on.
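The two propagation procedures can be sketched in Python as follows, using the running example; the equal split among the unassigned branches mirrors the “typically equally” distribution mentioned above.

```
def rebalance(parent_status, assigned, unassigned_branches):
    """Sum-equals constraint (EQ 23): distribute the remaining status
    equally among the branches that are still unassigned."""
    remainder = parent_status - assigned          # e.g. s_1 - s_2
    share = remainder / len(unassigned_branches)
    return {branch: share for branch in unassigned_branches}

def propagate_path(path_activities, status):
    """Equality constraint (EQ 24): every activity on a linear path
    shares the same status value."""
    return {a: status for a in path_activities}

print(rebalance(1.0, 0.2, ["a5", "a7"]))        # {'a5': 0.4, 'a7': 0.4}
print(propagate_path(["a2", "a3", "a4"], 0.2))  # {'a2': 0.2, 'a3': 0.2, 'a4': 0.2}
```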
At this point, we have provided adequate explanations behind most of our complex reasoning. The next section presents our actual problem solving policies in which the above reasoning is embedded.
4.5 Problem Solving Policies
In order to have an efficient algorithm which minimizes backtracking and finds good solutions, it is important to design superior variable/value ‘ordering’ heuristics. [Sadeh 91] has reported excellent results with variable/value ordering heuristics based on contention/reliance measures. In this section we present our variable/value ordering heuristics, which are built using our contention/reliance based texture measures, presented in the previous section. The overall goal of our heuristics is to reduce contention on resources, i.e., reduce the aggregate demand on resources. Each type and instance of commitment we make works towards reducing contention on one or more resources. Since our problem is a satisfaction problem, it is our assertion that reducing contention across the board will lead us to one or more satisfying solutions. Reducing demand is also equivalent to repairing capacity violations in the solution. Thus lowering resource contention in the problem moves us towards a satisfactory solution.
In terms of the overall solution process, five policies are executed sequentially. The algorithm works in the following manner:
1. The complete problem and the solving policies (heuristics) are input to the scheduler.
2. Pre-processing occurs before any heuristic can be executed. This includes establishing initial arc consistency, resource consistency, and the initial activity network.
3. The first policy to be executed assigns a percentage-demand to each activity. Thus, we start with a “floating” activity network, and this heuristic “fixes” this network. After this step, we have a fixed/firm network, with weights assigned to each activity which specify what percentage of the demand (through a process plan) the activity satisfies.
4. The second policy performs inventory level reasoning on the basis of code-age requirements. Activities requiring inventory are selected at random and for each, all inventory (object-sets) which do not satisfy that activity’s code-age requirements are pruned from its domain.
5. The third policy also performs inventory level reasoning, from the perspective of quantity. For all activities, one or more object-sets are allocated to each such that each activity has the required amount, and each object-set is feasible with respect to capacity.
6. The next policy works on another level of disjunctiveness, that of each activity to be executed having a set of alternative (non-inventory) resources to pick one from. This set of heuristics prunes the set of resources for all activities down to one value only. In other words, at the end of this policy, all activities have one specific resource assigned to them.
7. The last policy sequences all the activities (with best possible resource assignments) in time. As mentioned in Section 1.1 on page 1, our algorithm does not assign fixed start times to activities. Instead, we partially sequence the activities such that there are no capacity violations. Thus, at the end of this policy, if the problem is solved, we have a feasible window for all activities. In other words, we arrive at a family of solutions. If the problem is not solved, the policy gives us the least-cost solution it can find.
Further, each of these policies may itself be composed of more than one sub-policy.
Prior to presenting the policies, we introduce ODO's policy declaration mechanism. This enables us to give a complete PODL declaration of each policy as we explain it.
### 4.5.1 ODO Policy Declaration
The Policy is the actual embodiment of the heuristic-based algorithm that operates on the constraint-based representation of a scheduling problem. As such, it is based on the selection and retraction of commitments. Policies also provide the main control functionality in the problem solving process.
A real embodiment of a particular scheduling heuristic is defined as an atomic-policy, in contrast to a meta-policy, which is the control structure of the Policy hierarchy. It is quite possible to employ more than one heuristic technique on one problem. Each of these techniques is encapsulated in one atomic-policy, which has its own termination criteria. A meta-policy keeps executing its children (in order) until its own termination criteria are met.
### 4.5.1.1 MetaPolicy
The MetaPolicy is the control structure of the Policy hierarchy. It keeps executing its children (in order) until its own termination criteria are met. In our algorithm, a meta-policy terminates once it has executed all its atomic-policies exactly once. To declare a MetaPolicy, a meta-policy `<command>` is used. A sub-BNF for the meta-policy `<command>` is as follows:
```
(meta-policy
:name <word>
:state-acceptance-criteria <word>
:state-cost-function <word>
:termination-criteria <word>
:sub-policies (<word> <word> ..)
)
```
- **:name <word>**
Associates a unique identifier with the MetaPolicy.
- **:state-acceptance-criteria <word>**
A condition to determine whether the evaluated state is accepted (thus going forward) or rejected (thus backtracking). Current acceptable criteria are:
- **always**: always accept the new state.
- **cost-leq**: only accept the state if its `cost` is less than or equal to that of the previous state.
- **no-empty-pvs**: only accept the state if the (start-time) possible-value set for all activities is non-empty.
- **:state-cost-function <word>**
A function to evaluate the resulting state after propagation. Current acceptable functions are:
- **num-realDemand-contended-resources-early**: the number of resources which are contended for in the current state, on the basis of real demand and earliest start time assignment of requesting activities.
- **num-non-unit-rrvs**: number of resource requests which have non-unit domains.
- **:termination-criteria <boolean-expression>**
A condition to determine whether to terminate search using this policy. The expression is built from the tokens **cost**, **<integer>**, **search-time**, **num-backtracks**, **iterations**, **search-exhausted** and the operators **<, >, >=, <=, ==, !=, ||, &&**; for example, `cost==0`.
- **:sub-policies (<word> <word> ..)**
Specifies the child atomic-policies associated with this meta-policy. Each <word> is the name of a unique atomic-policy.
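To make this control structure concrete, here is a minimal sketch of the meta-policy loop in Java. The names (`Policy`, `SchedulingState`) are hypothetical, and the sketch illustrates the behaviour described above rather than ODO's actual implementation, which is declared through PODL:

```java
import java.util.List;

// Hypothetical interfaces; ODO declares policies through PODL, not this API.
interface Policy {
    // Runs until the policy's own termination criteria are met.
    void execute(SchedulingState state);
}

class MetaPolicy implements Policy {
    private final List<Policy> subPolicies;

    MetaPolicy(List<Policy> subPolicies) {
        this.subPolicies = subPolicies;
    }

    @Override
    public void execute(SchedulingState state) {
        // In our algorithm the meta-policy terminates after executing
        // each of its atomic-policies exactly once, in order.
        for (Policy p : subPolicies) {
            p.execute(state);
        }
    }
}

class SchedulingState { /* variables, domains, commitments, ... */ }
```

Each child runs to its own termination criteria before control returns to the meta-policy.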
### 4.5.1.2 Atomic-Policy
The Atomic-Policy is the embodiment of a particular scheduling heuristic. As such, it can make only one type of commitment. It has a number of doubly-linked lists of functions used to make, retract, and propagate commitments. To declare an Atomic-Policy, an atomic-policy `<command>` is used. A complete sub-BNF for the atomic-policy `<command>` follows. Some components which are not explained in detail are not part of our algorithm, but belong to the generic policy declaration mechanism of ODO. Examples are not provided here; they are given with the actual policy descriptions.
```
(atomic-policy :name <word>
:commitment-type <word>
:forward-commitments
(filters :generate <list>
:select <list>
:score <word>
:select-scored <list>)
:backward-commitments
(filters :generate <list>
:select <list>
:score <word>
:select-scored <list>)
:propagation-methods <list>
:backtrack-method <word>
:state-acceptance-criteria <word>
:state-cost-function <word>
:termination-criteria <word>
)
```
- **:name <word>**
Associates a unique identifier with the AtomicPolicy.
- **:commitment-type <word>**
Specifies the type of commitment that is being executed.
- **:forward-commitments**
  - **:generate <list>**
  Based on the list of filters, generates one commitment to be made at this step.
  - **:select <list>**
  Defines functions to select a value to commit for the commitment instance which was generated at the above generate step.
- **:backward-commitments**
  - **:generate <list>**
  Defines the functions to generate/select a commitment to retract at the backtrack step.
- **:propagation-methods <list>**
Defines the propagation method(s) which guide the propagation after each commitment is made.
- **:backtrack-method <word>**
A release procedure to follow if the resulting state is rejected by the evaluation.
- **:termination-criteria <boolean-expression>**
A condition to determine whether to terminate search using this policy, built from the same tokens and operators as for the meta-policy: **cost**, **<integer>**, **search-time**, **num-backtracks**, **iterations**, **search-exhausted**, **<, >, >=, <=, ==, !=, ||, &&**.
### 4.5.2 Pre-Processing Details

In this section, we present some pre-processing details, i.e., the processing of a problem before actual scheduling starts. Full arc consistency is performed to ensure that the problem is temporally feasible. Similarly, a resource completeness check is performed to see if the problem is resource consistent; that is, an activity may not request a resource which has not been declared. Such a case results in a pre-processing error and termination of problem solving.

Processing of orders and process plans is also fairly complex. Each order instantiates one or more "copies" of the corresponding process plan, based on its required amount. The system computes this and creates the activity network corresponding to each order. Further, as mentioned earlier, the durations of these activities are not fixed; each activity has a window, the bounds of which correspond to the minimum and the maximum demand it can satisfy. Based on the pre-defined and undefined demand values for each activity, the pre-processing routines set the corresponding durations or windows. The temporal bounds on each activity sub-network are also computed from the due-date of the order it represents.
### 4.5.3 Policy 1: Disjunctive/Conjunctive Process Plans
We start the scheduling process with a set of disjunctive/conjunctive process plans, i.e., our original activity network has a number of and, or, and and/or relations between activities. This branching arises from the fact that in many process plans, there is more than one sequence to obtain the final product. Each sequence usually is composed of different activities. Now, it becomes a scheduling decision to select one or more sequences in a given process plan. Further, in case more than one path is selected, another scheduling decision is to allocate a percentage of the demand to each of those paths, such that the end result is the order quantity of the process plan.
Thus, the scheduling decision at each step becomes assigning a demand to an activity, as a percentage of the total demand being met through its parent process plan. A related feature of this approach is that the user can control this process: a user can specify these percentages for some or all activities, in which case the algorithm will not alter them. For all activities where the user does not assign percentages, the algorithm does so, with the goal of reducing overall contention on the resources in the system.
The next question is: what actually happens when these percentages are altered? Value assertion and propagation have been explained in detail in Section 4.4.4. With each such decision, the activity network is altered and has to be made consistent again. Demand consistency is ensured by using the equality and sum-equals constraints. However, the activities themselves are also altered when their percentage demands change.
The major question here is how to reflect the change of demand for a given, single activity. How can one activity represent production of anywhere from zero to 100% of demand? We model this change using variable durations and a production rate on activities. The representational concepts are explained in Section 3.2.3 on page 38, whereas the nature of the guiding constraint is presented in Section 4.4.4. This constraint links the percentage demand, production rate, and duration of each activity. Thus, when the percentage demand is increased, the duration of the corresponding activity also increases proportionately, and the "longer" activity can then handle the increased demand placed on it. This model also closely reflects the notion of continuous activities.
Thus, if we decrease the demand on one activity (in order to reduce contention on the resource it requires), we invariably have to increase the demand on a parallel activity, and hence its duration. This is a trade-off which has to be considered, and one which requires further research. The heuristics of this commitment type assign a percentage demand to each activity not fixed by the user, such that overall contention on resources is reduced as much as possible.
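As a rough illustration of the guiding constraint, the sketch below assumes a linear relationship between percentage demand, production rate, and duration; the class and field names are hypothetical:

```java
// Minimal sketch: an activity's duration grows in proportion to the share
// of the total demand it must satisfy. The linear form is an assumption
// made for illustration.
class ActivityDemandModel {
    double totalDemand;      // units required through the parent process plan
    double productionRate;   // units produced per unit of time

    /** Duration needed to produce the given percentage of total demand. */
    double durationFor(double percentDemand) {
        double quantity = (percentDemand / 100.0) * totalDemand;
        return quantity / productionRate;
    }
}
```

For example, with a total demand of 100 units and a rate of 5 units per hour, raising an activity's share from 20% to 50% lengthens it from 4 hours to 10 hours.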
### 4.5.3.1 Variable Ordering
The goal of this variable ordering heuristic is to select the most critical activity at this iteration. The following steps are executed:
- Compute the individual demand curves of activities, and then aggregate demand on resources.
- For each activity whose status is not fixed yet, compute criticality using (EQ 13), (EQ 14), and the aggregate demand on resources.
- Select the activity with highest criticality; break ties arbitrarily.
### 4.5.3.2 Value Ordering
This heuristic accepts the most critical activity and assigns a value to its status variable after solving a pseudo-LP. The steps involved are:
- Consider the local network where the selected activity is situated. Specifically, consider the linear path where the selected activity is, the parallel linear paths, and their junction activity. Refer to Figure 13, “Branching Node in a Process-plan,” on page 77.
- Traverse each linear path (except the one of the selected activity) in this local network and find the most critical activity on each.
- Using these activities, construct an LP as explained in Section 4.4.4 on page 81. Solve the LP to obtain the value for the status variable of the selected activity.
- Assign this value to the variable, and using the production-rate constraint, alter the duration of the activity to reflect this.
### 4.5.3.3 Assertion/Propagation
The network is made consistent with respect to demand recursively using the equality and the sum-equals sets of constraints. This is explained in detail in Section 4.4.4 on page 81. This also alters the duration of the activity involved in each iteration. This means that internal temporal propagation has to be performed in order to make the internal (start, duration, end) variables consistent again. There is no external propagation since the duration window is not affected.
### 4.5.3.4 Termination Criteria
The goal of this heuristic is to transform a “probable” network into a fixed network, i.e., to assign a fixed percentage demand to all activities. Thus this policy terminates when all initially unassigned activities have been assigned a percentage demand between zero and 100.
### 4.5.3.5 Policy Specification

```
(atomic-policy :name fix-process-plan
  :commitment-type assign-percentage-demand
  :forward-commitments
    (filters :generate most-critical-OR
             :select fix-least-critical-branch)
  :propagation-methods percentage-demand-propagation
  :state-acceptance-criteria always
  :state-cost-function any-status-change
  :termination-criteria cost==0)
```
### 4.5.4 Policy 2: Code-Age Reasoning

The code-age period is the remaining time/life of a product before it spoils. A code-age constraint is a property of an order, and specifies the minimum code-age of allocated products. This policy performs code-age reasoning for all activities which require inventory towards an order: all inventory which does not meet the criteria is removed from the domain. The constraint is implemented as illustrated in Section 4.4.3 on page 78.
### 4.5.4.1 Variable Ordering

This heuristic is quite simple: it randomly selects a previously unselected activity. Since all activities are to be made code-age compatible independently, ordering is trivial.
### 4.5.4.2 Value Ordering
The goal of this heuristic is to prune the domain (corresponding to an inventory request) by removing all resource/object sets which do not satisfy the code-age constraint. The reasoning has been explained in Section 4.4.3 on page 78. The steps are:
- Consider the individual demand curve of the selected activity.
- For each set in the domain, compute the code-age expiry time. If this time is less than the latest-end time of the activity, remove this set from the domain.
- Stop when all sets in the domain have been reviewed.
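The test itself reduces to a simple filter over the domain; a sketch with hypothetical names, not ODO's actual code:

```java
import java.util.List;

// Sketch of the code-age pruning step of Policy 2; names are illustrative.
interface ObjectSet {
    double codeAgeExpiry();  // time at which this inventory spoils
}

class CodeAgePruner {
    /** Remove every object-set whose code-age expiry precedes the
     *  activity's latest-end time. */
    static void prune(List<ObjectSet> domain, double latestEnd) {
        domain.removeIf(set -> set.codeAgeExpiry() < latestEnd);
    }
}
```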
### 4.5.4.3 Termination
There is no propagation in this policy. We terminate when all activities (and all inventory requests) have been considered.
### 4.5.4.4 Policy Specification

```
(atomic-policy :name code-age-reasoning
  :commitment-type remove-spoilable-sets
  :forward-commitments
    (filters :generate unfiltered-activity
             :select remove-spoilable-sets)
  :propagation-methods none
  :state-acceptance-criteria always
  :state-cost-function number-of-activities-with-spoilable-sets
  :termination-criteria cost==0)
```
### 4.5.5 Policy 3: Inventory Allocation

After code-age requirements have been met, specific inventory (i.e., specific object sets) still has to be allocated to activities' inventory requests. This task is more involved than its counterpart for non-inventory resources in Policy 4: unlike assigning a single machine to an activity out of a pool of, say, five, this policy typically has to satisfy each inventory request through a combination of multiple sets.
### 4.5.5.1 Variable Ordering

The goal of this heuristic is to select the activity which relies most heavily on the most-contended-for inventory (object-set). The steps are:
- Compute the individual demand profiles for all activities; then compute aggregate demand curves for all object-sets.
- Select the most-contended-for set.
- Select all activities which demand this set, and sort them in the order of reliance, with the most reliant activity being the first.
### 4.5.5.2 Value Ordering
This heuristic takes a least commitment approach and removes the most-contended-for set from the domain of the most reliant activity only, subject to (EQ 15). The steps are:
- Accept the sorted list of activities and pick the topmost one.
- Remove the most-contended-for set from that activity’s domain, subject to the constraint that the remaining quantity in the domain is greater than or equal to that required.
- If the constraint is violated, leave the first activity and pick the next one. Check for constraint (EQ 15) again.
- Repeat these steps until one activity’s domain is pruned.
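A sketch of this least-commitment step, assuming each object-set exposes its quantity and each activity its requirement (all names are hypothetical):

```java
import java.util.List;
import java.util.Set;

// Sketch of Policy 3's pruning step; names are illustrative.
interface ObjectSet {
    double quantity();                 // units this set can supply
}

interface Activity {
    Set<ObjectSet> domain();           // object-sets still allocatable
    double domainQuantity();           // total quantity across the domain
    double requiredQuantity();         // quantity the activity needs
}

class InventoryAllocator {
    /** Walk activities from most to least reliant; prune the contended set
     *  from the first activity whose remaining domain still covers its
     *  requirement, i.e., for which the (EQ 15) check still holds. */
    static boolean pruneOne(List<Activity> byReliance, ObjectSet contended) {
        for (Activity a : byReliance) {
            if (!a.domain().contains(contended)) continue;
            if (a.domainQuantity() - contended.quantity() >= a.requiredQuantity()) {
                a.domain().remove(contended);
                return true;
            }
        }
        return false;   // no activity could give the set up
    }
}
```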
### 4.5.5.3 Termination

Again, no propagation is required. This policy terminates when, for all activities, the quantity of inventory in the domain is (approximately) equal to that required. Termination may also occur earlier, if all object-sets are feasible with respect to capacity; i.e., no set is violated in terms of capacity.
### 4.5.5.4 Policy Specification

```
(atomic-policy :name allocate-inventory
  :commitment-type filter-resource-set
  :forward-commitments
    (filters :generate most-contended-for-set
             :select most-reliant-activity)
  :propagation-methods none)
```
### 4.5.6 Policy 4: Alternate Resources

Consider an activity network where each activity requires one or more resources and, for each requirement, has a set of resources to choose one from. In other words, each resource request of an activity has alternatives to select from. The fact that an activity may conjunctively require more than one resource has no bearing on this commitment. Thus, the variables here are all resource requests which have a non-unit domain; assigning a value to one means assigning it a single resource.

Rather than assigning one resource to a request outright, we perform a finer search and follow a least-commitment approach: at each iteration we do not assign a resource to a request, but instead prune one alternative from the domain of a request.
### 4.5.6.1 Variable Ordering
The goal of our variable ordering heuristic is to select the activity which contributes most to the current bottleneck resource. The following steps are executed:
- Compute the aggregate demand curves of all resources any request for which still has alternatives.
- Compute the most-contended-for interval on each curve, and get the sum of that demand.
- Select the interval which has the maximum demand on it. This is the current point of contention.
- Generate a list of all activities contributing to that contention formation.
- Sort these activities according to their individual demand contributions to this contention, from highest to lowest contribution.
- Return the topmost activity in the list which has a non-unit domain.
We have implemented two versions of this heuristic: one based on probabilistic demand, and the other on real (actual) demand.
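The selection can be sketched as follows, assuming each resource can report its peak demand and its contributing activities (all names are hypothetical):

```java
import java.util.Comparator;
import java.util.List;

// Sketch of Policy 4's variable ordering; names are illustrative.
interface Resource {
    double peakDemand();             // demand over the most-contended-for interval
    List<Activity> contributors();   // activities demanding that interval
}

interface Activity {
    int alternativeCount();          // size of the resource request's domain
    double demandOn(Resource r);     // this activity's individual contribution
}

class BottleneckSelector {
    /** Pick the activity with alternatives left that contributes most
     *  to the peak of the current bottleneck resource. */
    static Activity select(List<Resource> contended) {
        Resource bottleneck = contended.stream()
                .max(Comparator.comparingDouble(Resource::peakDemand))
                .orElseThrow();
        return bottleneck.contributors().stream()
                .filter(a -> a.alternativeCount() > 1)   // non-unit domain only
                .max(Comparator.comparingDouble(a -> a.demandOn(bottleneck)))
                .orElse(null);
    }
}
```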
### 4.5.6.2 Value Ordering

Value ordering in this case means pruning the resource domain of the request returned by the variable ordering heuristic. The decision is straightforward: the current most-contended-for resource is removed from the domain of the selected request. The logic is that by un-assigning the "critical" activity from the current contention, we relieve maximum contention on the bottleneck.
### 4.5.6.3 Termination Criteria

Since the goal of this algorithm is to assign one specific resource to each request, that defines the termination criterion. The search terminates once all requests by all activities have exactly one resource assigned to them.
### 4.5.6.4 Policy Specification

```
(atomic-policy :name assign-resources
  :commitment-type filter-resource
  :forward-commitments
    (filters :generate most-contended-for-resource
             :select most-reliant-activity)
  :propagation-methods none
  :state-acceptance-criteria always
  :state-cost-function number-of-non-unit-domain-activities
  :termination-criteria cost==0)
```
### 4.5.7 Policy 5: Precedence Constraint Posting

Consider all as-yet-unordered activity pairs, a pair being two activities requesting the same resource, with no temporal constraint between them. Each such pair can also be viewed as an ordering \( O(i,j) \) with no value assigned to it. A value assignment in this case would be a sequence, or a precedence constraint, between the two activities. Thus, our variables in this commitment type are these possible orderings in the problem. In each iteration, we select one unassigned variable (pair) and assign a value (precedence) to it. It is to be noted that the aim is not to order all possible pairs; rather, we order only as many pairs as required to remove all capacity violations from all resources.
### 4.5.7.1 Variable Ordering
The goal of our variable ordering heuristic in this case is to select two activities $A_i$ and $A_j$, which contribute most heavily to the current contention, and return them as an unassigned ordering $O(i,j)$. The following steps are executed:
- For all contended-for resources, compute their aggregate demand curves.
- Compute the most-contended-for interval on each curve, and get the sum of that demand.
- Select the interval which has the maximum demand on it. This is the most-contended-for resource and interval.
- Generate a list of all activities contributing to that contention formation.
- Sort these activities according to their individual demand contributions to this contention, from highest to lowest contribution.
- Select the two topmost activities, i.e., two activities which contribute the most to the current contention. A temporal constraint will be posted between these activities.
Note: Our algorithm prevents any cycles in the graph by performing a complete check on these activities for any existing linkage through any partial network. If the two activities are already linked via another path, we select another activity, until we find an unlinked pair or exhaust all pairs. If we exhaust all pairs and the resource is still contended-for, the problem is infeasible.
Again, we have implemented two versions of this heuristic: one that performs all computations using probabilistic demand, and the other that uses real/actual demand.
### 4.5.7.2 Value Ordering
A variable $O(i,j)$ can take one of two values: $O_1(i\rightarrow j)$ or $O_2(j\rightarrow i)$. A value ordering heuristic should select the value which is most likely to survive future assignments. In this case, since we are sequencing activities, the above could be interpreted to mean that we select a value which is most compatible with the currently existing or “tending-towards” ordering. Thus our assertion is that the value which preserves any explicit or implicit orderings in the current search state is most likely to guide us to a solution.
One component of our value assignment is the work of [Erschler 76] on Constraint Based Analysis (CBA). CBA attempts to identify some natural orderings in the search space, based on the temporal variables of the activities forming the pairs. Following is a brief synopsis of this technique. Let us consider two activities $A_i$ and $A_j$ with earliest start times $est_i$ and $est_j$, latest finish times $lft_i$ and $lft_j$, and durations $du_i$ and $du_j$ respectively. Consider the following three relationships between these variables:
$$\lambda = lft_i - est_j \qquad \text{(EQ 25)}$$
$$\mu = lft_j - est_i \qquad \text{(EQ 26)}$$
$$\delta = du_i + du_j \qquad \text{(EQ 27)}$$
Then, for any unsequenced pair $(i,j)$, we can distinguish four cases:
1. If $\lambda < \delta < \mu$, then $i \rightarrow j$
2. If $\mu < \delta < \lambda$, then $j \rightarrow i$
3. If $\delta > \mu$ and $\delta > \lambda$, then no feasible solution is possible
4. If $\delta \leq \mu$ and $\delta \leq \lambda$, then either sequencing decision is still possible
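Once $\lambda$, $\mu$, and $\delta$ are known, the case analysis is mechanical; a sketch with illustrative names:

```java
// Sketch of the CBA case analysis from (EQ 25)-(EQ 27).
enum Ordering { I_BEFORE_J, J_BEFORE_I, INFEASIBLE, UNDECIDED }

class CBA {
    static Ordering analyze(double estI, double lftI, double duI,
                            double estJ, double lftJ, double duJ) {
        double lambda = lftI - estJ;   // room available for the sequence j -> i
        double mu     = lftJ - estI;   // room available for the sequence i -> j
        double delta  = duI + duJ;     // combined processing time
        if (delta > mu && delta > lambda) return Ordering.INFEASIBLE;  // case 3
        if (delta > lambda)               return Ordering.I_BEFORE_J;  // case 1
        if (delta > mu)                   return Ordering.J_BEFORE_I;  // case 2
        return Ordering.UNDECIDED;                                     // case 4
    }
}
```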
Case 4 is the interesting one, since this is where the heuristic search is defined. We have implemented three heuristics for this case, each of which preserves the implicit ordering, though based on a different criterion. These are:
1. **Demand Centroid**: each activity in the pair has an individual demand curve on that resource. We compute the centroids of the two curves, and sequence the activities according to the positions of their demand centroids; e.g., if the centroid of $A_1$ lies before that of $A_2$, then $A_1$ before $A_2$.
2. **Earliest Start**: a simpler approach, which looks at the earliest-start-times ($est$) of the two activities and preserves that ordering; thus, if $A_1$ starts earlier, $A_1$ before $A_2$.
3. **Temporal Slack**: here, we perform a lookahead to see which ordering ($i \rightarrow j$ or $j \rightarrow i$) leaves the larger temporal slack on the resource. We select the ordering which leaves more slack, the reasoning being that more slack means a less constrained resource; e.g., if $slack(A_1 \rightarrow A_2) < slack(A_2 \rightarrow A_1)$, then $A_2$ before $A_1$.
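As an illustration of the Demand Centroid criterion, the centroid of a discretized demand curve can be computed as a time-weighted mean; the sampled-curve representation is an assumption made for exposition:

```java
// Sketch of the Demand Centroid heuristic for case 4; names are illustrative.
class DemandCentroid {
    /** Centroid (time-weighted mean) of a demand curve sampled at unit steps. */
    static double centroid(double[] demand, double startTime) {
        double mass = 0.0, moment = 0.0;
        for (int t = 0; t < demand.length; t++) {
            mass   += demand[t];
            moment += demand[t] * (startTime + t);
        }
        return mass == 0.0 ? startTime : moment / mass;
    }

    /** True if a1 should precede a2 under the centroid criterion. */
    static boolean a1BeforeA2(double[] d1, double s1, double[] d2, double s2) {
        return centroid(d1, s1) <= centroid(d2, s2);
    }
}
```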
### 4.5.7.3 Termination Criteria
The goal of this algorithm is to sequence activities on each resource so as to remove all capacity violations. Thus, our termination criterion uses real demand on resources and real capacities, and measures the difference. The algorithm terminates when real demand is less than or equal to capacity for all resources.
### 4.5.7.4 Propagation
This policy sequences activities in time, and thus requires complete temporal propagation. After each iteration, we perform forward and backward propagation from the two activities respectively. This guarantees network consistency after propagation, if the network was arc consistent before the commitment. The routines also perform internal propagation for all activities.
### 4.5.7.5 Policy Specification

```
(atomic-policy :name precedence-constraint-posting
  :commitment-type post-precedence-constraints
  :forward-commitments
    (filters :generate [high-demand-pair, high-real-demand-pair]
             :select [CBADemandCentroid, CBAEarlyStart, CBATemporalSlack])
  :backward-commitments
    (filters :generate most-recent-failure)
  :propagation-methods chronological
  :state-acceptance-criteria always
  :state-cost-function number-of-contended-resources
  :termination-criteria cost==0)
```
### 4.6 Conclusion

In this chapter, we have presented our problem solving algorithm in detail, in terms of its component heuristics, its execution loop, commitment types, termination criteria, and so on. We have also discussed the pre-processing performed in ODO, and presented the implementation level of our algorithm in terms of PODL input, heuristic filters, and the policy mechanism.
The next chapter discusses our experiments using ODO as a supply chain scheduler.
ENTERPRISE INTEGRATION SYSTEM
Inventors: Thomas C. Walsh, Cambridge, MA (US); Michael J. Young, Boxborough, MA (US); Joseph J. DiCelis, Boylston, MA (US); David W. H. Wong, Boxborough, MA (US); Alan W. Esenther, Ashland, MA (US)
Assignee: Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.
This patent is subject to a terminal disclaimer.
Filed: Feb. 3, 2000
U.S. Patent Documents
5,848,426 A 12/1998 Wang et al.
5,870,605 A 2/1999 Bracho et al.
5,940,075 A 8/1999 Mutschler, III et al.
5,996,012 A 1/1999 Jarrie
6,012,098 A * 1/2000 Bayeh et al. 709/246
6,253,239 B1 * 6/2001 Shklov et al. 709/217
FOREIGN PATENT DOCUMENTS
EP 1 030 254 8/2000
WO WO 09/23584 5/1999
Primary Examiner—Jason D. Cardone
Attorney, Agent, or Firm—Dirk Brinkman; Andrew J. Curtin
ABSTRACT
An enterprise integration system is coupled to a number of legacy data sources. The data sources each use different data formats and different access methods. The integration system includes a back-end interface configured to convert input data source information to input XML documents and to convert output XML document to output data source information. A front-end interface converts the output XML documents to output HTML forms and the input HTML forms to the XML documents. A middle tier includes a rules engine and a rules database. Design tools are used to define the conversion and the XML documents. A network couples the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources. Mobile agents are configured to communicate the XML documents over the network and to process the XML documents according to the rules.
21 Claims, 10 Drawing Sheets
U.S. PATENT DOCUMENTS
6,345,259 B1 * 2/2002 Sandoval 705/7
6,424,979 B1 * 7/2002 Livingston et al. 715/511
6,678,715 B1 * 1/2004 Ando 718/105
* cited by examiner
FIG. 1a (Prior Art): block diagram of a legacy enterprise system comprising an Oracle 8 database, Lotus Notes, a Web server, and a SAP system, with users attached, some over a dial-up link.
```java
import org.w3c.dom.Document;

// Public interface of the back-end data access service (FIG. 5); the
// trailing numerals 510-540 are the drawing reference numerals.
public interface DataAccessService {

    /**
     * Get a document from a data source.
     *
     * @param id The id of the document. The id should at least contain the
     *           document class and unique document id. The id may also
     *           contain information specific to the back end data source,
     *           such as further processing instructions or identification
     *           information.
     * @return A DOM Document object containing the XML data.
     */
    Document get(String id);                    // 510

    /**
     * Update an existing document in the data source.
     *
     * @param id     The id of the document. The id should at least contain
     *               the document class and unique document id. The id may
     *               also contain information specific to the back end data
     *               source, such as further processing instructions or
     *               identification information.
     * @param update The new document to commit to the data source.
     */
    void put(String id, Document update);       // 520

    /**
     * Add a new document to the data source.
     *
     * @param id  A partial id for the document. The id should contain the
     *            document class. A unique document id will be generated for
     *            the document and returned by the method. The id may also
     *            contain information specific to the back end data source.
     * @param doc The document to add.
     * @return The full id of the newly added document.
     */
    String add(String id, Document doc);        // 530

    /**
     * Delete a document.
     *
     * @param id The id of the document. The id should at least contain the
     *           document class and unique document id. The id may also
     *           contain information specific to the back end data source,
     *           such as further processing instructions or identification
     *           information.
     */
    void delete(String id);                     // 540
}
```
FIG. 7 (get request flow): a get request by an agent (710) leads to determining the identity of the caller (720), identifying the document type (730), and retrieving the group-specific cache for that document type (740). If the requested document is in the cache (750), the cached document is returned (755). Otherwise, the SQL-XML mapping for the document type is located (760), a SELECT statement is constructed (770), a database connection associated with the agent's group is retrieved (775), the statement is executed (780), the result set is walked (785), fields are extracted (790), the XML document is built (794), added to the group-specific cache (796), and returned (798).
FIG. 8 (update request flow): an update request by an agent leads to determining the identity of the caller, identifying the document type, locating the update mapping for the document type, constructing an UPDATE statement, retrieving a database connection associated with the agent's group, and executing the statement. If the update succeeds, the document is added to the group-specific cache and the call returns; otherwise, an error is returned.
ENTERPRISE INTEGRATION SYSTEM
FIELD OF THE INVENTION
This invention relates generally to computerized applications, databases, and interfaces, and more particularly to integrating applications, databases, and interfaces having different formats, contexts, and designs.
BACKGROUND OF THE INVENTION
Computer and computer-related technology have enabled the use of computers in numerous enterprise functions. Almost every facet of a modern enterprise is supported by computer systems in some manner. Computerization is a necessity to allow an enterprise to remain functional and competitive in a constantly changing environment.
Computer systems are used to automate processes, to manage large quantities of information, and to provide fast and flexible communications. Many enterprises, from sole proprietorships, small stores, professional offices and partnerships, to large corporations, have computerized their functions to some extent. Computers are pervasive, not only in business environments, but also in non-profit organizations, governments, and educational institutions.
Computerized enterprise functions can include billing, order-taking, scheduling, inventory control, record keeping, and the like. Such computerization can be accomplished by using computer systems that run software packages. There are many application software packages available to handle a wide range of enterprise functions, including those discussed above.
One such package is the SAP R/2™ System available from SAP America, Inc., 625 North Governor Printz Blvd., Essington, Pa. 19029. The SAP R/2 System is a software package designed to run on IBM or compatible mainframes in a CICS (Customer Interface Control System) or IMS (Information Management System) environment. For example, SAP may use CICS to interface with user terminals, printers, databases, or external communication facilities such as IBM’s Virtual Telecommunications Access Method (VTAM).
SAP is a modularized, table driven application software package that executes transactions to perform specified enterprise functions. These functions may include order processing, inventory control, and invoice validation; financial accounting, planning, and related managerial control; production planning and control; and project accounting, planning, and control. The modules that perform these functions are all fully integrated with one another.
Another enterprise area that has been computerized is manufacturing. Numerous manufacturing functions are now controlled by computer systems. Such functions can include real-time process control of discrete component manufacturing (such as in the automobile industry), and process manufacturing (such as chemical manufacturing through the use of real-time process control systems). Directives communicated from the computer systems to the manufacturing operations are commonly known as work orders. Work orders can include production orders, shipping orders, receiving orders, and the like.
However, the computerization of different functions within a single enterprise has usually followed separate evolutionary paths. This results in incompatibility between the different systems. For example, transactions from a system for one function may have a context and a format that are totally incompatible with the context and format of another function. Furthermore, as enterprises grow through mergers and acquisitions, the likelihood of inheriting incompatible systems increases. Consequently, the legacy systems cannot provide all the information necessary for effective top level management and control.
As an additional complexity, enterprise systems need user interfaces for front-end operations. For example, in the healthcare industry, administrative staff and health care providers need reliable access to patient records. If the healthcare enterprise has evolved by a series of mergers, the possibility of a reception desk populated with half a dozen different terminals, each accessing a different patient database and a different accounting system, is a certainty, and service and profitability suffer.
Generic computerized solutions that offer an efficient, automated way to integrate an enterprise’s various computerized systems are difficult to implement. Another conventional solution is to implement a custom, computerized interface between the various systems. However, these custom solutions are usually tailored to a specific enterprise environment. As a result, the tailored solutions are not portable into other situations without major modifications. Additionally, these solutions are costly to maintain over time because of inherent difficulties in accommodating change.
Conventional solutions that meet all of the needs for collecting, retrieving, and reporting data in a complex enterprise do not exist. For example, the DASS™ system, available from SAP AG of Walldorf, Germany, is intended to automate manufacturing functions. DASS receives information from the SAP R/2 package described above. However, DASS does not appear to provide a generic solution to connect a computerized business system to a computerized manufacturing system.
FIG. 1a shows an example legacy enterprise system 10. The legacy system includes as subsystems a SAP system 11, an Oracle™ database 12, one or more legacy applications 13, Lotus Notes™ 14, a Web server 15, and user interfaces 20. The system 10 might also permit access to some functions by a mobile computer (laptop) 30 via a dial-up communications link 40.
More than likely, the legacy system 10 will exhibit one or more of the following problems. All sub-systems cannot communicate with every other sub-system because each sub-system has its own application programming interfaces (APIs). Real-time data interchange among all of the sub-systems may be impossible or extremely difficult because each sub-system stores and views data in a different way and uses different communication protocols. Modifying enterprise functions or adding automation for new functions is expensive. Each sub-system is developed in its own peculiar programming language. Users cannot always access all the data all of the time, particularly when the user is mobile. It is difficult to provide top level management with an abstract view of all system information.
What is needed is a system that can integrate various computer systems in an enterprise. The system needs to be able to convey transactional data between any number of databases regardless of their format, context, and access methodology. User interfaces to the databases need to be uniform. In addition, as enterprise functions change, new procedures and transactions must be accommodated in a minimal amount of time without having to redesign and re-implement any of the functional systems. The ideal enterprise integration system should be capable of adapting to any number of computerized functions in a modern complex enterprise.
SUMMARY OF THE INVENTION
The present invention is directed to a system and method for integrating computer systems found in many types of enterprises.
An enterprise integration system is coupled to a number of legacy data sources. The data sources each use different data formats and different access methods. The integration system includes a back-end interface configured for converting input data source information to input XML documents and for converting output XML documents to output data source information.
A front-end interface converts the output XML documents to output HTML forms and the input HTML forms to the XML documents. A middle tier includes a rules engine and a rules database. Design tools are used to define the conversion and the XML documents.
A network couples the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources. Mobile agents are configured to communicate the XML documents over the network and to process the XML documents according to the rules.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a block diagram of a legacy enterprise system;
FIG. 1b is a block diagram of an integrated enterprise system according to the invention;
FIG. 2 is a block diagram of design tools used by the system of FIG. 1b;
FIG. 3 is a block diagram of XML data accesses according to the invention;
FIG. 4 is a block diagram of a back-end interface of the system of FIG. 1b;
FIG. 5 is a diagrammatic of a public interface of the back-end interface of FIG. 4;
FIG. 6 is a block diagram of pooled connections;
FIG. 7 is a block diagram of a get request;
FIG. 8 is a block diagram of an update request; and
FIG. 9 is a block diagram of service bridge objects.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Introduction
Our invention provides a robust and scalable environment for integrating legacy enterprise computer systems. The invention integrates databases, transactions, and user interfaces having different formats, contexts, and designs, such as the sub-systems shown in FIG. 1a. We also provide for automated rules based processing.
At the core of our integration system, we utilize XML as a universal data encoding and interchange format. XML (Extensible Markup Language) is a flexible way for us to create common information formats and share both the format and the data on the Internet, the World Wide Web (WWW), intranets, and private local area networks. XML, developed by the World Wide Web Consortium (W3C), is "extensible" because, unlike HyperText Markup Language (HTML), the markup symbols of XML are unlimited and self-defining. XML is actually a simpler and easier-to-use subset of the Standard Generalized Markup Language (SGML), the standard for how to create a document structure. XML enables us to create customized "tags" that provide functionality not available with HTML. For example, XML supports links that point to multiple documents, as opposed to HTML links, which can reference just one destination each. These basic interfaces allow our integration system to view, modify and interact with linked legacy applications or legacy data sources.
System Architecture
As shown in FIG. 1b, our enterprise integration system 100 includes the following main components: a back-end interface 110, a front-end interface 120, a middle tier 130, and design tools 140. The components are connected by a network and mobile agents 101 carrying XML documents 102. The mobile agents 101 are described in greater detail in U.S. patent application Ser. No. 08/965,716, filed by Walsh on Nov. 7, 1997, incorporated herein in its entirety by reference. As a feature, the agents can travel according to itineraries, and agents can “meet” with each other at meeting points to interchange information.
With our back-end interface 110, we enable read/write/modify access to existing (legacy) applications and data sources 111. The back-end interface maps (or translates) data from legacy formats into the XML format used by our enterprise integration system 100.
The front-end interface 120 enables us to present information to users 103 using standard presentation methodologies. The front-end interface also allows the user to modify information and to generate transactions to initiate enterprise processes or workflow. The front-end interface can be modified to meet changing requirements of the enterprise.
The middle tier 130 uses our mobile agents 101 to provide an infrastructure for highly flexible, robust and scalable distributed applications. The middle tier combines server technology with a customizable business rules engine and an application framework. The middle tier also provides for the deployment of disconnected applications for mobile users. That is, the middle tier allows the mobile user to perform tasks while disconnected from the system 100.
The design tools 140 support the definition of XML document formats. The design tools also allow us to define mappings of the XML document formats and the legacy data formats, and to provide for the automated generation of forms for user presentation via the front-end interface. These components are now described in greater detail.
Back-End Interface
The back-end interface 110 is composed of one or more service bridges 112. The service bridges provide highly efficient access to various legacy systems. Hereinafter, we will frequently call the legacy systems “data sources” 111. We do not care how the legacy systems are programmed, or how their applications are structured. That is, the back-end interface of our integration system provides a generic and uniform access interface to the highly diverse legacy systems without requiring special knowledge of internal, legacy interfaces of the linked systems.
Semantically, we model the back-end interface as an XML document publishing and management system. We see the data source as "publishing" or "serving" XML documents containing enterprise information. The back-end allows users to add, update, delete, browse, and search for documents in the data source. We chose this semantic model of interaction because it provides a generic interface through which many disparate legacy systems can be accessed.
A particular data source 111 can manage multiple types of documents, such as customer accounts, purchase orders, work items, work lists, and the like. Any document in any data source can be uniquely identified and retrieved by a document identification (id) 104. In our implementation, and keeping within the spirit of XML, we use a document identification 104 that is conceptually similar to a Web page Universal Resource Locator (URL), although different in detail. As shown, the service bridges include a bridge framework (BF) 113 and a data source-specific runtime access component (RAC) 114. The service bridge is described in greater detail below with reference to FIGS. 4-9.
Bridge Framework
The bridge framework 113 provides generic high level access services for the back-end interface. The framework is relatively independent from the specifics of the linked legacy systems and is implemented with reusable code. The bridge framework performs user authentication, and identifies the user making a request of the data source. The bridge framework also identifies agents 101 making requests, and provides a means to map a generic user identity to specific “logon” information required by any of the legacy data sources, e.g., a username and a password. The bridge framework operates securely such that any sensitive data-source logon information, such as a clear-text password, is encrypted.
Connection Pooling and Document Management
The framework also manages objects involved in establishing and maintaining a connection to the data source, and provides for connection sharing and pooling. Connection pooling and sharing is used when the establishment of a connection or session with the data source is too expensive to perform on a per user basis. The connection pooling and sharing mechanism is based on “user groups.” All members of a user group access a particular data source via a shared connection pool. The connections in this pool are established within the user context of a “pseudo-user account.”
A pseudo-user account is a special data source account that represents a group of users instead of an individual user. Thus, if we have two user names, “john@accounting” and “jim@accounting,” the two accounting users both access the data source within the context of the accounting pseudo user account. Connection pooling may not be necessary for all back-end data sources, but certainly is required for relational database access.
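A sketch of group-based pooling under these conventions follows; the patent does not prescribe an implementation, and the `user@group` parsing and class names here are illustrative:

```java
import java.sql.Connection;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Sketch: users map to a pseudo-user group whose pool holds shared connections.
class GroupConnectionPools {
    private final Map<String, ArrayDeque<Connection>> pools = new HashMap<>();

    /** "john@accounting" -> group "accounting" -> that group's pool. */
    synchronized Connection acquire(String userName) {
        String group = userName.substring(userName.indexOf('@') + 1);
        ArrayDeque<Connection> pool =
                pools.computeIfAbsent(group, g -> new ArrayDeque<>());
        Connection c = pool.poll();
        return (c != null) ? c : openAsPseudoUser(group);
    }

    synchronized void release(String userName, Connection c) {
        String group = userName.substring(userName.indexOf('@') + 1);
        pools.computeIfAbsent(group, g -> new ArrayDeque<>()).push(c);
    }

    private Connection openAsPseudoUser(String group) {
        // Establish a new connection under the group's pseudo-user account;
        // this is data-source specific, so it is left abstract in the sketch.
        throw new UnsupportedOperationException("data-source specific");
    }
}
```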
Document Caching
The bridge framework also provides a tunable caching facility to increase system performance. As stated above, a primary function of the back-end interface is to access legacy data and convert that into the XML format. The bridge framework maintains XML documents in a cache 115 so that a subsequent request to retrieve the same data can bypass any data access or conversion work overhead by accessing the cached XML document.
The caching in our system is tunable. For a given type of document, a system administrator can specify caching parameters 116, such as whether caching should be enabled, a maximum lifetime before cache entries become stale, a maximum cache size, and whether the cache 115 should be persisted to disk and re-used at the next server startup. For document types that contain highly volatile data, caching can be disabled or cache entries can be set to expire quickly. For documents containing data that changes rarely, the caching parameters can be set aggressively to retain the documents in the cache.
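The caching parameters 116 and a staleness test can be sketched as follows; the field names are illustrative, not the patent's actual code:

```java
import org.w3c.dom.Document;

// Sketch of per-document-type cache parameters (116) and a freshness check.
class CacheParameters {
    boolean enabled;            // whether caching is on for this document type
    long maxLifetimeMillis;     // entries older than this are stale
    int maxSize;                // maximum number of cached documents
    boolean persistToDisk;      // re-use the cache at next server startup
}

class CacheEntry {
    final Document doc;
    final long createdAtMillis;

    CacheEntry(Document doc, long createdAtMillis) {
        this.doc = doc;
        this.createdAtMillis = createdAtMillis;
    }

    boolean isFresh(CacheParameters p, long nowMillis) {
        return p.enabled && (nowMillis - createdAtMillis) <= p.maxLifetimeMillis;
    }
}
```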
Runtime Access Component
The runtime access component (RAC) 114 is specific for a particular data source 111. The RAC uses application programming interfaces (APIs) and structures of the legacy data source to access the data and to map the data into the XML format. The exact semantics of how the data are mapped to the XML format vary. For example, the mapping can be for widely used legacy databases, such as, JDBC, JDBT, SAP, or SQL. An example JDBC implementation is described below with reference to FIG. 4. The RAC supports the following database access operations.
Query
The "query" operation retrieves a document from the data source. The caller supplies the id 104 of the document to fetch. The bridge service returns the specified information in the form of an XML document according to one of the standard programming models supported by the W3C, for example, a DOM Document object or a SAX document.
DOM (Document Object Model) is a programming interface specification that specifies a tree which applications may then explore or modify. SAX is an event-based tool, more or less "reading" the document to the application using a set of named methods to indicate document parts. SAX is typically used where efficiency and low overhead are paramount, while the DOM is used in cases where applications need random access to a stable tree of elements. The interface allows us to generate and modify XML documents as full-fledged objects. Such documents are able to have their contents and data "hidden" within the object, helping us to ensure control over who can manipulate the document. Document objects can carry object-oriented procedures called methods.
In the case of a relational database, the query operation maps to a SQL SELECT statement with a WHERE clause specifying which record or records from the database are contained in the document.
Update
The “update” operation modifies existing data in the legacy data source. The caller supplies the id of the document and a XML document containing only the fields to be modified. In the case of the relational database, the update operation maps to a SQL UPDATE statement.
Delete
The “delete” operation removes a document from the data source. The caller supplies the id of the document to delete. In the case of the relational database, the delete operation maps to a SQL DELETE statement.
Add
The “add” operation inserts a new document into the data source. The caller supplies the document in the form of a DOM Document object. The bridge service returns the id of the newly added document. In the case of a relational database, the add operation maps to a SQL INSERT INTO statement.
Browse
The browse operation, also known as "buffering," browses all of the documents in the data source of a certain type. The caller supplies the type of document to browse. The bridge service returns a browse object similar to a JDBC result set. The browse object allows the caller to traverse the results in either direction, to jump to the first or last document, and to re-initiate the browse operation. In the case of a relational database, the browse operation maps to a SQL SELECT statement that returns multiple records.
Search
The search operation browses the data source for all documents of a certain type that meet a predefined search criteria. The search criteria can be a list of fields and values which the caller wants to match against records in the database. For example, the caller might request all customer records that contain a "state" field matching the string "MA." The caller supplies the type of document to browse as well as a document containing the fields to be matched. The bridge service returns a browse object as above. In the case of a relational database, the search operation maps to a SQL SELECT statement in which the WHERE clause contains the LIKE operator.
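A sketch of this mapping follows; it is illustrative only, and production code would use parameterized statements rather than string concatenation:

```java
import java.util.Map;
import java.util.StringJoiner;

// Sketch: map search criteria to a SQL SELECT whose WHERE clause uses LIKE.
// Assumes a non-empty criteria map; identifiers and values are not escaped.
class SearchMapper {
    static String toSelect(String table, Map<String, String> criteria) {
        StringJoiner where = new StringJoiner(" AND ");
        for (Map.Entry<String, String> e : criteria.entrySet()) {
            where.add(e.getKey() + " LIKE '" + e.getValue() + "'");
        }
        return "SELECT * FROM " + table + " WHERE " + where;
    }
}
```

For the example above, `toSelect("customers", Map.of("state", "MA"))` yields `SELECT * FROM customers WHERE state LIKE 'MA'`.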
Front-End Interface
The front-end interface 120 is responsible for user presentation and interaction. The front-end interface uses “forms” to allow users to view and modify information. As an advantage, the front-end interface provides a “thin” user interface, with simple interactivity that can easily be customized as the environment in the enterprise changes. The front-end forms use HTML 121, HTTP 122, Javascript, Java servlets 123, Java applets, and plug-ins as necessary. Being Web based, the user 103 can use any standard browser 124 to interact with the system from anywhere there is an Internet access point.
HTTP Communications
The HTTP is used as the communication mechanism between agents and users. The user 103 browses and modifies information, and initiates processes via the web browser 124. User requests are routed to agents 101 via HTTP and through the Java servlet. The servlet 123 in turn communicates with a front-end service bridge 125 that serves as an interface for the agents 101.
The servlet/service bridge combination 123/125 supports the establishment of user sessions that are the channel for two-way communication between the user and the agents. Within the context of a session, the user can send HTTP GET or POST requests to the agents, and the agents process such requests and send back an HTTP response. Sessions allow the user to wait for an agent to arrive, and allow an agent to wait for a user to connect.
HTML Form Style Sheets
We accomplish the display of information to users with HTML, web pages, and web forms. As stated above, the information that agents retrieve from data sources is in the form of the XML documents 102. To format the XML documents into a form suitable for users, the front-end servlet 123 converts the XML document to an HTML page using a style sheet 126, e.g., XSL, JSP, or some other data replacement technique as described below. The result of this conversion is the HTML page containing the information in a user-friendly format. By applying the style sheet, the servlet recognizes and replaces certain data from the XML document and converts the data to HTML form.
For example, a particular XML document 102 includes the following information:
```
<customer>
<firstname>John</firstname>
<lastname>Smith</lastname>
</customer>
```
The HTML style sheet 126 for this document is as follows:
```
<html>
<h1>'customer.firstname'</h1>
<h2>'customer.lastname'</h2>
</html>
```
After applying the style sheet to the XML document, the resultant HTML form 121 would appear as:
```
<html>
<h1>John</h1>
<h2>Smith</h2>
</html>
```
The style sheet supports accessing all of the elements and attributes in the XML documents, and iteration over groups of repeating elements. For example, an XML document contains:
```
<customer type="preferred">
<firstname>John</firstname>
<lastname>Smith</lastname>
</customer>
```
The “type” attribute of the customer is accessed by using a syntax such as the following:
```
'customer.type[type]'
```
which yields the value “preferred.” Given a document containing repeating groups as follows:
```
<customers>
<customer type="preferred">
<firstname>John</firstname>
<lastname>Smith</lastname>
</customer>
<customer type="standard">
<firstname>Jones</firstname>
<lastname>Jones</lastname>
</customer>
</customers>
```
The “lastname” element of the second customer is accessed using a syntax such as ‘customers[1].lastname’ which yields the value “Jones.” To iterate over all of the customers and access their “type” attributes, an expression such as:
```
'foreach(c in customers.customer) {
c.type[type]
}'
```
can be used to produce first the string “preferred,” and then “standard.”
Validation
The front-end interface also supports the validation of user entered information. Field validation information supplies some immediate feedback and interactivity to the user. Field validation also increases application efficiency by detecting common errors within the web browser process before any other network traffic is incurred or application logic is executed. Client side validation can be broken down into two related levels.
Field-Level
Field-level validation performs simple checks on user entered data to validate that the information is of the correct format or data type. For example, field-level validation can validate that a user enters numeric values in a particular field, or uses a proper date format. We implement field-level validations with Javascript. A library of common validations is supplied as a script file on a web server. The library has a “.js” file extension. This script file can be included into
HTML forms as desired using the `<script>` HTML tag. Validation is enabled for a field by naming an appropriate validation routine within an event handler of the field, e.g., the `onChange` handler, which is triggered when an INPUT field changes. Setting up validation for a field requires HTML coding as follows:
```html
<input type="text" name="birthdate" onChange="validateDate(birthdate)">
```
The validation library provides routines for common data types such as dates, times, currency, etc. The validation library can also provide a pattern matching ability allowing user input to be matched against arbitrary patterns, e.g., a pattern `###` to match a monetary amount.
Cross-Field Validation
Cross-field validation allows for more complex validations. In this type of validation, the contents of one field depend on the contents of another field. For example, cross-field validation can detect a situation where, because of a value entered in one field, a telephone number must be entered in another. Such validation usually requires a more detailed knowledge of the requirements of the application.
Middle Tier
The middle tier 130 provides the “glue” that links the back-end and the front-end interfaces. The middle tier utilizes the mobile agents 101 to communicate with the interfaces. The middle tier also provides support for disconnected applications and users. In addition, the middle tier customizes the system 100 to the needs of specific enterprise functions without actually having to reprogram the legacy systems.
The middle tier supports the automation of complex workflow and complex validations of data that may require access to multiple data sources. As a feature, the middle tier uses a rules engine (RE) 131 operating on rules stored in a database 132. The rules are defined in a rules language, and can be retrieved by the agents 101 as needed.
In a typical scenario, the user launches an agent 105 due to interaction with the browser 124. The agent carries an XML document, e.g., a purchase order 106, to the rules database 132. The agent retrieves the appropriate rule for processing the order, such as a purchase order workflow. The agent then interprets the rule to appropriately route the document to the locations in the network specified by the rule. The rule can include a travel itinerary, as well as instructions on how to interact with the data sources.
As an advantage, the operation of our system is always current. As rules change, so does the operation of the system. The agents always execute according to the current state of the rules database.
Design Tools
As shown in FIG. 2, the primary purpose of the design tools 140 is to generate 141 XML document type definitions (DTD) 142, to specify 143 data mappings, i.e., RACs 114, to encode 144 rules 132, and to design 145 user interfaces 126.
Document Type Definitions
The step 141 identifies the different types of document information that need to be shared by the various data sources 111 of the back-end 110 and the browser 124 of the front-end 120. This information is specified in the DTDs 142. For example, to share purchase order information between systems, the type of information needed in a purchase order needs to be identified, and then that information needs to be encoded in a corresponding DTD. In one embodiment, the design tools use the service bridge to extract schemas from the data sources.
Data Mapping
After a data source independent data format has been generated, the mappings between the XML format and legacy formats for a particular database need to be specified as shown in FIG. 3. A query operation to a relational database 111 involves extracting the schema of the database by generating a SQL runtime access component (RAC) 114 which makes the JDBC calls to the database, converting the resulting data into the XML format, and handing the XML document 113 to an agent 101. The access components can be implemented as Java code. The agent delivers the XML to the front-end 120 for conversion to the HTML form 121 using the style sheet 126 so that the data can be viewed by the user 103 using a standard browser 124.
Conversely, the update operation converts the HTML form to the corresponding XML document. The XML document is converted to a legacy format and the RAC modifies the data source using its schema. For other legacy data sources that are not specified by a schema or some other metadata, the mapping may need to be done by means that access the APIs directly.
Rule Encoding
After the data format definition is generated, and the RAC has been specified to access the appropriate data source, the next step is to encode what agents are going to do with the information. In a simple data replication system, an agent may retrieve modified records from a master database, travel to the location of a backup database, and then update the backup database with a copy of the modified record. This process involves the encoding of a specific rule.
Designing the User Interface
As shown in FIG. 2, generating the user interface requires three steps: authoring document type definitions (DTDs) 145, importing DTDs 146, and generating DTDs from database schemas 147.
Authoring DTD
The design tools 140 allow the system designer to define, design, and manipulate XML and HTML DTDs. A DTD 142 defines the following for a document: the names of its elements, the content model of each element, how often and in which order elements can appear, whether start or end tags can be omitted, the possible presence of attributes and their default values, and the names of the entities.
Because the DTDs represent many different types of documents in the system, this step essentially defines the data types of the enterprise’s computerized applications. As an advantage, the resulting DTDs do not directly tie the system to any specific legacy data source, nor do the definitions preclude the integration of other legacy systems in the future.
DTD Import
The tools also allow one to import already existing DTD definitions. Such functionality can be used in environments where DTDs have already been defined for standard document types. These DTDs may have been defined by standards bodies or a designer of the legacy system.
DTD generation from Database Schema
This part of the tools automatically generates DTDs from existing database schemas.
XML→SQL Mapping Definition
Given the existence of the DTDs, the system 100 provides tools that map between legacy back-end data formats and XML document formats. In the case of relational database access, these mappings link tables, columns, and fields from the legacy database to elements and attributes of the XML documents as defined by the DTDs. This also allows the definition of several distinct mappings, each of which involves accessing slightly different information in the data source.
Data Mappings
Query Mapping
A query mapping enables an agent to retrieve information from a legacy data source. In the case of a relational database, this mapping specifies the contents of the SELECT statement, including any information relevant for a table join. A query mapping for a purchase order may involve accessing a purchase order table, a customer table, and a product catalog table.
Update Mapping
An update mapping allows an agent to modify information in the data source. This involves specifying the contents of an UPDATE statement. An update mapping for a purchase order involves updating the purchase order table, but not modifying the customer table or the product catalog table.
Delete Mapping
A delete mapping allows an agent to delete information in the data source. This involves specifying the contents of a DELETE statement. A delete mapping for a purchase order involves deleting a record or records from the purchase order table, but not modifying the customer table or the product catalog table.
Add/Create Mapping
An add/create mapping allows an agent to add information to the data source. This involves specifying the contents of an INSERT statement. An insert mapping for a purchase order involves adding a record or records to the purchase order table, but not modifying the customer table or the product catalog table.
Schema Extraction and Caching
In order to allow for mapping between a legacy database schema and XML DTD formats, the mapping design tool extracts the schema from legacy databases. Because schema extraction is an expensive and time consuming task, the tools allow one to save extracted schemas on a disk for subsequent use.
Form Generation
The tools will also allow one to automatically generate a form from a DTD. Such a form may require minor modifications to enhance the physical appearance of the form. For example, color or font size of text can be adjusted to enhance usability.
Embedding Binary Data in XML Documents
Some enterprise applications may need to retrieve arbitrary binary data from the data source. For example, a legacy database contains employee information. Included with that information is a picture of the employee in standard JPEG format. The employee information is stored as a single table named `employees`, which has the schema shown in Table 1, where the field `<image>` represents the picture:
| ID | Name       | HireDate | Photo     |
|----|------------|----------|-----------|
| 1  | John Smith | 1/1/96   | `<image>` |
The XML document retrieved from the above table appears as follows:
```xml
<employee>
<ID>1</ID>
<name>John Smith</name>
<hiredate>1/1/96</hiredate>
<photo href="http://server/directory/john.jpg" />
</employee>
```
However, there are a number of problems with this type of approach. First, it is the responsibility of the user to issue the proper additional commands to retrieve the linked document before it can be displayed, e.g., the user must click on the URL of the picture. Second, the DTD for the XML document must specify the URL. For most legacy databases, it is unlikely that the records storing the binary data are accessible via an HTTP URL. Furthermore, the binary data is transported through the system by a follow-on transport, such as HTTP. For reliability, security, consistency, and other reasons we prefer to carry all data, including binary data, with the agents.
To allow the servlet to generate an agent that can access the binary data, we define a new type of URL. The new URL incorporates the location of the binary data, as well as a unique "name" that can be used to retrieve the binary data. The URL contains the hostname of the data source, a service name, an action name that can be used to perform the retrieval of the binary data, and a document identification referring to the binary data. This still results in a fairly complex URL.
Using multiple requests to retrieve the binary data is inconsistent with our agent model. Agents try to use the network effectively by batching data into fairly large self-contained packets. This is very different from the hypertext model used on the web, in which a single page display can lead to multiple network requests.
Compound Documents
In an alternative solution, we define a compound document. In a compound document, the binary data is embedded in the same document as the textual XML data. This approach is consistent with our agent driven system that attempts to transport data as larger batches. Compound documents can be built in two ways.
Embed Binary Data into XML Text Element
The binary data is embedded directly into an XML text element. This can be done as long as the binary data is encoded in such a way that the text contains only legal XML characters. Such an encoding could be based on the Base64 encoding. Alternatively, special characters, such as "<" and ">", can be replaced with equivalent entities (i.e., &lt; and &gt;). We also can use a character data (CDATA) section to work around the problem of illegal characters within the encoded data. We may want to prefix the embedded binary data with standard MIME headers that specify content type, encoding, and name. Such a format for the photo element appears as follows:
```xml
<employee>
<ID>1</ID>
<name>John Smith</name>
<photo type="image/jpeg" encoding="Base64">/9j/4AAQSkZJRg...</photo>
</employee>
```
It should be noted that this alternative increases the size of the binary data by 33% as well as increasing the overhead to encode and decode the data.
This alternative requires that the SQL RAC extract the binary data, encode it into Base64, and then add the encoded data to the XML document with the proper MIME headers.
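A minimal sketch of that encoding step, using the standard java.util.Base64 codec; the element and attribute names, and the file name, are illustrative assumptions:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Sketch of the encoding step a SQL RAC might perform: read raw JPEG
// bytes and wrap the Base64 text in an XML element.
public class BinaryEmbedder {

    public static String embedPhoto(byte[] jpegBytes) {
        String b64 = Base64.getEncoder().encodeToString(jpegBytes);
        // Base64 output uses only A-Z, a-z, 0-9, '+', '/' and '=', so it
        // is safe inside an XML text element without a CDATA section.
        return "<photo type=\"image/jpeg\" encoding=\"Base64\">" + b64 + "</photo>";
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = Files.readAllBytes(Path.of("john.jpg")); // hypothetical file
        System.out.println(embedPhoto(bytes));
    }
}
```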
Compound Document Encoded as Mime Document
Another alternative embeds both the XML document and the binary data into separate parts of a multipart MIME document. Each part of the overall document has a Content-ID, which is referenced from a standard XML link in the text part. Such a format appears as follows:
```
Content-Type: multipart/related; boundary="XXXXX"

--XXXXX
Content-Type: text/xml
Content-ID: foo

<employee>
  <ID>1</ID>
  <name>John Smith</name>
  <photo href="cid:photo"/>
</employee>

--XXXXX
Content-Type: image/jpeg
Content-Encoding: base64
Content-Name: john.jpg
Content-ID: photo

/9j/4AAQSkZJRg...
--XXXXX--
```
With this alternative, the binary data may not need to be encoded. However, this requires that agents also retrieve MIME documents via the RAC.
JDBC Service Bridge
FIG. 4 shows details of a preferred embodiment of a service bridge 400 of the back-end interface 110 for accessing a data source. In this embodiment, JDBC is used to access a SQL type of database. The bridge 400 includes a public interface 410, JDBC run-time access component (RAC) 420, XML-SQL data mapping 430, and a document cache 440 as its main components.
Public Interface
As stated above, the public interface 410 provides the means by which agents access the data sources 111. The public interface allows data retrieval, modification, and addition. As an advantage, the public interface 410 makes no assumptions about how data in the legacy database 111 is sourced or maintained. Instead, we make the public interface resemble the GET/PUT model of HTTP.
JDBC Run-Time Access Component
The JDBC access component 420 is responsible for establishing and managing JDBC connections, building and executing SQL statements, and traversing result sets. This component works entirely within the context of JDBC and SQL.
XML-SQL Data Mapping
The XML-SQL data mapping 430 uses the mapping information generated by the design tools 140 to map data between XML and SQL.
Document Cache
The document cache 440 operates entirely with XML documents. XML documents that have been retrieved from the data source can be cached for fast future retrieval. The caching services are configurable so that maximum cache sizes and cache item expiration times can be specified. Caching can be disabled for certain classes of documents which contain highly volatile information.
FIG. 5 shows the public interface 410 in greater detail. The interface supports four basic types of accesses, namely get 510, put 520, add 530, and delete 540.
At the heart of the interface is the document id 104. The document id is a string which uniquely identifies every document instance within the data source. The document id can be thought of as corresponding to the URL of a World Wide Web document, or to the primary key of a record in a database. Although the id has a different format than a URL, it does serve as a document locator.
In order to interact with information in the legacy data source, an agent needs to provide the id for the document containing the information. The id contains multiple sections of information and follows the pattern described below.
The first character of the id string specifies a separator character (S) 501 that is used to separate the different sections that make up the document id, e.g., a colon (:). This character is used in conjunction with a Java StringTokenizer to parse the document id. The subsequent information in the id includes name=value pairs (N, V) 502. One pair 502 specifies the document type, e.g., “type=cust_list:”
In most common cases, the id 104 also contains a key specifying the exact document instance in order to uniquely identify an individual document in a data source. For example, in a document containing customer information, this key contains a data source specific customer number or a customer id. Within the service bridge, this key is mapped to a WHERE clause of a SQL statement. For example, an agent can request customer information for a particular customer by specifying an id string as follows:
```
"type=customer:key=SMITH:"
```
This request results in a SQL query to the database that appears as follows:
```
SELECT * FROM Customers WHERE Customers.ID='SMITH'
```
The exact semantics of how the key is mapped into the resultant SQL statement are specified by the design tools 140.
The key portion of the id can be composed of multiple pieces of information separated by, for example, commas. Such a key is used in cases in which the WHERE clause of the corresponding SQL query needs multiple pieces of information to be specified by the agent. An example of this is a document containing a list of customers, where the customers’ names are within a certain alphabetic range, for example, “all customers whose last names begin with the letters A or B.” Such a document has an id as follows:
```
"type=cast_list_by_name:keys=A,Bzzz:
```
In this case, the request would map into a SQL statement resembling the following:
```
SELECT * FROM Customers
WHERE Customers.LastName BETWEEN 'A' AND 'Bzzz'
```
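A minimal sketch of this id parsing follows. Note that the example ids above are written without the leading separator character; the sketch assumes the full form in which the separator is the first character, e.g. ":type=customer:key=SMITH:". The class name is our assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Sketch of document-id parsing: the first character names the section
// separator, and each section is a name=value pair.
public class DocumentIdParser {

    public static Map<String, String> parse(String id) {
        char sep = id.charAt(0);
        Map<String, String> sections = new LinkedHashMap<>();
        StringTokenizer t = new StringTokenizer(id.substring(1), String.valueOf(sep));
        while (t.hasMoreTokens()) {
            String pair = t.nextToken();
            int eq = pair.indexOf('=');
            sections.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return sections;
    }

    public static void main(String[] args) {
        System.out.println(parse(":type=customer:key=SMITH:"));
        // prints: {type=customer, key=SMITH}
    }
}
```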
Implementation Details of the Service Bridge
Database Access
User Authentication
The service bridge is responsible for performing any authentication necessary in order to establish a database connection. This may involve supplying a database-specific username and password or other login information. When a database access (get, put, add, delete) is made by an agent, the bridge examines the agent's runtime context to determine the user identity associated with the agent.
After the agent’s identity has been ascertained, the service bridge maps the identity into a database-specific user identification using a mapping table generated by the design tools. For example, the mapping maps the user identity "steve@accounting" into an Oracle username "steve."
In order to establish a connection to a database on behalf of a user, the service bridge retrieves both the username and clear-text password for the corresponding database user account. In such cases, the clear-text password is stored in the identity-mapping table. For security reasons, the table is encrypted on disk using a public/private key pair.
Connection Management
To enhance performance and scalability, the service bridge supports database connection pools. This means that multiple users share a common pool of JDBC connections. Establishing a database connection can be a slow and relatively expensive operation. The use of shared connection pools decreases this expense.
The basis for this connection sharing is "user groups." When an agent attempts an operation which requires a connection to a database, the service bridge performs that operation using a connection established in the context of a special "pseudo-user" account. The pseudo-user is a database system account that represents not an individual user, but instead a particular group of users. A pool of such pseudo-user connections is available for use by all of the agents of the group. The service bridge creates and maintains a connection pool for each distinct group of users who access the bridge.
FIG. 6 shows agents 101 for three users tom, joe and david 601–603 accessing the data source 111. Two of the users, tom@users and joe@users, are members of a users group. The third user, david@managers, is a member of a "managers" group. When these agents attempt to access the database, the two members of the users group share a connection pool 610 that was established with the credentials of the "users" pseudo-user. The third agent will communicate with the database using a separate connection pool 620 established with the credentials of the "managers" pseudo-user.
A connection pool for a particular group is generated when a member of the group makes the first access request. Connections within the pool are constructed as needed. The service bridge does not pre-allocate connections. After a configurable, and perhaps long period of inactivity, the connection pool is closed to free database resources. If a connection pool for a particular group has been closed due to inactivity, then any subsequent request by a member of that group results in the generation of a new pool. When a request is completed, the connection allocated for that request is returned to the pool. A maximum number of connections in a pool can be specified. If no connections are available when a request is made, then the request is blocked until a connection becomes available.
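A minimal sketch of per-group pooling under these assumptions; the JDBC URL and the credential arguments are hypothetical, and the maximum pool size, blocking, and idle-timeout behavior described above are omitted for brevity:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch: one connection pool per user group, each pool established
// with the credentials of that group's pseudo-user.
public class GroupConnectionPools {

    private final Map<String, Deque<Connection>> pools = new HashMap<>();
    private final String url = "jdbc:oracle:thin:@dbhost:1521:orcl"; // hypothetical

    public synchronized Connection acquire(String group, String pseudoUser,
                                           String password) throws SQLException {
        Deque<Connection> pool = pools.computeIfAbsent(group, g -> new ArrayDeque<>());
        Connection c = pool.poll();
        // Connections are constructed lazily; none are pre-allocated.
        return (c != null) ? c : DriverManager.getConnection(url, pseudoUser, password);
    }

    public synchronized void release(String group, Connection c) {
        pools.get(group).push(c); // return the connection to its group's pool
    }
}
```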
Statement Construction and Execution
The actual generation and execution of SQL statements is performed by a separate "modeler" object. The modeler object is generated by the design tools 140. For each type of document used in the system, there is a distinct modeler object. Each modeler knows how to construct exactly one type of document. During the design process, one specifies what information is to be retrieved from the database, and how to map the information into an XML document. The design tools serialize and save the modeler objects in a "ser" file. At runtime, the service bridge de-serializes the modeler objects from the "ser" file. The resultant modeler objects are able to perform all of the data access and mapping functions required to retrieve information from the data sources. As stated above, SQL to XML data mapping is performed by the modeler object designed for a particular document type.
Data Caching
To improve the performance of document retrieval, the data service caches database information as converted XML documents. When a first request is made to retrieve a document, the service performs the SQL access and SQL to XML data mapping as described above. The resultant XML document is added to the cache of documents 440 maintained by the service bridge. Any subsequent request to retrieve the document will be satisfied by retrieving the document from the cache, bypassing the need for an additional expensive database access and mapping.
When an update or addition is made to a data source, the cache is updated to reflect the new information. The update to the cache is made only after the SQL statement performing the update of the end database has been completed successfully. This prevents the cache from storing information that has not been committed to the database due to errors or to security restrictions.
The XML document cache is configurable to specify a maximum size of the cache, the maximum amount of time a single document can be retained in the cache before it becomes stale, and whether the cache should be persisted to disk, in which case the cache can be re-used after a server restart. One can also customize how different classes of documents are cached. If a document represents highly volatile information, then caching can be disabled for that class of document. If a document class is completely (or virtually) static, then documents of that class can be cached for a very long time.
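A minimal in-memory sketch of such a cache, with a bounded size and per-entry expiration; disk persistence and per-document-class configuration are omitted, and all names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: LRU-bounded cache of XML documents with per-entry expiration.
public class DocumentCache {

    private static final class Entry {
        final String xml;       // the cached XML document
        final long storedAt;    // insertion time, for staleness checks
        Entry(String xml) { this.xml = xml; this.storedAt = System.currentTimeMillis(); }
    }

    private final int maxEntries;
    private final long maxAgeMillis;
    private final LinkedHashMap<String, Entry> cache;

    public DocumentCache(int maxEntries, long maxAgeMillis) {
        this.maxEntries = maxEntries;
        this.maxAgeMillis = maxAgeMillis;
        // Access-order map: the eldest (least recently used) entry is evicted.
        this.cache = new LinkedHashMap<String, Entry>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Entry> eldest) {
                return size() > DocumentCache.this.maxEntries;
            }
        };
    }

    public synchronized void put(String documentId, String xml) {
        cache.put(documentId, new Entry(xml));
    }

    public synchronized String get(String documentId) {
        Entry e = cache.get(documentId);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.storedAt > maxAgeMillis) {
            cache.remove(documentId); // entry went stale
            return null;
        }
        return e.xml;
    }
}
```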
Execution Flow
The following section describes the execution flow for basic database access requests. FIG. 7 shows the steps 700 of a "get" or retrieval access in greater detail. After the request is received from the agent 710, the caller and document identity are determined 720, 730. The group-specific cache is identified 740, and the cache is checked 750. If the cache stores the document, the document is returned in step 755. Otherwise, the XML-SQL mapping is located 760, the SQL SELECT statement is constructed 770, a connection is retrieved 775, and the statement is executed in step 780. Next, the result set is "walked" 785, fields are extracted 790 to build the XML document 794, and the document is cached 796 and returned to the agent in step 798. FIG. 8 shows the steps 800 for addition (add) and modification (put), which are similar to the get steps. The delete request simply deletes data from the database, as shown at 540 in FIG. 5.
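The same flow, condensed into a Java sketch; every collaborator below is a stub standing in for machinery described elsewhere in this description, so only the ordering of the steps is meaningful:

```java
import org.w3c.dom.Document;

// Sketch of the "get" flow of FIG. 7: check the group cache first, fall
// back to the modeler (SQL access plus XML mapping), then cache the result.
public class GetFlow {

    interface Cache { Document lookup(String id); void store(String id, Document d); }
    interface Modeler { Document buildDocument(String id); }

    private final Cache cache;
    private final Modeler modeler;

    GetFlow(Cache cache, Modeler modeler) {
        this.cache = cache;
        this.modeler = modeler;
    }

    public Document get(String documentId) {
        Document cached = cache.lookup(documentId);          // steps 740-755
        if (cached != null) return cached;
        Document built = modeler.buildDocument(documentId);  // steps 760-794
        cache.store(documentId, built);                      // step 796
        return built;                                        // step 798
    }
}
```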
Run-time Object Hierarchy
FIG. 9 shows the run-time hierarchy 900 of objects of the service bridge 110. The objects can be classified as data source independent 901 and data source dependent 902. The data source independent objects 901 include a data source factory object 910 indexed by group name, group-specific data source objects 920, document factory objects 930 (one per document), document cache objects 940, document builder objects 950, connection pool objects 960, mapping table objects 970, document manager objects 980, and data source manager objects 990. The data source dependent objects 902 include source connection 991, string authentication 992, document map 993, and specific driver objects 994.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. An enterprise integration system, comprising:
a back-end interface, coupled to a plurality of data sources, configured to convert input data source information to input XML documents and to convert output XML documents to output data source information, wherein the plurality of data sources use different data formats and different access methods;
a front-end interface including means for converting the input XML documents to input HTML forms and for converting output HTML forms to the output XML documents;
a middle tier including a rules engine and a rules database;
design tools for defining the conversion and the XML documents;
a network coupling the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources; and
a plurality of mobile agents configured to communicate the XML documents over the network and to process the XML documents according to the rules.
2. The system of claim 1 wherein each XML document is identified by a document identification.
3. The system of claim 2 wherein the document identification is a character string.
4. The system of claim 3 wherein the character string includes a plurality of sections, and a first character of the string is a section separator.
5. The system of claim 4 wherein one of the sections stores a document type.
6. The system of claim 4 wherein one of the sections stores a key to an instance of the XML document in one of the data sources.
7. The system of claim 1 wherein the back-end interface further comprises:
a public interface;
a document cache; and
a run-time access component.
8. The system of claim 7 wherein the run-time access component generates access requests for the plurality of data sources.
9. The system of claim 8 wherein the access requests include query, update, delete, add, browse, and search.
10. The system of claim 7 wherein the public interface forwards the input XML document to the plurality of the mobile agents for distribution, and the public interface receives the output XML documents for storing in the plurality of data sources.
11. The system of claim 7 wherein the document cache includes caching parameters.
12. The system of claim 11 wherein the caching parameters include a maximum lifetime for each cache entry, a maximum cache size, and a persistence indicator.
13. The system of claim 1 wherein the XML documents include binary data.
14. The system of claim 13 wherein the binary data is embedded as a compound document.
15. The system of claim 14 wherein the compound document embeds the binary data as an encoding in a character set.
16. The system of claim 14 wherein the compound document embeds the binary data as a MIME document.
17. The system of claim 13 wherein the binary data is referenced by a Universal Resource Locator.
18. The system of claim 1 wherein the input documents are presented to a browser.
19. The system of claim 1 wherein the back-end interface performs user authentication.
20. The system of claim 1 wherein the back-end interface supports database connection pools.
21. A method for integrating a plurality of data sources, comprising:
converting input data source information to input XML documents and converting output XML documents to output data source information, wherein the plurality of data sources use different data formats and different access methods;
converting the input XML documents to input HTML forms and converting output HTML forms to the output XML documents;
providing a rules engine and a rules database;
defining the converting and the XML documents;
communicating the XML documents over a network using mobile agents; and
processing the XML documents by the mobile agents according to the rules database.
* * * * *
Reasoning about intended actions
Chitta Baral† and Michael Gelfond‡
†Department of Computer Science and Engineering
Arizona State University, Tempe, AZ 85233, USA.
chitta@asu.edu
‡Department of Computer Science
Texas Tech University, Lubbock, TX 79409, USA.
mgelfond@cs.ttu.edu
Abstract
In most research on reasoning about actions and reasoning about narratives one either reasons about hypothetical execution of actions, or about actions that actually occurred. In this paper we first develop a high level language that allows the expression of intended or planned action sequences. Unlike observed action occurrences, planned or intended action occurrences may not actually take place. But often when they do not take place, they persist, and happen at an opportune future time. We give the syntax and semantics for expressing such intentions. We then give a logic programming axiomatization and show the correspondence between the semantics of a description in the high level language, and the answer sets of the corresponding logic programming axiomatization. We illustrate the application of our formalism with respect to reasoning about trips.
Introduction and Motivation
In reasoning about actions (for example, (?), (?)) and reasoning about narratives we often reason about action sequences that are executed in a particular situation, or actions that happened at particular time points. Alternatively, there has been some work on reasoning about natural actions (?) and actions that are triggered. In this paper we consider intended execution of actions and formalize how to reason about such intentions.
To motivate this further, consider a narrative where an agent intended to execute action \(a\) at time point \(i\). A commonsense reasoner looking back at this intention would conclude that the agent must have executed action \(a\) at time point \(i\). To ground this example, suppose the wife of our reasoner says that she intends to leave work at 5 PM. At 6 PM the commonsense reasoner would conclude that his wife must have left at 5 PM. Now suppose the reasoner checks his email and finds an email from his wife saying that she has been held up in a meeting, and later gets information that the meeting ended at 5:30. The reasoner would then conclude that his wife must have left at 5:30 PM. I.e., her intended action, since it became impossible at the initially intended time point, must have persisted and been executed at the next time point at which it became executable.
Now let us generalize this to a sequence of actions where an agent intends to execute a sequence \(a_1, \ldots, a_n\) at time point \(i\). Now what if it happens (the world evolved in such a way) that the executability condition of \(a_k\) is not true at the time point where \(a_{k-1}\) ended. Does this mean the agent abandoned his intention to execute \(a_1, \ldots, a_n\)? It seems to us that most agents, if they failed to execute their intended action \(a_k\) after the execution of \(a_{k-1}\), would execute \(a_k\) at the next possible time point when it became executable. As before, let us consider a more grounded example. John is supposed to have taken flight A to B and then take a connection from B to C. Suppose Peter finds out that John’s flight from A to B was late. Once Peter knows when exactly John reached B, his reasoning would be that John would have taken the next flight from B to C. In other words, failure to go from B to C at a particular time point does not mean that John would have abandoned his intention to go from B to C; rather, most likely he would have just done it at the next possible time point. This actually happened to one of the authors. He correctly guessed that his wife would take the next flight (after missing a connection) and was able to meet her at the airport when the next flight arrived.
In most earlier work on reasoning about actions and narratives (for example, (?)), if one or many of the actions in a given sequence \(a_1, \ldots, a_n\) are not executable or otherwise prevented from execution, then the reasoning process rigidly assumes that either the actions were not executed or considers the domain to be inconsistent. The formulation there is appropriate with respect to the assumptions in those languages. Here we consider the new notion of “intended (or planned) execution of actions,” which needs a different formalization. In this we can take pointers from prior studies on intentions (?), (?). In particular, intentions have been studied from the point of view of the design of rational agents (?), and they are one of the three main components of BDI (Belief-Desire-Intention) agents. In (?), various properties of the ‘intentions’ of a rational agent are discussed. In particular the author says:
Summarizing, we can see that intentions play the following important roles in practical reasoning
• Intentions drive means-ends reasoning.
If I have formed an intention, then I will attempt to achieve the intention, ...
• Intentions persist.
I will not usually give up on my intentions without good reason – they will persist, ...
... In this paper we first present an action language that allows the expression of intentions. We then use AnsProlog (logic programming with answer set semantics) to implement reasoning with intentions. The ability of AnsProlog to express defaults and normative reasoning, becomes a key tool in expressing the normative reasoning associated with characterizing intentions, in particular the statements: (i) normally intended actions take place, and (ii) normally intentions, that are not executed as intended, persist.
Syntax and Semantics of the language
The signature of our language \( \mathcal{ALI} \) contains two disjoint finite sets: \( A \), a set of names for elementary actions (agent’s and exogenous); and \( F \), whose elements are referred to as fluents and used to denote dynamic properties of the domain. By fluent literals we mean fluents and their negations (denoted by \( \neg f \)). The set of literals formed from a set \( X \subseteq F \) of fluents will be denoted by \( \text{lit}(X) \). A set \( Y \subseteq \text{lit}(F) \) is called complete if for any \( f \in F \), \( f \in Y \) or \( \neg f \in Y \); \( Y \) is called consistent if there is no \( f \) such that \( f, \neg f \in Y \).
Actions are sets \( \{a_1, \ldots, a_n\} \) of elementary actions. Intuitively, execution of an action \( \{a_1, \ldots, a_k\} \) corresponds to the simultaneous execution of its components. Action sequences are constructed using \( \langle \cdot \rangle \) a la Prolog, i.e. we allow sequences \( \langle \{a_1, a_2\}, \{a_3, a_4\} \rangle \), etc. We will frequently identify an action \( a \) with the sequence \( \langle a \rangle \).
By a transition diagram over signature \( \Sigma \) we mean a directed graph \( T \) such that:
(a) the states of \( T \) are labeled by complete and consistent sets of fluent literals (corresponding to possible physical states of the domain), denoted by \( \sigma \)’s.
(b) the arcs of \( T \) are labeled by actions.
Paths of a transition diagram, which are of the form \( \langle \sigma_1, a_1, \sigma_2, \ldots, a_{n-1}, \sigma_n \rangle \), are called trajectories of the domain.
Background: Representation of the transition diagram
In this section we briefly review the syntax of an action description language \( \mathcal{AL} \) (?) and its semantics, which defines the transition diagram corresponding to a given action description in \( \mathcal{AL} \).
An action description of \( \mathcal{AL} \) is a collection of propositions of the form (1) \( \text{causes}(a_e, l_0, \{l_1, \ldots, l_n\}) \), (2) \( \text{caused}(l_0, \{l_1, \ldots, l_n\}) \), and (3) \( \text{impossible\_if}(a_e, \{l_1, \ldots, l_n\}) \);
where \( a_e \) is an elementary action and \( l_0, \ldots, l_n \) are fluent literals. The first proposition says that, if the elementary action \( a_e \) were to be executed in a situation in which \( l_1, \ldots, l_n \) hold, the fluent literal \( l_0 \) will be caused to hold in the resulting situation. Such propositions are called dynamic causal laws. The second proposition, called a static causal law, says that, in an arbitrary situation, the truth of the fluent literals \( l_1, \ldots, l_n \) is sufficient to cause the truth of \( l_0 \). The last proposition says that action \( a_e \) cannot be performed in any situation in which \( l_1, \ldots, l_n \) hold. (The one presented here is actually a simplification of \( \mathcal{AL} \). Originally \( \text{impossible\_if} \) took as argument an action rather than an elementary one. The restriction of \( a_e \) to being elementary is not essential and can be lifted. We require it to simplify the presentation.)
To define the transition diagram, \( T \), given by an action description \( AD \) of \( \mathcal{AL} \) we use the following terminology and notation. A set \( S \) of fluent literals is closed under a set \( Z \) of static causal laws if \( S \) includes the head, \( l_0 \), of every static causal law such that \( \{l_1, \ldots, l_n\} \subseteq S \). The set \( Cn_Z(S) \) of consequences of \( S \) under \( Z \) is the smallest set of fluent literals that contains \( S \) and is closed under \( Z \). \( E(a_e, \sigma) \) stands for the set of all fluent literals \( l_0 \) for which there is a dynamic causal law \( \text{causes}(a_e, l_0, \{l_1, \ldots, l_n\}) \) in \( AD \) such that \( \{l_1, \ldots, l_n\} \subseteq \sigma \); \( E(a, \sigma) = \bigcup_{a_e \in a} E(a_e, \sigma) \). The transition system \( T = (S, R) \) described by an action description \( AD \) is defined as follows:
1. \( S \) is the collection of all complete and consistent sets of fluent literals of \( \Sigma \) closed under the static laws of \( AD \).
2. \( R \) is the set of all triples \( \langle \sigma, a, \sigma' \rangle \) such that \( a \) is executable in \( \sigma \) (i.e., \( AD \) does not contain a proposition of the form \( \text{impossible\_if}(a_e, \{l_1, \ldots, l_n\}) \) such that \( a_e \in a \) and \( \{l_1, \ldots, l_n\} \subseteq \sigma \)) and \( \sigma' \) is the fixpoint of the equation
\[
\sigma' = Cn_Z(E(a, \sigma) \cup (\sigma \cap \sigma')) \tag{1}
\]
where \( Z \) is the set of all static causal laws of \( AD \). The argument of \( Cn_Z \) in (1) is the union of the set \( E(a, \sigma) \) of the “direct effects” of \( a \) with the set \( \sigma \cap \sigma' \) of facts that are “preserved by inertia”. The application of \( Cn_Z \) adds the “indirect effects” to this union.
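As a small worked instance of equation (1) (our own illustration, not from the original text): suppose \( AD \) contains the laws \( \text{causes}(a, f, \{\}) \) and \( \text{caused}(g, \{f\}) \), and let \( \sigma = \{\neg f, \neg g\} \). For \( \sigma' = \{f, g\} \) we get
\[
E(a, \sigma) = \{f\}, \qquad \sigma \cap \sigma' = \emptyset, \qquad
Cn_Z(\{f\} \cup \emptyset) = \{f, g\} = \sigma',
\]
so \( \{f, g\} \) is the unique successor state; the indirect effect \( g \) is added by \( Cn_Z \).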
We call an action description deterministic if for any state \( \sigma_0 \) and action \( a \) there is at most one state \( \sigma_1 \) such that \( \langle \sigma_0, a, \sigma_1 \rangle \in R \).
Syntax of the rest of the language: Observations and intentions
As we mentioned earlier our focus is on the recorded history, including past intentions, and their characterization on how the world evolved. The recorded history is a collection of statements of the following forms:
(i) \( \text{intended}(a_1, i) \), (ii) \( \text{happened}(a_2, i) \), and (iii) \( \text{observed}(l, i) \),
where \( a \)‘s are action sequences, \( l \) is a fluent literal, and \( i \) is a time-step. We assume that the elementary actions of \( a_1 \) are not exogenous.
Intuitively, the statement \( \text{intended}(a_1, i) \) means that the agent intended to execute the action sequence \( a_1 \) at time point \( i \). In the context of an agent architecture one can view this as the agent having made a plan (at time point \( i \)) to achieve its goal, the plan being to execute \( a_1 \). As mentioned earlier, it may so happen that the first action of \( a_1 \) is not immediately executable at time point \( i \), as things might have changed while the agent was making its plan. In that case the intuition is that the agent would execute it at the next possible time point. (The agent would most likely not go for making a new plan immediately, as there is no guarantee that things would remain unchanged while he is making the new plan. But if \( a_1 \) does not become executable for a long time, then the agent may indeed look for alternatives.)
The intuitive meaning of the statements \(\text{happened}(a_2, i)\) and \(\text{observed}(l, i)\) are that the sequence of actions \(a_2\) was observed to have happened starting from time point \(i\), and \(l\) was observed to be true at time point \(i\) respectively.
**Semantics**
For the formal characterization, since we adopt the usual meaning of \(\text{happened}\) and \(\text{observed}\), our main focus is the characterization of \(\text{intended}\). In particular, our characterization formulates the following assumptions:
1. a reasoner executes an intended action the moment such execution becomes possible;
2. intending an execution of a sequence of actions \(a_1, \ldots, a_n\) at time step \(i\) consists of intending the execution of \(a_1\) at \(i\), followed by intending the execution of \(a_2\) at the time step at which execution of \(a_1\) is completed, and so on. (The intuition remains the same if the \(a_i\)'s are action sequences themselves.)
3. Intentions persist even if execution of an action at intended time-step proves to be impossible.
The following example illustrates the above assumptions.
**Example 1** In accordance with these assumptions a history consisting of \(\text{intended}(\langle a_1, a_2 \rangle, 1)\) defines a collection of trajectories of the form:
\[
\langle \sigma_1, a_1, \sigma_2, a_2, \sigma_3 \rangle,
\]
while a history consisting of \(\text{intended}(\langle a_1, a_2 \rangle, 1)\) and \(\text{happened}(a_3, 2)\), where \(a_2\) and \(a_3\) can not be executed in parallel, defines a collection of trajectories of the form
\[
\langle \sigma_1, a_1, \sigma_2, a_3, \sigma_3, a_2, \sigma_4 \rangle.
\]
We now define when a trajectory is a model of a history. In this we assume that all actions that have occurred are either recorded by \(\text{happened}\) facts, or are due to intentions.
**Definition 1** Let \(P = \langle \sigma_1, a_1, \sigma_2, \ldots, \sigma_m, a_m, \sigma_{m+1} \rangle\) be a trajectory.
1. \(P\) is said to satisfy a statement \(\text{intended}(a, i)\), where \(a\) is an action, if there is \(j \geq i\) such that \(a \subseteq a_j\) and for every \(i \leq k < j\), \(a\) is not executable at \(\sigma_k\) (i.e., for some \(a_e \in a\), we have \(\text{impossible\_if}(a_e, \{l_1, \ldots, l_n\})\) in our action description such that \(\{l_1, \ldots, l_n\} \subseteq \sigma_k\)). We then say that \(j+1\) is the point of \(a\)’s completion, and we say that each element of \(a\) is supported at \(j\).
2. \(P\) is said to satisfy a statement \(\text{intended}(\alpha, i)\) where \(\alpha = \langle a_1', \ldots, a_n' \rangle\), and \(n > 1\), if \(P\) satisfies \(\text{intended}(a_1', i)\) and \(\text{intended}(\langle a_2', \ldots, a_n' \rangle, j)\), where \(j\) is the point of \(a_1'\)’s completion.
3. \(P\) is said to satisfy a statement \(\text{observed}(f, i)\) if \(f\) is true in \(\sigma_i\).
4. \(P\) is said to satisfy a statement \(\text{happened}(\alpha, i)\), where \(\alpha = \langle a_1', \ldots, a_n' \rangle\), if for \(1 \leq j \leq n\), \(a'_j \subseteq a_{i+j-1}\). We then say that each element of \(a'_j\) is supported at \(i+j-1\).
5. \(P\) is a model of \(H\) if it satisfies all the statements of \(H\), and for \(1 \leq i \leq m\), all elements of \(a_i\) are supported at \(i\).
**Axiomatization of the semantics in AnsProlog**
In this section we give an AnsProlog encoding that captures the semantics of the previous section. Initially, we assume that there is a set of rules which capture the transition diagram. With that assumption, our initial goal is to write the additional AnsProlog rules which when given facts about the history \(H\), consisting of \(\text{happened}\), \(\text{observed}\) and \(\text{intended}\) atoms, will enumerate trajectories (through its answer sets) that are models of \(H\). This encoding of \(H\) consists of the representation of the \(\text{happened}\), \(\text{intended}\), and \(\text{observed}\) facts as given below (denoted by \(\alpha(H)\)), and the rules itemized in 1, 2, and 3 below. The rules are denoted as \(\Pi_1\).
Since \(\text{happened}\) and \(\text{intended}\) facts are about sequences of actions, we represent them in \(\alpha(H)\) as follows. To encode \(\text{happened}(\alpha, i)\), where \(\alpha = \langle a_1, \ldots, a_n \rangle\) and each \(a_j = \{a_{j1}, \ldots, a_{jk_j}\}\), we introduce a new constant \(s\) naming the sequence and write the facts:
\[
\begin{align*}
&\text{happened}(s, i). \\
&\text{seq}(s, 1, a_1).\quad \text{in}(a_{11}, a_1).\ \ldots\ \text{in}(a_{1k_1}, a_1). \\
&\qquad\vdots \\
&\text{seq}(s, n, a_n).\quad \text{in}(a_{n1}, a_n).\ \ldots\ \text{in}(a_{nk_n}, a_n).
\end{align*}
\]
\(\text{intended}(\alpha, i)\) is encoded similarly; \(\text{observed}\) facts are encoded directly.
The collection of rules \(\Pi_1\) that reasons about a given history consists of the following.
1. To account for \(\text{happened}\) atoms we have the following rule:
\[
\text{occurs}(A, I+J-1) \leftarrow \text{happened}(S, I),\ \text{seq}(S, J, A'),\ \text{in}(A, A').
\]
2. To account for \(\text{observed}\) atoms we have the following rules; the second is a constraint that eliminates models contradicting an observation:
\[
\begin{align*}
&\text{holds}(L, 0) \leftarrow \text{observed}(L, 0). \\
&\leftarrow \text{not } \text{holds}(L, T),\ \text{observed}(L, T).
\end{align*}
\]
3. To account for \(\text{intended}\) atoms we need to add several rules as explained below.
(a) Unfolding intention of executing a sequence to planning the execution of actions in that sequence.
\[
\begin{align*}
&\text{planned}(A, I) \leftarrow \text{intended}(S, I),\ \text{seq}(S, 1, A). \\
&\text{planned}(B, K+1) \leftarrow \text{intended}(S, I),\ I \leq K, \\
&\qquad \text{seq}(S, J, A),\ \text{occurs\_set}(A, K),\ \text{seq}(S, J+1, B).
\end{align*}
\]
The first rule above encodes that an individual action \(A\) is planned for execution at time point \(I\) if \(A\) is the first action of a sequence which is intended to be executed at time point \(I\). The second rule encodes that an individual action \(B\) is planned for execution at time point \(K+1\) if \(B\) is the \((J+1)\)-th action of a sequence intended to be executed at an earlier time point, and the \(J\)-th action of that sequence, \(A\), is executed at time point \(K\).
(b) Planned actions occur unless they are prevented
\[
\text{occurs\_set}(A, I) \leftarrow \text{planned}(A, I),\ \text{not } \text{not\_occurs\_set}(A, I).
\]
(c) If a planned action does not occur as planned then the plan persists.
\[
\text{planned}(A, I+1) \leftarrow \text{planned}(A, I),\ \text{not } \text{occurs\_set}(A, I).
\]
(d) If an action $A$ occurs then all elementary actions in $A$ occur.
\[
\text{occurs}(B, I) \leftarrow \text{occurs\_set}(A, I),\ \text{in}(B, A).
\]
(e) If an elementary action $B$ does not occur then all actions containing $B$ do not occur.
\[
\text{not\_occurs\_set}(A, I) \leftarrow \text{not\_occurs}(B, I),\ \text{in}(B, A).
\]
Example 2 We now illustrate the Smodels encoding of the above with respect to the second part of Example 1. Since that example deals with actions that are singletons, we simplify the code a bit.
\[
\begin{align*}
&\text{action}(a_1; a_2; a_3). \qquad \text{time}(1..3). \\
&\text{intended}(s, 1). \qquad \text{seq}(s, 1, a_1). \qquad \text{seq}(s, 2, a_2). \\
&\text{occurs}(a_3, 2).
\end{align*}
\]
\[
\begin{align*}
\text{not\_occurs}(B, X) \leftarrow\ & \text{action}(A),\ \text{action}(B), \\
& \text{time}(X), \\
& \text{occurs}(A, X),\ A \neq B.
\end{align*}
\]
\[
\begin{align*}
\text{planned}(A, I) \leftarrow\ & \text{intended}(S, I), \\
& \text{seq}(S, 1, A).
\end{align*}
\]
\[
\begin{align*}
\text{planned}(B, K+1) \leftarrow\ & \text{intended}(S, I), \\
& \text{seq}(S, J, A), \\
& \text{occurs}(A, K),\ \text{time}(K), \\
& \text{seq}(S, J+1, B).
\end{align*}
\]
\[
\begin{align*}
\text{occurs}(A, I) \leftarrow\ & \text{action}(A),\ \text{time}(I), \\
& \text{planned}(A, I), \\
& \text{not } \text{not\_occurs}(A, I).
\end{align*}
\]
\[
\begin{align*}
\text{planned}(A, I+1) \leftarrow\ & \text{action}(A),\ \text{time}(I), \\
& \text{planned}(A, I), \\
& \text{not } \text{occurs}(A, I).
\end{align*}
\]
As expected, the above program has a single answer set which contains:
\[
\begin{align*}
\text{planned}(a_1, 1) \quad & \text{planned}(a_2, 2) \quad \text{planned}(a_2, 3) \\
\text{occurs}(a_1, 1) \quad & \text{occurs}(a_3, 2) \quad \text{occurs}(a_2, 3)
\end{align*}
\]
Translation of the action description
So far we assumed the existence of an AnsProlog encoding of the action description part. To precisely relate the semantics of \( \mathcal{ALI} \) with an AnsProlog encoding, we now present the encoding of the action description part as given in (?).
We start with the encoding of the static and dynamic causal laws and the impossibility conditions. This encoding is done via a mapping $\alpha$, from action descriptions of $\mathcal{AL}$ into programs of AnsProlog, defined as follows:
1. $\alpha(\text{causes}(a, l_0, [l_1 \ldots l_m]))$ is the collection of atoms
\[
\begin{align*}
& \text{d\_law}(d), \quad \text{head}(d, l_0), \quad \text{action}(d, a), \\
& \text{prec}(d, 1, l_1), \ldots, \text{prec}(d, m, l_m), \quad \text{prec}(d, m+1, \text{nil}).
\end{align*}
\]
Here and below $d$ will refer to the name of the corresponding law. Statement $\text{prec}(d, i, l_i)$, with $1 \leq i \leq m$, says that $l_i$ is the $i$-th precondition of the law $d$; $\text{prec}(d, m+1, \text{nil})$ indicates that the law has exactly $m$ preconditions. This encoding of preconditions has a purely technical advantage: it will allow us to concisely express statements of the form "All preconditions of a law $d$ are satisfied at moment $T$". (See rules (3)-(5) in the program $\Pi_2$ below.)
2. $\alpha(\text{caused}(l_0, [l_1 \ldots l_m]))$ is the collection of atoms
\[
\begin{align*}
& \text{s\_law}(d), \quad \text{head}(d, l_0), \\
& \text{prec}(d, 1, l_1), \ldots, \text{prec}(d, m, l_m), \quad \text{prec}(d, m+1, \text{nil}).
\end{align*}
\]
3. $\alpha(\text{impossible\_if}(a, [l_1 \ldots l_m]))$ is the rule
\[
\text{not\_occurs}(a, T) \leftarrow \text{holds}(l_1, T), \ldots, \text{holds}(l_m, T).
\]
where $\text{occurs}(a, t)$ stands for 'elementary action $a$ occurred at time $t$'.
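For instance, for the laws of Example 3 below (the law name $d_1$ is ours, chosen arbitrarily), $\alpha$ yields:
\[
\begin{align*}
\alpha(\text{causes}(a_1, \neg f, [\,])) &= \{\, \text{d\_law}(d_1),\ \text{head}(d_1, \neg f),\ \text{action}(d_1, a_1),\ \text{prec}(d_1, 1, \text{nil}) \,\} \\
\alpha(\text{impossible\_if}(a_2, [\neg f])) &\ \text{is the rule} \quad \text{not\_occurs}(a_2, T) \leftarrow \text{holds}(\neg f, T).
\end{align*}
\]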
By $\alpha(AD)$ we denote the result of applying $\alpha$ to the laws of the action description $AD$. Finally, for any history, $H$, of $S$
\[
\alpha(AD, H) = \Pi_1 \cup \alpha(H) \cup \Pi_2 \cup \alpha(AD)
\]
where $\Pi_2$ is defined as follows:
\[
\begin{align*}
\Pi_2 = \{\quad & \\
1. \quad & \text{holds}(L, T') \leftarrow \text{d\_law}(D), \\
& \quad \text{head}(D, L), \\
& \quad \text{action}(D, A), \\
& \quad \text{occurs}(A, T), \\
& \quad \text{prec\_h}(D, T). \\
2. \quad & \text{holds}(L, T) \leftarrow \text{s\_law}(D), \\
& \quad \text{head}(D, L), \\
& \quad \text{prec\_h}(D, T). \\
3. \quad & \text{all\_h}(D, N, T) \leftarrow \text{prec}(D, N, \text{nil}). \\
4. \quad & \text{all\_h}(D, N, T) \leftarrow \text{prec}(D, N, P), \\
& \quad \text{holds}(P, T), \\
& \quad \text{all\_h}(D, N', T). \\
5. \quad & \text{prec\_h}(D, T) \leftarrow \text{all\_h}(D, 1, T). \\
6. \quad & \text{holds}(L, T') \leftarrow \text{holds}(L, T), \\
& \quad \text{not } \text{holds}(\overline{L}, T'). \\
7. \quad & \leftarrow \text{holds}(L, T), \text{holds}(\overline{L}, T). \quad \}
\end{align*}
\]
Here $D, A, L$ are variables for the names of laws, actions, and fluent literals respectively; $\overline{L}$ denotes the literal complementary to $L$; $T, T'$ denote consecutive time points; and $N, N'$ are variables for consecutive integers. (To run this program under SMODELS we need to either define the above types or add the corresponding typing predicates in the bodies of some rules of $\Pi_2$. These details are omitted to save space.)
Relation $\text{prec}_{\text{H}}(d, t)$, defined by the rule (5) of $\Pi_2$, says that all the preconditions of law $d$ are satisfied at moment $t$.
This relation is defined via an auxiliary relation \( \text{all\_h}(d, i, t) \) (rules (3), (4)), which holds if the preconditions \( l_i, \ldots, l_m \) of \( d \) are satisfied at moment \( t \). (Here \( l_1, \ldots, l_m \) stand for the ordering of preconditions of \( d \) used by the mapping \( \alpha \).) Rules (1), (2) of \( \Pi_2 \) describe the effects of causal laws and constraints of \( AD \). Rule (6) is the inertia axiom (?), and rule (7) rules out inconsistent states.
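To see how rules (3)-(5) work together, consider a law \( d \) with exactly two preconditions, encoded as \( \text{prec}(d, 1, l_1) \), \( \text{prec}(d, 2, l_2) \), \( \text{prec}(d, 3, \text{nil}) \). Rule (3) gives \( \text{all\_h}(d, 3, t) \) unconditionally; rule (4) then gives \( \text{all\_h}(d, 2, t) \) whenever \( \text{holds}(l_2, t) \), and \( \text{all\_h}(d, 1, t) \) whenever, in addition, \( \text{holds}(l_1, t) \); finally, rule (5) concludes \( \text{prec\_h}(d, t) \) exactly when both preconditions hold at \( t \).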
The following terminology will be useful for describing the relationship between answer sets of \( \alpha(AD, H) \) and models of \( H \).
**Definition 2** Let \( AD \) be an action description, \( H \) a history of \( AD \), and \( A \) a set of literals over \( \text{lit}(\alpha(AD, H)) \). We say that \( A \) defines the sequence
\[
\langle \sigma_0, a_0, \sigma_1, \ldots, a_{n-1}, \sigma_n \rangle
\]
if \( \sigma_k = \{ l \mid \text{holds}(l, k) \in A \} \) and \( a_k = \{ a \mid \text{occurs}(a, k) \in A \} \).
The following theorem establishes the relationship between action domains and histories in \( \mathcal{ALI} \) and their encoding in AnsProlog.
**Theorem 1** If the initial situation of \( H \) is complete (i.e., for any fluent \( f \) of \( AD \), \( H \) contains \( \text{observed}(f, 0) \) or \( \text{observed}(\neg f, 0) \)), and the action sequences in the atoms of \( H \) do not have repeated actions, then \( M \) is a model of \( H \) iff \( M \) is defined by an answer set of \( \alpha(AD, H) \).
We now elaborate Example 2 by adding information about the actions, their executability, and their impact on the states. We also move the starting time point to 0 to make it more interesting.
**Example 3** Let's assume that we have a fluent \( f \) which is initially true. We have three actions \( a_1, a_2 \), and \( a_3 \). \( a_1 \) is executable in all situations and causes \( \neg f \). \( a_3 \) is executable in all situations and causes \( f \). \( a_2 \) is executable in situations where \( f \) is true and causes \( \neg f \). Now suppose \( a_3 \) has been observed to occur at time point 2, and the agent intended to execute \( \langle a_1, a_2 \rangle \) at time point 0.
In that case \( a_1 \) must have been executed at time point 0. But \( a_2 \) could not have been executed at time point 1, because at time point 1 \( \neg f \) would have been true, making \( a_2 \) inexecutable. The action \( a_2 \) could not have been executed at time point 2 either, because at time point 2 \( a_3 \) occurred and two actions cannot happen at the same time. Now, at time point 3, the executability condition of \( a_2 \) was satisfied, and no other action is observed to have occurred at that time; hence \( a_2 \) must have occurred at time point 3.
We now illustrate how an AnsProlog encoding based on the previous sections does the same reasoning. Since this example does not have static causal laws, and only deals with singleton actions, we simplify the code a bit.
```
fluent(f).
literal(F) :- fluent(F).
literal(neg(F)) :- fluent(F).
action(a1; a2; a3). time(0..4).
intended(s, 0). seq(s, 1, a1). seq(s, 2, a2).
occurs(a3, 2).
-occurs(B, X) :- action(A), action(B), time(X), occurs(A, X), A != B.
planned(A, I) :- intended(S, I), seq(S, 1, A).
planned(B, K+1) :- intended(S, I), seq(S, J, A), occurs(A, K), time(K), seq(S, J+1, B).
occurs(A, I) :- action(A), time(I), planned(A, I), not -occurs(A, I).
planned(A, I+1) :- action(A), time(I), planned(A, I), not occurs(A, I).
holds(f, 0).
-holds_set(C, T) :- in(F, C), literal(F), set(C), time(T), not holds(F, T).
holds_set(C, T) :- set(C), time(T), not -holds_set(C, T).
holds(F, T+1) :- causes(A, F, C), literal(F), set(C), time(T), occurs(A, T), holds_set(C, T).
holds(F, T+1) :- holds(F, T), fluent(F), time(T), not -holds(F, T+1).
-holds(F, T+1) :- -holds(F, T), fluent(F), time(T), not holds(F, T+1).
-holds(F, T) :- fluent(F), time(T), holds(neg(F), T).
holds(neg(F), T) :- fluent(F), time(T), -holds(F, T).
causes(a1, neg(f), empty). set(empty).
causes(a3, f, empty).
causes(a2, neg(f), empty).
-occurs(a2, T) :- time(T), holds(neg(f), T).
```
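The paper targets SMODELS-style systems; as a quick sanity check today, the program above can also be run through the clingo Python API. The snippet and the file name `example3.lp` are ours, not part of the paper.

```python
# Illustrative check of the Example 3 program with the clingo Python API.
# Assumes the listing above has been saved to `example3.lp` (file name ours)
# and that the `clingo` Python package is installed.
import clingo

ctl = clingo.Control(["0"])              # "0": enumerate all answer sets
ctl.load("example3.lp")
ctl.ground([("base", [])])

with ctl.solve(yield_=True) as handle:
    for model in handle:
        # Show only the occurs/2 and planned/2 atoms of each answer set.
        atoms = [str(a) for a in model.symbols(atoms=True)
                 if a.name in ("occurs", "planned")]
        print(sorted(atoms))
```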
As expected, the above program has a single answer set, which contains the following:
- \( \text{occurs}(a1, 0) \), \( \text{occurs}(a3, 2) \), \( \text{occurs}(a2, 3) \)
- \( \text{planned}(a1, 0) \), \( \text{planned}(a2, 1) \), \( \text{planned}(a2, 2) \), \( \text{planned}(a2, 3) \)
- \( \text{holds}(f, 0) \), \( -\text{holds}(f, 1) \), \( -\text{holds}(f, 2) \), \( \text{holds}(f, 3) \), \( -\text{holds}(f, 4) \)
Allowing repeated actions
In the previous encodings we assumed that sequences of intended actions do not have the same action repeated. To remove this assumption, the following changes in the encoding suffice.
- **Replace 3(a)** by
\[
\text{planned}(S, 1, I) :- \text{intended}(S, I).
\text{planned}(S, J+1, K+1) :- \text{intended}(S, I), \text{occurs}(S, J, K).
\]
- **Replace 3(b)** by
\[
\text{occurs}(S, J, K) :- \text{planned}(S, J, K), \text{not} -\text{occurs}(S, J, K).
\]
- **Replace 3(c)** by
\[
\text{planned}(S, J, K+1) :- \text{planned}(S, J, K), \text{not } \text{occurs}(S, J, K).
\]
Here \(\text{planned}(S, J, K)\) and \(\text{occurs}(S, J, K)\) say that the \(J\)-th action of sequence \(S\) is planned for, respectively occurs at, time point \(K\). Since positions \(J\) are distinct even when the actions at those positions are not, repeated occurrences of the same action in \(S\) are now handled correctly.
An application: reasoning about trips
We came across the issue of reasoning about intentions when we were trying to develop a representation and reasoning module to reason about trips. We now briefly mention some of the aspects of modelling trips and its relationship with reasoning about intentions.
A trip is an activity with many participants, who join the trip and may drop out at different points of the trip. The trip has a sequence of planned (or intended) stops, and the same location may appear multiple times in the sequence, as some locations are hubs. The first stop of the trip is referred to as its origin, and the last stop is referred to as its destination. The trip may use many different vehicle types for its different legs. At any point the status of a trip may be in transit or in one of its possible stops (in our case, cities). The various actions associated with a trip include: a person embarking on the trip, a person dropping out of (or disembarking from) the trip, departing from a stop, and stopping at a stop. To embark on the trip a person may need to have some travel documents. Things packed in various containers can make the trip more pleasant, etc.
Our goal was to develop a reasoning and representation system which encodes the above general knowledge about trips and together with additional facts about the trips can answer various relevant questions. Following is a sample of questions which our system, built using the various encodings mentioned in this paper, answers correctly. (Our encodings of many of these and similar questions are available at http://www.public.asu.edu/~cbaral/aquaint04-06/travel-module/.)
• j1 is a trip which starts in Boston on day 1, is supposed to reach Baghdad on day 3, leave Baghdad on day 5, and come back to Boston on day 6. The plane broke down in Baghdad for 2 days on the day it was supposed to leave. When did the plane reach Boston? (Answer: day 8.)
• John took the plane from Paris to Baghdad. He planned to meet his friend Mike there. Did John meet Mike? (Answer: yes.)
• John joined a plane trip which was scheduled to go from Paris to Rome to Baghdad. John was arrested in Rome. Where is John? (Answer: Rome.) Where is the plane? (Answer: Baghdad.)
Conclusion
Intentions have been discussed and properties of intentions have been formalized using modal logic in the past (?), but this is perhaps the first time intentions about actions have been formalized and declaratively implemented together with reasoning about actions and narratives. In this paper we not only give a formal characterization of intended actions but also give a provably correct implementation of it in AnsProlog. Our implementation is part of a bigger project involving representation and reasoning about trips.
Although in this paper we consider intentions of action sequences, this can be generalized to more expressive execution structures, such as Golog programs (?) by combining the encodings in this paper together with the encodings in (?) with minor changes.
Acknowledgements
We would like to thank the anonymous reviewers for their insightful comments. This work was supported by NSF grants 0070463 and 0412000 and an ARDA contract.
THE UNIVERSITY OF MICHIGAN
Memorandum 31
CONCOMP
July 1970
DEFAULTS AND BLOCK STRUCTURE IN THE MAD/I LANGUAGE
Allen Springer
CONCOMP: Research in Conversational Use of Computers
ORA Project 07449
F.H. Westervelt, Director
supported by:
DEPARTMENT OF DEFENSE
ADVANCED RESEARCH PROJECTS AGENCY
WASHINGTON, D.C.
CONTRACT NO. DA-49-083 OSA-3050
ARPA ORDER NO. 716
administered through:
OFFICE OF RESEARCH ADMINISTRATION ANN ARBOR
July 1970
ACKNOWLEDGMENTS
The author would like to acknowledge the support of the CONCOMP Project; IBM, who sent the author on an IBM Resident Study Program; and especially his co-workers on the MAD/I compiler, Bruce Bolas, Ronald Srodawa, Charles Engle, David Mills, Fred Swartz; and the MAD/I coordinators, Profs. Bernard Galler and Bruce Arden.
CONTENTS

ACKNOWLEDGMENTS
1. Introduction
2. Defaults in MAD/I
3. Block Structure in MAD/I
4. The Organization of the Compiler
5. The Block Structure Algorithm
6. Conditional Declaration Handling in MAD/I
7. Some Implementation Details
8. Conclusion
1. INTRODUCTION
This paper describes the default and block structure mechanisms of MAD/I, a PL/I-like language, and the interaction of these mechanisms with the three types of MAD/I declarations: explicit declarations, default declarations, and conditional declarations. MAD/I allows the programmer extraordinary control over the default assignment of data types to variables, and also allows the programmer more than usual control over the scope of variable names in block structure. The interaction of these two facilities can make the handling of declaration information a difficult problem. This paper outlines an algorithm in which this information is processed "on the fly" in the first pass of the compiler over the source program, and then the symbol table is processed to assign defaults and allocate storage. A simple second pass over a transformed version of the source text resolves the scope and interpretation of variable names.
MAD/I is a computer language under development at the University of Michigan Computing Center, sponsored by the CONCOMP Project. It can be thought of as a remote descendant of 7090 MAD and ALGOL 60, with PL/I being a not-too-distant relative. However, MAD/I and its compiler have some unusual features that aid language modification and extendibility, although these features are
beyond the scope of this paper. Except for the block structure scope facilities and the default setting facilities, then, MAD/I may be regarded as simply another representative of the class of procedural languages which includes ALGOL 60 and PL/I.
Briefly, MAD/I has blocks, as in PL/I and ALGOL 60. Like PL/I (but unlike ALGOL 60), declarations may occur anywhere within a block, and are not required for all variables in the program. If some attributes of a variable are not declared then they are given "default" values. Such attributes include storage class (e.g., static, based, etc.) and data type. The facilities for specifying the defaults are very different from those of PL/I, and are a generalization of those of 7090 MAD. The scope of a variable is determined in much the same spirit as in ALGOL 60 and PL/I, but the programmer has more control over the specification of scope, including the scope of variables which are not declared. This makes determining scope and determining defaults a complicated problem.
2. DEFAULTS IN MAD/I

The default assignment of data types is done in a very systematic and general manner. At any point within the program there is defined a current default data type. This default data type may be declared by the programmer on a block basis. A special symbol, 'DEFAULT', is used to carry the default information, and is treated like a variable when in the context of declarations, but otherwise it is not written by the programmer.
The default data type is given to any variable for which no data type has been explicitly declared. For some data types one can declare a "sub-data-type," such as the component data type of an array, the data type of the result returned from a subroutine, or the data type of a component of a structure. If such a "sub-data-type" is not specified then it is given the default data type. For example, assume that the default has been declared as follows:
'DECLARE' 'DEFAULT' 'INTEGER'
Then assume the following declaration:
'DECLARE' A 'FIXEDARRAY'(4,4)'FLOATING',
B 'FIXEDARRAY'(4,4);
The mode of both A and B is 'FIXEDARRAY', with dimension 4 x 4. The component data type of A was explicitly declared to be 'FLOATING'. Since the component data type of B is not explicitly declared, it is taken to be the default, 'INTEGER'. If some other variable that belonged to that block were referenced in the block but no declaration made about its data type, then it would also be assigned the 'INTEGER' data type.
There are other cases where default actions occur in MAD/I. They will be mentioned briefly here although they are not involved in the rest of this paper. The dimension information given in the above example is specified in a declaration "suffix." If such a suffix is omitted for declarations where one is normally expected, then a default set of information is assumed for the missing information. For example, in the case of an item of 'CHARACTER' mode, the suffix specifies how many characters the variable has. If the suffix is omitted, the number of characters is assumed to be one. If the dimension information were omitted above, a warning message would be issued, and an array which has one dimension and one component would result. The lexical class of constants specifies an implicit data type which they are assigned, unless a declaration is explicitly written which specifies some other data type. As an example, the constant 5 will be assigned the 'INTEGER' data type (32 bits long on the IBM 360), whereas 50('INTEGERSHORT') produces a constant 16-bit integer.
These two types of default operations are presently not controllable any further by the programmer. For suffixes the default is associated with the mode involved. For constants the default is associated with the lexical class of the symbol. It would be possible to have these defaults also controllable by the programmer by adding special declarations to the language, but this has not yet been done.
The default data type in the outermost block is 'FLOATING' unless it is explicitly declared to be something else. For any other block the default is the same as for the next outer block unless it was explicitly declared in the inner block. If a default mode is declared, but not completely, then the remainder of the default is taken from the default of the next outer block. This is done in exactly the same manner in which defaults are applied to a variable whose "sub-data-type" may not have been declared. As an example, assume that in the outer block the default is 'BOOLEAN'. Assume that the default is then declared as follows in the inner block: 'DECLARE' 'DEFAULT' 'FIXEDARRAY'(10); Thus the component of the inner block's default is not explicitly specified. It will be made the data type of the default of the next outer block, 'BOOLEAN'. Generally the default propagates inward from the next outer block, in a manner similar to the propagation of scope of variables.
3. BLOCK STRUCTURE IN MAD/I

There are three concepts embodied in block structure as it is traditionally specified in ALGOL 60 and similar languages. Typically the block is denoted by a beginning keyword and an ending keyword. In ALGOL a block has three functions: (1) to specify scope of variables, (2) to specify the dynamic nature of storage allocation for certain classes of variables, and (3) to group statements. In PL/I the grouping effect can also be obtained with a DO statement as well as with a BEGIN statement. In MAD/I the 'BEGIN' statement is used for simple grouping of statements, and the other two facilities are specified by 'BLOCK' or 'PROCEDURE', corresponding to BEGIN and PROCEDURE in PL/I. Thus MAD/I has facilities similar to those of PL/I, although with different names.
The scope in which a variable is known is determined rather simply in ALGOL. If a variable is declared in a given block, that variable's name represents a different variable from one of the same name in the next outer block. If a variable is used in an inner block but not declared there, then it is the same variable as one of the same name in the next outer block. Finally, in ALGOL 60 all variables must be explicitly or implicitly declared in the outermost block in which they are to be known.
In MAD/I the "naming" or scope rules are similar to those of ALGOL 60, but there are additional rules allowing the programmer more control over the "naming" facility. In MAD/I the user does not have to declare a variable at all; therefore he needs conventions in order to know in which block an undeclared variable belongs. In PL/I the rule apparently is that a declared variable belongs to the outermost block in which it is declared. If it is not declared, then it belongs to the outermost block.
Let us motivate the additional rules for assigning defaults to symbols. By writing a large block and specifying the default within that block, the user can avoid writing a large number of individual declarations for variables in that block. But if the block is an inner one, then, following the PL/I rule, variables that are not declared in that block would belong to the next outer block and would not be affected by the default. What is desired, in some cases, is that unless otherwise specified, any variable used in a block is declared in that block implicitly. In other cases we would want to have the PL/I rule. Thus we have modified the scope rules for MAD/I as follows:
(1) If no default is declared for a block then the only symbols that belong to that block are those that are declared in the block.
(2) If a default is declared in a block, but 'NEW' has not been declared for that default symbol, then symbols that have not been declared in the block are treated as if they were referenced in the next outer block.
(3) If default was declared for the block and 'NEW' was also declared for the default, then, unless otherwise specified (by rules below), all symbols referenced in the block are implicitly declared in the block.
Note that under these rules a block with default declared 'NEW' would not be able to access any variables outside that block. Therefore, we have devised additional rules, which apply irrespective of any default declarations or 'NEW' declarations currently in effect:
(1) If a symbol is declared 'NOTNEW' in a given block, then it is treated as if it were referenced in the next outer block.
(2) If a symbol is declared 'GLOBAL', then it is treated as if it were declared 'NOTNEW' in that block and each surrounding block.
(3) If there is no next outer block as stated in (1) and (2) above, then the variable belongs to the outermost block.
Although MAD/I has not yet been used extensively, most of these rules have proved useful and have eliminated much writing of declarations in some cases. Typically, the scope rules of block structure are used to allow the writing of relatively independent sections of program which are to be part of the same compilation. The block structure allows the user to write the sections without worrying that two variables in different sections may accidentally have the same name. In ALGOL the variables in the two blocks would be declared in their own blocks, and those that are intended to be common would be declared in the next outer block. In MAD/I the programmer has the freedom of not declaring all variables in such blocks; instead he declares 'NEW' 'DEFAULT' in each independently written block. Then all variables referenced in each block belong to that block unless declared 'NOTNEW'. This combination of rules gives the user the advantages of both the block structure and the default declaration facility.
Below is the skeleton of a complete MAD/I program; each program line is followed by an indented comment about it. All references are indicated by occurrences of variable names. All declarations are indicated.
'PROCEDURE' MAIN;
    (Block 1 begins.)
... A ...
    (This variable A is not declared, so it belongs to the outermost block and has the default mode of 'FLOATING'.)
'BLOCK'
    (The beginning of block 2. This block has no default declared for it.)
... A ...
    (This is the same A as in block 1.)
'DECLARE' B;
    (B is new to this block, and will have the default mode, 'FLOATING'.)
'DECLARE' C;
'BLOCK'
    (The beginning of block 3.)
'DECLARE' 'DEFAULT' 'INTEGER';
    (There is a new default for this block, but 'DEFAULT' has not been declared 'NEW'.)
... A ...
    (Since it is only referenced here, this A is the same as in blocks 1 and 2.)
... B ...
    (Since it is not declared in this block, B is the same as in block 2.)
'DECLARE' C;
    (C is new to this block and has the default mode of 'INTEGER'.)
'END';
    (The end of block 3.)
'BLOCK'
    (The beginning of block 4.)
'DECLARE' 'DEFAULT' 'NEW' 'CHARACTER'(256);
... A ...
    (As a result of the 'NEW' default, this A is a new variable even though it is only referenced in the block; it has the default mode of 'CHARACTER'(256).)
'DECLARE' B 'FLOATING';
    (B is new to this block.)
'DECLARE' D 'NOTNEW' 'BOOLEAN';
    (D is the only variable referenced in this block which does not belong to the block. It belongs to the next outer block.)
'END';
    (The end of block 4.)
'DECLARE' D;
    (This is the same D as in block 4. D belongs in this block instead of the next outer one because of this declaration.)
'END';
    (End of block 2.)
'END';
    (End of block 1.)
In this example we have two distinct As, two distinct Bs, and two distinct Cs. Of course this example looks rather complicated, because no other program details are supplied to make it look more natural, and because it attempts to illustrate many rules with one example.
It is interesting to point out, for procedures in MAD/I, that entry points to a procedure fall inside the 'PROCEDURE' ... 'END' brackets, and are implicitly declared to be 'ENTRYPOINT' mode. According to the strict rules specified above, these entry points would be "new" variables in the block and thus not known outside the block, definitely an undesirable situation! Thus there is also an implicit 'NOTNEW' declaration on each entry point specified in the prefix of a 'PROCEDURE' statement.
4. THE ORGANIZATION OF THE COMPILER
This section discusses the organization of the compiler so that the algorithm given in the next section will be seen in the proper context. The compiler makes two passes over the source program, in which it collects all declaration information, parses the source text, resolves all default information for symbols referenced by the programmer, and straightens out all block structure information. Between the two passes there is a symbol-table-processing phase.
The first pass does most of the work. Briefly, it parses the input character stream into "symbols," parses the program in symbol form, and expands the parsed symbols into "n-tuples" of the form of an operator followed by zero or more operands. The n-tuples become a new representation of the source program. For example, \( A:=B+C \) might be transformed into
\[
+,\%T1,B,C; \\
:=,\%T2,A,\%T1;
\]
where the percent-prefixed symbols are compiler-generated temporary symbols. The algorithm described below assigns data types to the symbols \( A, B, \) and \( C \) (but not to the temporary symbols). Also, if several variables named \( A \) are declared, the algorithm will determine which variable named \( A \) is represented by any given instance of the symbol \( A \).
The major problem encountered in scanning the input text is that after a symbol has been found which could represent a variable, nothing more may be known about it until the end of the block is encountered. This is because declarations about a variable, if there are any at all, may occur anywhere in the block. By the end of the block it is possible to determine whether a given symbol referenced in the block represents a variable belonging to the block. To solve this problem, we need to know (1) what, if anything, has been explicitly declared about the symbol, and (2) whether a 'DEFAULT' has been declared 'NEW' for the block. A second problem is that attributes cannot be completely assigned for any variable until all the attributes of the default for that block are known. But the attributes of the default cannot always be known until they are known for the default of the next outer block. Thus, since the last statement of the program might be the declaration of the default of the outermost block, the whole program has to be scanned before defaults can be applied to the variables.
Let us examine in more detail what happens to a specific symbol during the processing of the program. When a symbol which can represent a variable is first encountered, all that can be done is to save its name and note that it was referenced in the block currently being scanned. We cannot know whether it represents a variable belonging to that block until the end of the block has been found. Furthermore, we cannot know whether it belongs to that block even if a declaration occurs for it, since a subsequent 'NOTNEW' or 'GLOBAL' declaration might occur for it in that block. More particularly, we cannot know whether it represents the same variable or a different variable from the symbol of the same name found in the next outer block.
Note that if we are to produce n-tuples while parsing the input text, we must represent a variable in the n-tuple by a pointer to the symbol for that variable, at the very least. We cannot include which block it belongs to, however, since that is not known yet. Therefore we must either (1) assume which block it belongs to, and correct that assumption later if it is incorrect, or (2) not bother to assume which block it belongs to, and correct the n-tuples some way later. No matter what is done initially, however, the n-tuples must somehow be corrected later. The method of doing so, of course, depends upon how the symbol is represented in that n-tuple. In the first implementation of MAD/I we have chosen to have the representation of the symbol in the n-tuple always point to the same "main symbol table" entry for that symbol. Then, in the second pass, the n-tuple is made to point to some other symbol table entry, if necessary.
Let us assume that something was declared about A in an outer block and then something else was declared about A in the next inner block. If A is subsequently declared 'NOTNEW' in the inner block, then the two declarations must refer to the same variable. If the 'NOTNEW' does not occur, then the declarations refer to two different variables. In the present implementation of MAD/I, the symbol table entries carry the declaration information. We thus need a way of keeping separated the information of the two declarations about A until it can be determined definitely whether they should be separated or not. (Note that it is not illegal to have several declarations about the same variable in MAD/I. Requiring all information about a variable to be made in the same declaration statement might simplify some of the declaration-processing problems but it would lessen the convenience to the user.)
5. THE BLOCK STRUCTURE ALGORITHM
Several routines can be called upon to perform various functions when the compiler is scanning the descriptors before and during parsing. The algorithm will describe these routines and the circumstances under which they are called. A particular symbol can have associated with it several variables whose names are the same, but at most one variable per block. The job of this algorithm is to determine to which blocks such variables belong, and then to map the symbols in the n-tuples which result from the parse into the proper variables for that point in the program.
At any point during the scan of the input descriptors, a symbol can be in one of four states with respect to a block: "unreferenced," "referenced," "declared," and "not new." For the declared state there is a variable associated with that block. This is not true for the other three states except when the block is the outermost block. The outermost block is a special case, of course, since it is not surrounded by another block. In the outermost block a variable must be in one of the first three states; i.e., it cannot be in the "not new" state.
Note that in PL/I-like languages, a symbol like IF can represent either a variable or a statement keyword,
depending upon context, and the dilemma must be resolved before this algorithm will work. In MAD/I this is not a problem, since keywords and variables are represented by distinct lexical classes. Subsequently we will assume that this problem has been solved for any given language, and we are considering only symbols which represent variables.
A "referenced" symbol in a block is one which has been encountered in that block but for which no declarations of any type have occurred, including 'NOTNEW' and 'GLOBAL'. A symbol is termed "declared" when it has been declared in the block but not declared 'NOTNEW' or 'GLOBAL'. A variable is created for it which is a carrier of mode and other declared information. A symbol in a "not new" state has been declared 'NOTNEW' or 'GLOBAL' in that block, and its status with respect to that block cannot be further altered. A symbol cannot have "not new" status with respect to the outermost block since that status indicates that it is a symbol which belongs to a block surrounding the one under consideration, and which cannot be declared to belong to that block.
When the beginning of a block is encountered a routine called BEGINBLOCK is called which pushes down the status of all symbols of the current (old) block, if any, and sets the status of all symbols in the new block to "unreferenced".
When a symbol is encountered in a block it is passed to a routine called SETREF. If the symbol is in "unreferenced" status it is set to "referenced" status for that block, otherwise nothing is done.
When a symbol is declared in a block, except for a declaration of 'NOTNEW' or 'GLOBAL', it is passed to the SETDECL routine. If the symbol is "declared" in the block nothing is done. If the symbol is "unreferenced" or "referenced" in the block, then a variable is created for that block with the name of the symbol, and the symbol is set to "declared" status. Note that declaration information is always applied to the first variable encountered for the symbol when the search is made outward from the current block to surrounding blocks. Thus the declaration information, if any, which is associated with the declared symbol is to be applied only after SETDECL has been called. If the symbol is in "not new" status, a search is made outward, successively through surrounding blocks until the symbol is found in other than "not new" status. Then the symbol is treated with respect to that block in the same manner as an "unreferenced," "referenced," or "declared" symbol would be for the current block. The variable that results from the SETDECL operation is the one to which the original declaration information was assigned.
When a symbol is declared 'NOTNEW' in a block it is passed to a routine called SETNOTNEW. At this point, one of three situations will occur:
1. If the symbol is already "not new" or if the symbol is 'DEFAULT', or if the current block is the outermost block, nothing is done.
2. If the symbol is "unreferenced" or "referenced" it is set "not new" in the current block. A search is then made outward through the containing blocks and the status of the symbol is determined for each block, until the symbol is found in other than "not new" status. If that status is "unreferenced" then it is set to "referenced."
3. If the symbol was in "declared" status when SETNOTNEW was called then it is set to "not new" status in the current block. A search is made outward through all the surrounding blocks until the symbol is found in other than "not new" status. If that status was "unreferenced" or "referenced" it is changed to "declared" status, and the variable of the symbol for the current block is used as the variable for the symbol in the outer block where the search ended. If the search ended on a variable with a "declared" status then we have an interesting situation of two variables in existence which should be replaced by a single variable for the outer block. These variables will have to be "merged." Any declarations declared on the inner variable must be copied over to the outer variable, with appropriate error comments if conflicts are discovered. In the case of MAD/I a variable may be declared only once with mode information; an attempt to do so more than once causes an immediate error comment, except in the case where the two variables are being merged into one, as above, due to the 'NOTNEW' declaration. If modes were declared for each of the variables before they were merged, the conflict will not cause an error comment until the 'NOTNEW' declaration is encountered.
When a symbol is declared 'GLOBAL', SETNOTNEW is called for that symbol for the current block and for each surrounding block. Hence, SETNOTNEW has a block as one of its arguments.
The actions of the above-described routines determine, as closely as possible, the status of symbols within a block by the time the end of that block is reached. The end of the block triggers a call on the routine ENDBLOCK which will complete the determination of the status of all symbols which have variables in the block. In the outermost block, the action is simple: the symbols referenced in the block are made into variables with "declared" status. If it is not the outermost block, then the symbols "referenced" in the block are treated in one of two ways:
1) If 'DEFAULT' is declared for the block, including the attribute 'NEW', then all the "referenced" symbols are made "declared" symbols for the block, and variables are created for each such symbol.
2) If 'DEFAULT' was not declared 'NEW' in the block, the status of "referenced" symbols is checked against the next outer block, since these symbols belong there.
If a symbol is "unreferenced" in the next outer block, it is set to "referenced." After one of these two actions is done, we are finished with the inner block, and the status of all symbols is "popped" back to that of the next outer block.
When a program has been completely scanned, each variable will have been assigned to its appropriate block. It is then possible to go through the blocks from the outermost to the innermost to supply default information for those variables. It is also possible to go through the parsed form of the program, replacing each occurrence of a symbol with the appropriate variable for that block. This process is called the "remap" phase in the current MAD/I compiler. There is no restriction on whether remapping or default assignment is done first as far as the algorithm is concerned. The method of default assignment on a block and variable basis was described in Section 2. The method of remapping is described below in fairly general terms.
Many "tricks" could be used for implementing the described routines, for the method of representing symbols and variables, for remapping and for default assignment, but these tricks all depend upon the representation of descriptors, the method of parsing, the sort of declarations to be stored and the method of storing, etc. These details, in turn, depend on the language and the particular compiler implementation. In the case of MAD/I, as the present implementation of the compiler and language evolved, it almost always increased rather than decreased, in complexity, and hence it is presently difficult to debug.
The remap phase assumes that the beginning and end of each block are easily spotted in the parsed form, and that there is an easy way to search through the parsed form such that the beginning and end of each block are encountered in the same order as in the scan that produced the calls on BEGINBLOCK and ENDBLOCK, and such that the symbols previously encountered within a given block are again encountered in that same block. Assume that if we have a symbol representing a variable then we can easily find a place to look for the current variable representing that symbol. Let us assume that there is a field within the symbol which can point to the variable. Also assume that there is a similar field associated with each variable of each block, and that the field initially points to itself. At the beginning of each block, for each variable belonging to that block, exchange the symbol field with the variable field. At that point, the symbol points to the current variable. At the end of the block perform the same exchange. As we progress through the parsed program the symbols will effectively be pushed and popped properly, so that whenever a symbol is encountered we can replace it with the variable it represents at that point. This is a workable alternative to keeping a pushdown stack for each symbol, with the current variable at the top of the stack. Both of these approaches rest on the assumption that remapping by simply searching the blocks outward from the current block, until a variable of the right name is found, would be more expensive.
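A small Python sketch of this exchange trick follows (all names are ours). Because swapping the two fields is its own inverse, doing the same swap on block entry and on block exit pushes and then pops the binding:

```python
# Sketch of the field-exchange remapping trick; names are illustrative.
class Var:
    def __init__(self, name):
        self.name = name
        self.saved = self           # between blocks, points to itself

current = {}                        # symbol name -> variable it denotes now

def exchange(block_vars):
    # Called once on block entry and once on block exit, with the
    # (symbol name, variable) pairs belonging to that block.
    for name, var in block_vars:
        current[name], var.saved = var.saved, current.get(name)

# Two distinct variables named X, in an outer and an inner block:
outer_x, inner_x = Var("X"), Var("X")
exchange([("X", outer_x)])          # enter outer block: X denotes outer_x
exchange([("X", inner_x)])          # enter inner block: X denotes inner_x
assert current["X"] is inner_x
exchange([("X", inner_x)])          # leave inner block: back to outer_x
assert current["X"] is outer_x
```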
6. CONDITIONAL DECLARATION HANDLING IN MAD/I

A conditional declaration is one which is applied when a variable appears in a certain context, unless that variable has had an explicit declaration which would conflict. There are a number of such declarations in PL/I. In MAD/I there is presently only one. If the "." operator has been used on a variable (the function call operator), then there is a conditional declaration applied to the variable. The declaration says that if no mode or storage class information has been declared about the variable, then it is to be taken as an 'EXTERNAL' 'ENTRYPOINT', which returns a value of default mode when called. Thus it is the name of an externally compiled subroutine. Notice that the explicit declarations are applied first, then conditional declarations, if any, and finally default declarations. This is also the usual order in PL/I.
Conditional declarations are handled somewhat differently from other declarations because of the convention that a conditional declaration does not imply that a variable has been declared "new" to a block. Conditional declarations have no influence in determining what block the variable resides in. This is contrary to the effects of all other declarations. Therefore the previously described algorithms do not work for conditional declarations.
Such declarations are easily handled, however, in the following manner:
When a conditional declaration is discovered, it is saved in some way on a list associated with the current block, as is the symbol to which it will conditionally apply. After the end of the block is encountered and it is closed out (so that the variables that belong to the block are known), the list of conditional declarations is searched. For each symbol on the list there is either an associated variable that now belongs to the block, or else there is not. If there is such a variable, then the conditional declaration belongs with it. Otherwise, the symbol and conditional declaration are put on the list for the next outer block.
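A minimal Python sketch of this bookkeeping, assuming variables are simple dicts and that ENDBLOCK has already filled in each block's variables (all names are ours):

```python
# Sketch of conditional-declaration handling; names are illustrative.
class Block:
    def __init__(self, parent=None):
        self.parent = parent
        self.conditionals = []      # (symbol name, conditional declaration)
        self.variables = {}         # symbol name -> variable, set by ENDBLOCK

def note_conditional(block, symbol, decl):
    block.conditionals.append((symbol, decl))

def close_block(block):
    # Runs after the end of the block, once its variables are known.
    for symbol, decl in block.conditionals:
        var = block.variables.get(symbol)
        if var is not None:
            var.setdefault("mode", decl)     # apply unless already declared
        elif block.parent is not None:       # symbol lives further out:
            block.parent.conditionals.append((symbol, decl))
```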
7. SOME IMPLEMENTATION DETAILS
This section discusses some details of the implementation of the block structure algorithm in the present version of the MAD/I compiler.
In the MAD/I compiler, symbols and variables have the same form. The symbols themselves are used as variables for the outermost block, thus economizing storage. Associated with any block are three lists, one each for symbols which are "referenced," "declared," and "not new." In addition, in each symbol there is a two-bit field which specifies which of the four states the symbol is in. Each block points to the next outer block, thus facilitating popping back to or searching to the next outer block. At any given time the symbol carries the current information declared about the current variable associated with that symbol. Any previously specified variable associated with that symbol in an outer block has its information pushed down in some fashion. Thus the symbols which are declared in a block must have their contents appropriately popped at the end of each block. In any case, as long as a call is made on SETDECL before applying the declaration information, the declaration information associated with a symbol can be stored with the symbol itself, and it is not necessary to search for an associated variable at that time. This is carried out according to how pushing and popping are done; however, such details are outside the scope of this paper.
However, the method of copying attribute information from default symbols is relevant. Associated with each mode are two routines, each having two arguments, a "from" variable and a "to" variable. The intention is to copy information from one to the other under certain circumstances. One routine is used when the "to" variable has no mode information, in which case the information is copied to it from the other symbol (which may be the default symbol or a subtype of the default symbol). The other routine is used to copy mode information when the "to" symbol already has mode information. In that case the "from" symbol may be used to copy information to a subtype of the "to" symbol which does not have mode information. Needless to say, the routines are recursive.
Another routine, let us call it COPYATRS, selects one of the two routines just described and decides which mode to use. The routines are also used to set information (associated with a mode) which was not explicitly declared, such as length or dimension information.
The action of COPYATRS is described here, with two arguments, FROM and TO.
1. If TO has a mode, then select the "to" routine for that mode and pass to it FROM and TO as its arguments. Exit upon return of control from the "to" routine.
2. If TO has no mode set, then select the "from" routine associated with the mode of the FROM symbol. Exit upon return from the "from" routine. Note that there will always be a mode on the FROM symbol (if everything is properly debugged).
Consider the "from" routine for an array mode, and call the routine FROMARRAY. This routine is called when its TO symbol has no mode, and thus the job of FROMARRAY is to copy the FROM symbol information to the TO symbol. Therefore it will copy the array mode, the dimension information, and any other mode-associated information to the TO symbol. It will create a symbol-like construct associated with the TO symbol which is to carry mode information for the component mode of TO. Let us denote the carrier by component-of-TO; there is a similar carrier for component-of-FROM. Then a call on COPYATRS (component-of-FROM, component-of-TO) is made to copy the component information. It will always work out that the data type information for the FROM symbol will be complete.
Consider the "to" routine for an array mode, and call the routine TOARRAY. This routine is called when the TO symbol already has a mode, and that mode is array mode. The job of TOARRAY is to set any remaining undeclared information about the TO symbol. For example, if
suffix (i.e. dimension), information was omitted from the array declaration, then default information would be set for it. (In MAD/I such information is associated with the mode and not taken from a default which can be declared about arrays. There is no way presently in MAD/I to say that the default dimensions of an array are to be 3 x 3, for example. In principle this would be possible, however.) Next the TO symbol is examined to see if it has a component mode carrier, and if not, it is attached to the TO symbol. Whether or not the carrier was there before, a call is made on COPYATRS (FROM, component-of-TO). This will cause defaults to be set on the component-of-TO symbol, if needed.
Obviously, if the modes involved had no components, then the associated "from" and "to" routines would be simpler. The "from" and "to" routines may do other jobs also, such as returning length and alignment information to their caller. Thus the initial caller of COPYATRS would call it with a symbol to be allocated, as the TO symbol, and the 'DEFAULT' symbol as the FROM symbol. It would get in return the length and alignment of the TO symbol. Notice that the COPYATRS routine is also used to set the information on the default symbol itself. When starting allocation of variables in a block, first COPYATRS is called with FROM being the default of the next outer block (which has already been taken care of), and TO being the default of the current block. For the outermost block there is no next outer block, so a special FROM symbol is used which has the "default default," i.e., the default mode which is used if none is declared in the outermost block.
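The dispatch can be pictured with the following Python sketch, in which symbols and carriers are dicts and only an array mode and two scalar modes are shown (the field names and dict representation are ours):

```python
# Sketch of COPYATRS and per-mode "from"/"to" routines; details are ours.
def copy_atrs(frm, to):
    if to.get("mode"):                       # TO already has a mode:
        TO_ROUTINES[to["mode"]](frm, to)     # only fill in what is missing
    else:                                    # TO has no mode: copy FROM's
        FROM_ROUTINES[frm["mode"]](frm, to)

def from_array(frm, to):
    to["mode"] = "FIXEDARRAY"
    to["dims"] = frm["dims"]
    to["component"] = {}                     # carrier for the component mode
    copy_atrs(frm["component"], to["component"])

def to_array(frm, to):
    to.setdefault("dims", (1,))              # default suffix: one component
    to.setdefault("component", {})
    copy_atrs(frm, to["component"])          # defaults for the component

def from_scalar(frm, to):
    to["mode"] = frm["mode"]

def to_scalar(frm, to):
    pass                                     # nothing left to default

FROM_ROUTINES = {"FIXEDARRAY": from_array,
                 "INTEGER": from_scalar, "BOOLEAN": from_scalar}
TO_ROUTINES = {"FIXEDARRAY": to_array,
               "INTEGER": to_scalar, "BOOLEAN": to_scalar}

# The example of Section 2: an 'INTEGER' default applied to B, which was
# declared 'FIXEDARRAY'(4,4) with no component data type.
default = {"mode": "INTEGER"}
b = {"mode": "FIXEDARRAY", "dims": (4, 4)}
copy_atrs(default, b)
assert b["component"]["mode"] == "INTEGER"
```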
8. CONCLUSION
Everything described in this paper has been successfully implemented in a compiler for the MAD/I language which runs on an IBM/360 model 67 under the University of Michigan timesharing system, MTS. For various reasons which are not relevant here it is a very large compiler. Since the system provides very large virtual memory for execution (about four million bytes), the compiler is written to take advantage of a large virtual memory. MAD/I was also written mostly in an experimental compiler implementation "macro" language, which allows easy modification of the compiler, even at run time, for those who know the incredible intricacies of the compiler. These factors, of course, have influenced the implementation of the block structure and default facilities. Nevertheless, it is felt that what we have learned about these facilities may be useful to compiler implementers whose design requirements impose very different constraints on their compilers.
|
olmocr_science_pdfs
|
2024-11-26
|
2024-11-26
|
aa1b4d8e8f43d6114ef1478ecec2f2c1062c83c1
|
HAL Id: hal-01406795
https://inria.hal.science/hal-01406795
Submitted on 1 Dec 2016
Sparsity Preserving Algorithms for Octagons
Jacques-Henri Jourdan
MPI-SWS, Inria Paris
Abstract
Known algorithms for manipulating octagons do not preserve their sparsity: they typically lead to quadratic or cubic time and space complexity as soon as all variables are bounded, even if no relation among them is known. In this paper, we present new algorithms which use and return octagons represented as weakly closed difference bound matrices, preserve the sparsity of their input, and perform better when their inputs are sparse. We prove that these algorithms are as precise as the known ones.
1 Introduction
In order to capture numerical properties of programs, static analyzers use numerical abstract domains. The choice of a numerical abstract domain in a static analyzer is a compromise between precision, the ability to capture complex numerical properties, and performance. Non-relational abstract domains, such as intervals [6], are very efficient but relatively imprecise: they cannot represent relations between program variables. On the other hand, in order to capture numerical relations between program variables, one can express them as linear inequalities; abstract domains doing so are called linear abstract domains. Each linear abstract domain corresponds to a different precision vs. performance trade-off: they range from the less precise, efficient ones, such as zones [13], pentagons [12] or octagons [13,14], to the more precise, costly ones, such as subpolyhedra [11], octahedra [5], two variables per inequality [16], zonotopes [15] or general polyhedra [8].
In particular, the Octagon abstract domain [13,14] accurately represents many of the variable relationships appearing in a program, while still being reasonably fast (all the operations have quadratic or cubic complexity in the number of variables). It is very popular in the static analysis community, which explains why algorithmic improvements [3,1,17] and precision-improving variants [4] are regularly published.
As reported by the designers of Astrée [7], its quadratic or cubic performance still makes it unusable as-is with a reasonable number of variables. Indeed, the
1 This work was supported by Agence Nationale de la Recherche, grant ANR-11-INSE-003.
data structures typically used to represent octagonal abstract values, i.e., strongly closed difference bound matrices, have a quadratic size in the number of variables for which an upper or lower bound is known. A common solution is the use of variable packing [13, §8.4.2], where the Octagon abstract domain is only used on small packs of variables. The downside of packing is that no relation is stored between variables that are not in the same pack. A variant of packing has been introduced to mitigate the imprecision [2], but loss in precision can still occur.
The problem of the performance of octagons has already been studied: in particular, Singh et al. [17] proposed an implementation of the Octagon abstract domain optimized in the case its representation is sparse. But they do not address the fact that it is dense as soon as interval bounds are known for many variables, and we anticipate that, for this reason, the sparsity is very low in their implementation.
Instead, in this paper, we propose to use new algorithms for the Octagon abstract domain: these algorithms work on a sparse representation for octagons, so that the cost of the analysis of two independent sets of variables is the sum of the costs of the analyses of the two sets of variables, taken independently. Our algorithms have the same precision as the traditional ones. Our main idea is the following: in order to ensure an optimal precision of all the operations, the data structures representing octagons, difference bound matrices, are usually kept strongly closed: that is, algorithms make sure that any returned difference bound matrix is a best abstraction. However, most often, strongly closed difference bound matrices are dense because of the necessary strengthening step. In this paper, we propose to weaken the maintained invariant on difference bound matrices and to keep them weakly closed hence skipping the strengthening step. Weakly closed difference bound matrices are not necessarily dense, so that we can use sparse data structures to represent them. We prove that some algorithms can be kept unchanged to work on weakly closed difference bound matrices without losing any precision and give new algorithms for the other operations.
We begin by preliminary definitions in §2. In §3, we describe and prove the soundness and relative precision of our new algorithms. We conclude in §4.
2 Definitions
Let $\mathbb{V}_+$ be a finite set of variables. We call a regular environment a function from $\mathbb{V}_+$ to $\mathbb{R}$. A regular environment represents the numerical state of a program. The role of the Octagon abstract domain is to approximate sets of regular environments $\rho$. To that end, the abstract domain of octagons stores a set of inequalities of the following form:
$$\pm \rho(u) \pm \rho(v) \leq C_{stuv} \quad u, v \in \mathbb{V}_+$$
(1)
This corresponds to giving bounds to sums and differences of values of $\rho$. Moreover, if we use the same variable twice with the same sign, we see that such constraints can also express interval constraints over the values of an environment [13].
In order to handle in a unified way all the different combinations of signs in these constraints, we introduce the set $\mathbb{V}_\pm$ of signed variables. Signed variables are of two kinds: they are either usual variables from $\mathbb{V}_+$, called positive variables in the context of signed variables, or their opposite forms, called negative variables. We equip $\mathbb{V}_\pm$ with an involution $\bar{\cdot}$ mapping each signed variable to its opposite, so that $\bar{x}$ denotes the negative form of a positive variable $x$.
Octagonal constraints are then stored as upper bounds on differences $\sigma(u) - \sigma(v)$ of signed variables, in a square matrix indexed by $\mathbb{V}_\pm$ with entries in $\mathbb{R} \cup \{+\infty\}$, called a difference bound matrix (DBM); this is the data structure of the Octagon abstract domain. A function $\sigma : \mathbb{V}_\pm \to \mathbb{R}$ that is not required to satisfy $\sigma(\bar{u}) = -\sigma(u)$ is called an irregular environment. (Such difference constraints are well studied in the optimization literature, because they correspond to the well-known shortest path problem in a weighted directed graph.) The meaning of a DBM $B$ is given by the two concretization functions below; as the easy Lemma 2.2 states, the natural order relation $\preceq$ over DBMs, defined further down, makes $\gamma_{\text{pot}}$ and $\gamma_{\text{oct}}$ increasing, which makes it a good candidate for a comparison operator of the Octagon abstract domain.
\[
\gamma_{\text{pot}}(B) = \{ \sigma : V_{\pm} \to \mathbb{R} \mid \forall uv \in V_{\pm}, \sigma(u) - \sigma(v) \leq B_{uv} \}
\]
\[
\gamma_{\text{oct}}(B) = \{ \sigma \in \gamma_{\text{pot}}(B) \mid \forall u \in V_{\pm}, \sigma(\bar{u}) = -\sigma(u) \}
\]
**Example 2.1** Consider \( \mathbb{V}_+ = \{ x ; y ; z \} \) a set of three (positive) variables. The set of signed variables is \( \mathbb{V}_\pm = \{ x ; \bar{x} ; y ; \bar{y} ; z ; \bar{z} \} \). Let \( A \) be the DBM such that \( A_{x\bar{x}} = 1, A_{\bar{y}y} = 3, A_{y\bar{z}} = 1 \) and \( A_{uv} = +\infty \) for all the other entries. The set \( \gamma_{\text{oct}}(A) \) contains all the environments \( \rho : \mathbb{V}_\pm \to \mathbb{R} \) such that:
- \( \forall u \in \mathbb{V}_\pm, \rho(\bar{u}) = -\rho(u) \)
- \( \rho(x) \leq 1/2, -\rho(y) \leq 3/2 \) and \( \rho(y) + \rho(z) \leq 1 \)
This concretization can be identified with the set of environments \( \rho : \mathbb{V}_+ \to \mathbb{R} \) over positive variables such that \( \rho(x) \leq 1/2, -\rho(y) \leq 3/2 \) and \( \rho(y) + \rho(z) \leq 1 \).
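To make the data structures concrete, the following Python sketch shows a sparse DBM (a dictionary whose missing entries mean $+\infty$) together with a membership test for $\gamma_{\text{oct}}$, applied to the DBM $A$ of Example 2.1. The encoding of signed variables as `(name, sign)` pairs and all function names are our own illustration, not the paper's implementation.

```python
# A minimal sparse-DBM sketch (not the paper's verified implementation).
# A signed variable is a pair (name, sign); bar() maps it to its opposite.
# A DBM is a dict from (u, v) pairs to finite bounds; a missing entry means +infinity.
INF = float("inf")

def bar(v):
    name, sign = v
    return (name, -sign)

def in_gamma_oct(dbm, rho):
    """Check that a regular environment rho (a dict over variable names)
    lies in gamma_oct(dbm): extend rho to signed variables by
    sigma((name, s)) = s * rho(name) and test every stored constraint."""
    def sigma(v):
        name, sign = v
        return sign * rho[name]
    return all(sigma(u) - sigma(v) <= b for (u, v), b in dbm.items())

# The DBM A of Example 2.1: rho(x) <= 1/2, -rho(y) <= 3/2, rho(y) + rho(z) <= 1.
x, y, z = ("x", +1), ("y", +1), ("z", +1)
A = {(x, bar(x)): 1, (bar(y), y): 3, (y, bar(z)): 1}

print(in_gamma_oct(A, {"x": 0.0, "y": -1.0, "z": 1.0}))  # True
print(in_gamma_oct(A, {"x": 1.0, "y": 0.0, "z": 0.0}))   # False (rho(x) > 1/2)
```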
We denote as \( \leq \) the natural order relation over DBMs, defined as follows:
\[
A \leq B \iff \forall uv \in V_{\pm}, A_{uv} \leq B_{uv}
\]
Lemma 2.2 Let $A$ and $B$ be two DBMs such that $A \preceq B$. Then, we have:
$$\gamma_{\text{pot}}(A) \subseteq \gamma_{\text{pot}}(B) \quad \quad \quad \gamma_{\text{oct}}(A) \subseteq \gamma_{\text{oct}}(B)$$
For any non-empty set $S$ of irregular environments, there exists a minimal (in the sense of $\preceq$) DBM that approximates it. That is, there exists a minimal DBM $\alpha(S)$ such that $S \subseteq \gamma_{\text{pot}}(\alpha(S))$. This property follows immediately from the definition of $\alpha$:
$$\alpha(S)_{uv} = \sup_{\sigma \in S} \{ \sigma(u) - \sigma(v) \} \quad (6)$$
This function $\alpha$ is called the abstraction function. We can easily see that $\alpha$ is an increasing function. Moreover, $\alpha$ does not only return best abstractions for $\gamma_{\text{pot}}$, but also for $\gamma_{\text{oct}}$: if the set $S$ contains only regular environments, we can see that $\alpha(S)$ is also the minimal DBM such that $S \subseteq \gamma_{\text{oct}}(\alpha(S))$. In fact, it is easy to see that $\preceq$ defines a complete lattice over DBMs extended with a bottom element, and that the pairs $(\alpha, \gamma_{\text{pot}})$ and $(\alpha, \gamma_{\text{oct}})$ form Galois connections.
2.1 Closure and Strong Closure
Many DBMs have the same concretization. This is a problem, because the abstract environments that we manipulate are therefore not necessarily the most precise ones, and this can lead to imprecision. Thus, usually, an implementation of the Octagon abstract domain maintains the invariant that it only manipulates “canonical” forms of DBMs, such that $B = \alpha(\gamma_{\text{oct}}(B))$. Such “canonical” DBMs are always the best possible representatives among all the DBMs with the same concretization.
An important fact is that we can characterize best abstractions using the values they contain, and that we have algorithms to compute them. We present these characterizations together with the corresponding algorithms. Moreover, we give a weaker closedness condition over DBMs, which does not ensure canonicity but allows more efficient algorithms without loss of precision.
2.1.1 Best abstractions for $\gamma_{\text{pot}}$
A first step is to remark that canonical DBMs always have null diagonal values. Moreover, canonical DBMs should always verify the triangular inequality. We call such DBMs closed DBMs:
Definition 2.3 [Closed DBM] A closed DBM is a DBM $B$ verifying the two following properties:
- $\forall v \in V_{\pm}, B_{vv} = 0$
- $\forall uvw \in V_{\pm}, B_{uw} \leq B_{uv} + B_{vw}$
Closed DBMs are exactly best abstractions for $\gamma_{\text{pot}}$ [13, Theorem 3.3.6]. Hence, closed DBMs always have non-empty concretizations. We do not detail here the algorithm used to detect the emptiness of the concretization of a DBM and to compute closures: instead, we refer the interested reader to previous work [13,1].
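For intuition only, here is a dense Floyd–Warshall-style sketch of the closure in Python; it is not the optimized algorithm of [13,1], and the representation (dictionaries with missing entries meaning $+\infty$) is our own.

```python
# A dense Floyd-Warshall sketch of the closure, for intuition only; the paper
# relies on the optimized (and incremental) algorithms of Mine and Bagnara et al.
INF = float("inf")

def closure(dbm, variables):
    """Return the closed DBM over `variables` (a list of signed variables),
    or None when a negative cycle shows that the concretization is empty."""
    b = {(u, v): (0 if u == v else dbm.get((u, v), INF))
         for u in variables for v in variables}
    for w in variables:                      # standard shortest-path closure
        for u in variables:
            for v in variables:
                if b[u, w] + b[w, v] < b[u, v]:
                    b[u, v] = b[u, w] + b[w, v]
    if any(b[v, v] < 0 for v in variables):
        return None                          # gamma_pot(dbm) is empty
    return {k: x for k, x in b.items() if x < INF}
```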
Example 2.4 The closure $\alpha(\gamma_{\text{pot}}(A))$ of the DBM $A$ as defined in Example 2.1 contains the following additional finite entries:
• \( \forall u \in V_\pm, \alpha(\gamma_{pot}(A))_{uu} = 0 \)
• \( \alpha(\gamma_{pot}(A))_{\bar{y}\bar{z}} = 4 \) (corresponding to the constraint \( \rho(z) - \rho(y) \leq 4 \)).
2.1.2 Best abstractions for \( \gamma_{oct} \)
We now refine the notion of closure to canonical forms for \( \gamma_{oct} \). It is easy to see that, for any non-empty set \( S \) of regular environments, \( \alpha(S)_{uv} = \alpha(S)_{\bar{v}\bar{u}} \). Thus, canonical DBMs for \( \gamma_{oct} \) will verify the coherence property:
**Definition 2.5** [Coherent DBM] A DBM \( B \) is coherent when:
\[
\forall uv \in V_\pm, B_{uv} = B_{\bar{v}\bar{u}}
\]
Moreover, matrix elements of the form \( B_{u\overline{u}} \) (for \( u \in V_\pm \)) impose interval constraints on values of \( \rho \). These interval constraints can be combined to entail constraints on any difference of values of \( \rho \). For this reason, canonical forms for \( \gamma_{oct} \) will verify the following strong closedness property:
**Definition 2.6** [Strongly closed DBM] A DBM \( B \) is strongly closed when it is closed and coherent and:
\[
\forall uv \in V_\pm, B_{uv} \leq \frac{B_{u\bar{u}} + B_{\bar{v}v}}{2}
\]
This condition is necessary and sufficient: strong closedness characterizes canonical DBMs for \( \gamma_{oct} \).
**Theorem 2.7** Let \( B \) be a DBM. The two following properties are equivalent:
(i) \( B \) is strongly closed
(ii) \( \gamma_{oct}(B) \neq \emptyset \) and \( B = \alpha(\gamma_{oct}(B)) \)
**Proof.** See, e.g., [13, Theorems 4.3.2 and 4.3.3].
Usually [1], to compute the strong closure, one first ensures that the given matrix is coherent, then computes a closure (i.e., a canonical representative in the sense of \( \gamma_{pot} \)), and finally performs a so-called strengthening step:
**Definition 2.8** [Strengthening] Let \( B \) be a DBM. The strengthening of \( B \), noted \( \sharp S(B) \) is defined by:
\[
\sharp S(B)_{uv} = \min \left\{ \frac{B_{u\bar{u}} + B_{\bar{v}v}}{2}, B_{uv} \right\}
\]
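The following Python sketch (our own illustration, on the sparse dictionary representation used above) implements the strengthening of Definition 2.8; it makes visible how the step fills in a finite entry for every pair of bounded variables, which is exactly the densification the paper proposes to avoid.

```python
# A sketch of the strengthening step on a sparse DBM (missing entry = +infinity).
INF = float("inf")

def bar(v):
    return (v[0], -v[1])

def strengthen(dbm):
    out = dict(dbm)
    unary = {u: b for (u, v), b in dbm.items() if v == bar(u)}  # entries B[u, bar(u)]
    for u, bu in unary.items():
        for w, bw in unary.items():
            v = bar(w)                      # bw is the bound B[bar(v), v]
            if v == u:
                continue                    # diagonal entries stay implicitly 0
            cand = (bu + bw) / 2
            if cand < out.get((u, v), INF):
                out[u, v] = cand
    return out
```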
The following theorem states the correctness of the strong closure algorithm sketched above, consisting in computing a closure followed by a strengthening:
**Theorem 2.9** Let \( B \) be a coherent DBM with \( \gamma_{oct}(B) \neq \emptyset \). Then:
\[
\alpha(\gamma_{oct}(B)) = \sharp S(\alpha(\gamma_{pot}(B)))
\]
In particular, if \( B \) is coherent and closed, then \( \sharp S(B) \) is strongly closed.
**Proof.** See, e.g., [10, Theorem 8.2.7].
---
3 This is actually an improvement of the method described initially by Miné [13].
Example 2.10 In order to consider the strong closure of the DBM $A$ as defined in Example 2.1, we first need to make it coherent: let $\tilde{A}$ be the DBM containing the same entries as $A$, except that $\tilde{A}_{z\bar{y}} = 1$.
The closure of $\tilde{A}$ contains the following additional finite entries:
- $\forall u \in \mathbb{V}_\pm, \alpha(\gamma_{pot}(\tilde{A}))_{uu} = 0$
- $\alpha(\gamma_{pot}(\tilde{A}))_{zy} = \alpha(\gamma_{pot}(\tilde{A}))_{\bar{y}\bar{z}} = 4$ (corresponding to the constraint $\rho(z) - \rho(y) \leq 4$)
- $\alpha(\gamma_{pot}(\tilde{A}))_{z\bar{z}} = 5$ (corresponding to the constraint $\rho(z) \leq 5/2$).
The strong closure $\alpha(\gamma_{oct}(\tilde{A}))$ is then obtained by strengthening $\alpha(\gamma_{pot}(\tilde{A}))$. The strengthening operation creates the following new entries:
- $\alpha(\gamma_{oct}(\tilde{A}))_{xy} = \alpha(\gamma_{oct}(\tilde{A}))_{\bar{y}\bar{x}} = 2$
- $\alpha(\gamma_{oct}(\tilde{A}))_{x\bar{z}} = \alpha(\gamma_{oct}(\tilde{A}))_{z\bar{x}} = 3$.
2.2 Weak Closedness
Usually, the implementations of the Octagon abstract domain maintain all DBMs strongly closed, so that maximal information is known when performing an abstract operation. However, this breaks sparsity: indeed, matrix elements of the form $B_{u\bar{u}}$ are non-relational interval bounds on the variables: as we expect many variables to be bounded, the strengthening step gives finite bounds for many DBM cells, and a strengthened DBM loses most of its sparsity. In general, a DBM has a quadratic size in the number of variables, and therefore this loss of sparsity is costly. Previous attempts at improving performance using sparsity [17] did not make this observation. We believe that, when using these implementations, DBMs quickly become dense, hence reducing the efficiency of sparse algorithms.
In our algorithms, we propose to skip the strengthening step: instead of maintaining the invariant that all the manipulated DBMs are strongly closed, we maintain the invariant that they are weakly closed:
**Definition 2.11** [Weakly closed DBM] Let $B$ be a DBM. We say that $B$ is weakly closed when any of the two following equivalent statements hold:
(i) $B$ has a null diagonal and $\sharp S(B)$ is strongly closed;
(ii) $B$ has a null diagonal, $\sharp S(B)$ is coherent, and:
$$\forall uvw \in \mathbb{V}_\pm, \sharp S(B)_{uw} \leq B_{uv} + B_{vw}$$
**Proof.** The proof of equivalence of the definitions is in [10, Definition 8.2.5].
In order to make sure we do not lose precision, we will prove, for each of the operators defined in the next section, that it computes abstract values with the same concretization as the usual algorithms. Equivalently, we prove that the strengthening of the abstract values computed by our operators equals the abstract values computed by the usual operators on the strengthened arguments.
A weakly closed DBM is neither necessarily strongly closed nor closed. However, a closed and coherent DBM is always weakly closed: this lets us easily build weakly closed DBMs from arbitrary sets of octagonal constraints.
**Example 2.12** Continuing with the definitions of Example 2.10, \( \alpha(\gamma_{pot}(\tilde{A})) \) is closed and coherent, hence weakly closed. This DBM contains no finite entry relating the variable \( x \) to the other variables. This is an improvement in sparsity compared to the strong closure \( \alpha(\gamma_{oct}(\tilde{A})) \). To the best of our knowledge, this opportunity is not leveraged by previously known algorithms, such as [17].
This notion of weak closedness has been introduced by Bagnara et al. [1, Appendix A] as an intermediate notion for proving the correctness of the tight closure algorithm (see §3.5). To the best of our knowledge, the use of weak closedness as an invariant for manipulating sparse DBMs is an original result of our work.
3 Operations on Difference Bound Matrices
The abstract domain of octagons defines several operations manipulating difference bound matrices. They include lattice operations, like comparison and join, and abstract transfer functions, which model state change in the program.
In this section, we recall the standard definition of these operations, and give the new sparsity-preserving definition on weakly closed DBMs. All these algorithms preserve the sparsity and weak closedness of DBMs and can be proved to be as precise as the standard ones. More precisely, we claim that they always return DBMs whose strengthening equals the DBMs that would have been returned by the traditional algorithms. The implementation of the widening operation, detailed in [10, Section 8.2.7], is more complex and omitted for lack of space.
3.1 Comparison
In order to use octagons in a static analyzer, we need to define a comparison operator, taking two DBMs and returning a Boolean. If this Boolean is `true`, then we have the guarantee that the concretization of the first operand is included in that of the second operand.
A good candidate is \( \preceq \), the natural order relation between DBMs. Its soundness is guaranteed by the monotonicity of \( \gamma_{oct} \). In usual implementations of the Octagon abstract domain, DBMs are kept strongly closed, hence this operator is actually as precise as possible: it returns `true` if and only if the concretizations are included.
However, in the setting of weakly closed DBMs, this property does not hold. In order not to lose precision while still using sparse DBMs, we need another comparison operator that strengthens the bounds of the left operand when they do not directly entail the corresponding bounds of the right operand:
**Definition 3.1** [Weakly closed comparison] Let \( A \) and \( B \) be DBMs. The weakly closed comparison of \( A \) and \( B \), denoted \( A \preceq_{\text{weak}} B \), is defined by:
\[
A \preceq_{\text{weak}} B \equiv \bigwedge_{u,v \in \mathbb{V}_\pm} \left( A_{uv} \leq B_{uv} \lor \frac{A_{u\bar{u}} + A_{\bar{v}v}}{2} \leq B_{uv} \right)
\]
That is, for every finite bound on \( B \), we first check whether it is directly entailed by the corresponding bound in \( A \), and then try to entail it using non-relational...
bounds. The following theorem states that it implements the comparison on concretizations, hence we can use it in a sparse context without losing precision:
**Theorem 3.2** Let $A$ be a weakly closed DBM and $B$ any DBM. The two following statements are equivalent:
(i) $\gamma_{\text{oct}}(A) \subseteq \gamma_{\text{oct}}(B)$
(ii) $A \preceq_{\text{weak}} B$
**Proof.** See, e.g., [10, Theorem 8.2.9].
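A direct Python sketch of Definition 3.1 on sparse DBMs might look as follows (again an illustration with our own naming, not the verified implementation); note that it never materializes any new entry, so it preserves sparsity.

```python
# A sketch of the weakly closed comparison of Definition 3.1 on sparse DBMs.
INF = float("inf")

def bar(v):
    return (v[0], -v[1])

def leq_weak(a, b):
    """a <=_weak b: every finite bound of b must be entailed either by the
    corresponding bound of a, or by a's non-relational (unary) bounds."""
    for (u, v), bound in b.items():
        if a.get((u, v), INF) <= bound:
            continue
        half_sum = (a.get((u, bar(u)), INF) + a.get((bar(v), v), INF)) / 2
        if not (half_sum <= bound):
            return False
    return True
```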
3.2 Forgetting Variables
An important operation provided by the Octagon abstract domain is forget. When given a DBM and a variable $v$, it returns another DBM where all the information on $v$ has been forgotten. Its concrete and abstract definitions are given by:
**Definition 3.3** [Concrete forgetting] Let $x \in V_+$ and $S$ be a set of regular environments. We define:
$$\mathcal{F}_{\text{oct}}^x(S) = \{ \sigma[x \mapsto r; \overline{x} \mapsto -r] \mid \sigma \in S, r \in \mathbb{R} \}$$
**Definition 3.4** [Abstract forgetting]
(i) Let $x \in \mathbb{V}_\pm$ and $B$ be a DBM. We define $\sharp F_{\text{pot}}^x(B)$ as the DBM such that:
$$\sharp F_{\text{pot}}^x(B)_{uv} = \begin{cases} 0 & \text{if } u = v = x \\ +\infty & \text{otherwise if } u = x \text{ or } v = x \\ B_{uv} & \text{otherwise} \end{cases}$$
(ii) Let $x \in \mathbb{V}_+$ and $B$ be a DBM. We define:
$$\sharp F_{\text{oct}}^x(B) = \sharp F_{\text{pot}}^{\bar{x}}(\sharp F_{\text{pot}}^{x}(B))$$
It is a known result from the Octagon literature [13, Theorems 3.6.1 and 4.4.2] that $\sharp F_{\text{oct}}^x$ is sound when applied to any DBM. Moreover, when applied to any strongly closed DBM, it is exact and returns a strongly closed DBM. To these properties, we add similar properties for weak closedness, that let us use $\sharp F_{\text{oct}}^x$ as-is for weakly closed DBMs without loss of precision:
**Theorem 3.5** Let $B$ be a weakly closed DBM and $x \in V_+$. We have:
(i) $\sharp S(\sharp F_{\text{oct}}^x(B)) = \sharp F_{\text{oct}}^x(\sharp S(B))$
(ii) $\mathcal{F}_{\text{oct}}^x(\gamma_{\text{oct}}(B)) = \gamma_{\text{oct}}(\sharp F_{\text{oct}}^x(B))$
(iii) $\sharp F_{\text{oct}}^x(B)$ is weakly closed
**Proof.** See [10, Theorem 8.2.11].
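On the sparse dictionary representation, where missing entries mean $+\infty$ and diagonal entries are implicitly $0$, the abstract forget boils down to deleting the entries that mention the forgotten variable or its opposite; the sketch below is our own illustration of this reading of Definition 3.4.

```python
# A sketch of the abstract forget on a sparse DBM: since missing entries already
# mean +infinity (and diagonals are implicitly 0), forgetting x amounts to
# dropping every stored entry that mentions x or bar(x).
def bar(v):
    return (v[0], -v[1])

def forget(dbm, x):
    drop = {x, bar(x)}
    return {(u, v): b for (u, v), b in dbm.items()
            if u not in drop and v not in drop}
```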
3.3 Join
The usual join operator on DBMs is the least upper bound operator for $\preceq$:
Definition 3.6 [DBM least upper bound] Let $A$ and $B$ be two DBMs. The least upper bound $\sqcup$ on DBMs is defined by:
$$\forall uv, (A \sqcup B)_{uv} = \max\{A_{uv} ; B_{uv}\}$$
The order relation $\preceq$ and the operator $\sqcup$ clearly form an upper semi-lattice, thus usual properties on Galois connections hold, providing the usual results on the soundness and precision of this operator: $\sqcup$ is sound, and, if given strongly closed DBMs, it returns the best strongly closed DBM approximating the concrete union.
For weakly closed DBMs, even though $\sqcup$ is sound, it may lose precision when applied to non-strongly closed DBMs. For example, the weakly closed DBM $A$ represents the two following inequalities on positive variables $x$ and $y$:
$$x + x \leq 1 \quad y + y \leq 0$$
The weakly closed DBM $B$, in turn, represents the two following inequalities:
$$x + x \leq 0 \quad y + y \leq 1$$
The inequality $x + y \leq 1/2$ is not present in $A$ nor in $B$, even though it is in $\sharp S(A)$ and in $\sharp S(B)$. As a result, $A \sqcup B$ contains the inequalities $x + x \leq 1$ and $y + y \leq 1$, but does not entail $x + y \leq 1/2$, which is however entailed by $\sharp S(A) \sqcup \sharp S(B)$.
The rationale behind this example is that a join can create some amount of relationality that was not present in one or both operands. Our operator has to reflect this fact. Care should be taken, however, not to break the sparsity of the operands by introducing spurious finite values in the matrix. Our join for weakly closed DBMs is defined as follows:
Definition 3.7 [Weakly closed join for octagons] Let $A$ and $B$ be two weakly closed DBMs. We take, for $u, v \in \mathbb{V}_\pm$,
$$A^{1/2}_{uv} = \frac{A_{u\bar{u}} + A_{\bar{v}v}}{2} \qquad\text{and}\qquad B^{1/2}_{uv} = \frac{B_{u\bar{u}} + B_{\bar{v}v}}{2},$$
the bounds entailed by the non-relational (unary) entries of $A$ and $B$, respectively. The weakly closed join $\sqcup_{\text{weak}}$ is defined in two steps:
(i) We first define an intermediate join $A \sqcup'_{\text{weak}} B$. Let $u, v \in \mathbb{V}_\pm$. We define:
$$(A \sqcup'_{\text{weak}} B)_{uv} = \begin{cases}
A_{uv} & \text{if } A_{uv} = B_{uv} \\
B_{uv} & \text{if } A_{uv} < B_{uv} \leq B^{1/2}_{uv} \\
\max\{A_{uv} ; B^{1/2}_{uv}\} & \text{if } A_{uv} < B_{uv} \text{ and } B^{1/2}_{uv} < B_{uv} \\
(B \sqcup'_{\text{weak}} A)_{uv} & \text{if } A_{uv} > B_{uv}
\end{cases}$$
(ii) Let $u, v \in \mathbb{V}_\pm$. We define:
$$(A \sqcup_{\text{weak}} B)_{uv} = \begin{cases}
\min\left\{ (A \sqcup'_{\text{weak}} B)_{uv} \,;\, \max\{A^{1/2}_{uv} ; B^{1/2}_{uv}\} \right\} & \text{if } A_{u\bar{u}} < B_{u\bar{u}} \text{ and } A_{\bar{v}v} > B_{\bar{v}v}, \\
 & \text{or } A_{u\bar{u}} > B_{u\bar{u}} \text{ and } A_{\bar{v}v} < B_{\bar{v}v} \\
(A \sqcup'_{\text{weak}} B)_{uv} & \text{otherwise}
\end{cases}$$
The first step can be computed by iterating over all the matrix elements that are different in $A$ and $B$. This first step thus preserves the sparsity, and consumes computing time only for entries that differ between the two operands. The second step can be computed efficiently by first collecting in a list all the variables $u$ for which $A_{u\bar{u}} < B_{u\bar{u}}$ and, in another list, all those for which $B_{u\bar{u}} < A_{u\bar{u}}$. By iterating over the two lists, we can efficiently modify only the cells meeting the given condition. It should be noted that we break in the second step only the sparsity that needs to be broken, as the modified cells correspond to the cases where the join creates new relational information (as in the example above).
The following theorem states that this modified join operator can be used on weakly closed DBMs without losing precision or soundness:
**Theorem 3.8** Let $A$ and $B$ be two weakly closed DBMs. We have:
(i) $\sharp S(A \sqcup_{\text{weak}} B) = \sharp S(A) \sqcup \sharp S(B)$
(ii) $\gamma_{\text{oct}}(A \sqcup_{\text{weak}} B) = \gamma_{\text{oct}}(\alpha(\gamma_{\text{oct}}(A) \cup \gamma_{\text{oct}}(B)))$
(iii) $A \sqcup_{\text{weak}} B$ is weakly closed
**Proof.** See [10, Theorem 8.2.13]. □
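As a correctness reference for Theorem 3.8(i), the following Python sketch computes $\sharp S(A) \sqcup \sharp S(B)$ directly; it is deliberately not the sparsity-preserving algorithm of Definition 3.7, but the dense computation that the weak join must match up to strengthening (names and representation are ours).

```python
# A correctness reference for the join, following Theorem 3.8(i): strengthen both
# operands and take the pointwise maximum. This is NOT the sparsity-preserving
# algorithm of Definition 3.7, only the dense result it must match.
INF = float("inf")

def bar(v):
    return (v[0], -v[1])

def strengthen(dbm):
    out = dict(dbm)
    unary = {u: b for (u, v), b in dbm.items() if v == bar(u)}
    for u, bu in unary.items():
        for w, bw in unary.items():
            v = bar(w)
            if v != u:
                out[u, v] = min(out.get((u, v), INF), (bu + bw) / 2)
    return out

def join_reference(a, b):
    sa, sb = strengthen(a), strengthen(b)
    # pointwise max; an entry missing (i.e. +infinity) in either operand stays missing
    return {k: max(sa[k], sb[k]) for k in set(sa) & set(sb)}
```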
3.4 Assuming Constraints
An important operation for abstract domains is the `assume` primitive, which refines the internal state of an abstract domain using a new assumption over the set of approximated environments. In this section, we only consider the cases where this operation is exact, i.e., it does not lead to any approximation. These cases amount to assuming that $\rho(x) - \rho(y) \leq C$, for $C \in \mathbb{R}$ and $x$ and $y$ two variables. In order to deal with arbitrary linear inequalities or even arbitrary arithmetical constraints, it is necessary to write a supporting module for the Octagon domain that translates arbitrary constraints into exact ones. Such a support module is out of the scope of this paper: we refer the reader to [13] for more details. Moreover, note that the combination of `assume` together with `forget` lets us emulate variable assignment, hence we do not detail variable assignment in this paper.
We give the `assume` primitive in two versions: one adapted to $\gamma_{\text{pot}}$, and one adapted to $\gamma_{\text{oct}}$. We first give the concrete semantics of this operation, which is the same for irregular and regular environments:
**Definition 3.9** [Assuming constraints in the concrete] Let $C \in \mathbb{R}$, $x, y \in V_\pm$ and $S$ be a set of irregular environments. We define:
$$A^{x-y \leq C}(S) = \{\sigma \in S | \sigma(x) - \sigma(y) \leq C\}$$
It is easy to see that we can reflect exactly this operation in DBMs. Indeed, it suffices to change the cell corresponding to the new constraint, if the old value is larger than the new one. However, this does not maintain any kind of closedness, whether it be the normal closure, the strong closure or the weak closedness. As a result, it is necessary to run a closure algorithm when inserting the new constraint. These algorithms are costly (i.e., cubic complexity), and do not leverage the fact that the input matrix is already almost closed. For this reason, incremental closure
---
4 An efficient implementation would however use a specific, optimized implementation for assignments.
algorithms have been developed, with quadratic complexity. We give here a slightly different presentation of these algorithms from the one originally given by Miné [13]:
**Definition 3.10** [Assuming constraints in the abstract] Let \( C \in \mathbb{R} \), \( B \) be a DBM and \( x, y \in \mathbb{V}_\pm \).
(i) We define \( \mathcal{A}_{\text{pot}}^{x-y \leq C}(B) \) as the DBM such that, for \( u, v \in \mathbb{V}_\pm \):
\[
\mathcal{A}_{\text{pot}}^{x-y \leq C}(B)_{uv} = \min\{B_{uv} ; B_{ux} + C + B_{yv}\}
\]
(ii) We define \( \mathcal{A}_{\text{weak}}^{x-y \leq C}(B) \) and \( \mathcal{A}_{\text{oct}}^{x-y \leq C}(B) \) as:
\[
\mathcal{A}_{\text{weak}}^{x-y \leq C}(B) = \mathcal{A}_{\text{pot}}^{\bar{y}-\bar{x} \leq C}(\mathcal{A}_{\text{pot}}^{x-y \leq C}(B))
\]
\[
\mathcal{A}_{\text{oct}}^{x-y \leq C}(B) = \sharp S(\mathcal{A}_{\text{weak}}^{x-y \leq C}(B))
\]
It is well-known [10, Theorem 8.2.14] that \( \mathcal{A}_{\text{oct}}^{x-y \leq C} \) is sound and exact when applied to a DBM with a null diagonal. When applied to a strongly closed DBM \( B \) with \( 0 \leq C + B_{yx} \), the result is strongly closed. Therefore, an implementation of the **assume** primitive in the strongly closed setting first checks whether \( 0 \leq C + B_{yx} \). If so, it returns \( \mathcal{A}_{\text{oct}}^{x-y \leq C} \); otherwise it returns \( \bot \).
In particular, when applied to weakly closed DBMs, \( \mathcal{A}_{\text{oct}}^{x-y \leq C} \) is sound and exact, since weakly closed DBMs have null diagonals. However, because this operator uses \( \sharp S \), it breaks sparsity. The advantage of using weakly closed DBMs is that, in this setting, \( \sharp S \) is no longer needed: \( \mathcal{A}_{\text{weak}}^{x-y \leq C} \) can be used as-is, provided the implementation additionally checks that \( 0 \leq 2C + B_{y\bar{y}} + B_{\bar{x}x} \). The following theorem summarizes this result, and justifies the use of this transfer function in the context of sparse DBMs without loss of precision:
**Theorem 3.11** Let \( C \in \mathbb{R} \), \( B \) be a weakly closed DBM and \( x, y \in \mathbb{V}_\pm \). We have:
(i) If \( 0 \leq 2C + B_{y\bar{y}} + B_{\bar{x}x} \), then \( \sharp S(\mathcal{A}_{\text{weak}}^{x-y \leq C}(B)) = \mathcal{A}_{\text{oct}}^{x-y \leq C}(\sharp S(B)) \)
(ii) \( \gamma_{\text{oct}}(\mathcal{A}_{\text{weak}}^{x-y \leq C}(B)) = A^{x-y \leq C}(\gamma_{\text{oct}}(B)) \)
(iii) If \( B \) is weakly closed, the following statements are equivalent:
(i) \( A^{x-y \leq C}(\gamma_{\text{oct}}(B)) \neq \emptyset \)
(ii) \( 0 \leq \mathcal{A}_{\text{weak}}^{x-y \leq C}(B)_{xx} \)
(iii) \( 0 \leq C + B_{yx} \) and \( 0 \leq 2C + B_{y\bar{y}} + B_{\bar{x}x} \)
(iv) \( \mathcal{A}_{\text{weak}}^{x-y \leq C}(B) \) is weakly closed.
**Proof.** See, e.g., [10, Theorem 8.2.15]. \( \square \)
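The following Python sketch illustrates Definition 3.10 on the sparse representation, applying the incremental update for the constraint and for its coherent counterpart; a negative diagonal entry in the result signals an empty concretization, matching condition (ii) of Theorem 3.11(iii). As before, the representation and names are our own.

```python
# A sketch of the incremental update of Definition 3.10 on a sparse DBM, adding
# the constraint sigma(x) - sigma(y) <= c together with its coherent counterpart.
INF = float("inf")

def bar(v):
    return (v[0], -v[1])

def assume_pot(dbm, x, y, c, variables):
    """A_pot^{x-y<=c}: tighten every entry by going through the new arc x -> y.
    Only entries with finite dbm[u, x] and dbm[y, v] can actually change,
    which is what a sparse implementation iterates over."""
    get = lambda u, v: (0 if u == v else dbm.get((u, v), INF))
    out = dict(dbm)
    for u in variables:
        for v in variables:
            cand = get(u, x) + c + get(y, v)
            if cand < get(u, v):
                out[u, v] = cand     # a negative diagonal entry means "empty"
    return out

def assume_weak(dbm, x, y, c, variables):
    step = assume_pot(dbm, x, y, c, variables)
    return assume_pot(step, bar(y), bar(x), c, variables)
```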
3.5 Tightening
Miné [13] and Bagnara et al. [1] study the case of the Octagon abstract domain when the considered environments take only values in \( \mathbb{Z} \): in contrast with the previous sections, in this case, the strongly closed DBMs are not all canonical, so that modified algorithms need to be used. We explain here that the use of the weakly closed setting is compatible with the integer case. To this end, we define a different concretization function, \( \gamma_{\text{oct}}^{\mathbb{Z}} \), that concretizes to integer environments:
Definition 3.12 [Integer concretization of octagons] Let $B$ be a DBM. We define:
$$
\gamma^Z_{oct}(B) = \{ \rho \in \gamma_{oct}(B) \mid \forall u \in V_+, \rho(u) \in \mathbb{Z} \}
$$
If we consider only integer environments, best abstractions have a slightly stronger characterization. Such DBM are said *tightly closed*. We also define the notion of *weakly tightly closed* DBMs, which is the analog of *weakly closed* DBMs for the integer case:
Definition 3.13 [Tight closure] Let $B$ be a DBM. $B$ is *tightly closed* (respectively weakly tightly closed) when:
- $B$ is strongly closed (respectively weakly closed)
- $\forall uv \in V_\pm, B_{uv} \in \mathbb{Z}$
- $\forall u \in V_\pm, \frac{B_{u\bar{u}}}{2} \in \mathbb{Z}$
Tightly closed DBMs are exactly best abstractions for integer environments [10, Theorem 8.2.17]. Bagnara et al. [1, §6] give efficient algorithms for computing the tight closure of a DBM. It consists in using a *tightening* operation before strengthening. The tightening operation is defined by:
Definition 3.14 [Tightening] Let $B$ be a DBM with elements in $\mathbb{Z}$. We define $\sharp T(B)$ to be the DBM with elements in $\mathbb{Z}$ such that, for $u, v \in V_\pm$:
$$
\sharp T(B)_{uv} = \begin{cases}
B_{uv} - 1 & \text{if } u = \overline{v} \text{ and } B_{uv} \text{ is odd} \\
B_{uv} & \text{otherwise}
\end{cases}
$$
The following theorem gives the essential property of the tightening operation:
Theorem 3.15 Let $B$ be a weakly closed DBM with elements in $\mathbb{Z}$. We suppose that $\forall u \in V_\pm, 0 \leq \sharp T(B)_{u\bar{u}} + \sharp T(B)_{\bar{u}u}$. Then $\sharp T(B)$ is weakly tightly closed.
Proof. See, e.g., [10, Theorem 8.2.18]. □
This theorem has two consequences. First, as already explained by Bagnara et al. [1, §6], it gives an efficient algorithm to compute tight closure: one would compute the closure of the input matrix, then tighten it and finally strengthen it. Second, our sparse algorithms need only small adjustments when used with integer environments: instead of maintaining the DBMs weakly closed, we just have to make them weakly tightly closed by tightening them after each operation.
Note, however, that tightening does not address the case of mixed environments, where some variables are known to have integer values, and some others can have arbitrary real values. To the best of our knowledge, there is no known efficient closure algorithm supporting this use case, even in the dense setting.
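A sketch of the tightening step of Definition 3.14 on the sparse integer representation (our own illustration): only the unary entries $B_{u\bar{u}}$ are modified, so tightening preserves sparsity.

```python
# A sketch of the tightening step of Definition 3.14 on a sparse integer DBM:
# unary entries B[u, bar(u)] are rounded down to the nearest even integer
# (i.e. decreased by 1 when odd); all other entries are kept unchanged.
def bar(v):
    return (v[0], -v[1])

def tighten(dbm):
    return {(u, v): (b - 1 if v == bar(u) and b % 2 != 0 else b)
            for (u, v), b in dbm.items()}
```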
4 Conclusion
In this paper, we presented new algorithms for the Octagon abstract domain, which preserve the sparsity of the representation of octagons. These algorithms are as
precise as the usual ones, and rely on a weaker invariant over difference bound
matrices, called weak closedness. We have shown that these algorithms can be used
in the context of rational or real environments as well as in the context of integer
environments.
We implemented and formally verified in Coq these algorithms in the context of
the Verasco static analyzer [9,10,18]. The use of these new algorithms improved the
performance of the Octagon abstract domain by at least one order of magnitude.
There are still possible improvements to these algorithms: in particular, we think
that it could be profitable to sparsify difference bound matrices as much as possible
after each abstract operation, while still maintaining them weakly closed. Indeed,
abstract operations may infer bounds in difference bound matrices that can actually
be deduced from non-relational bounds, therefore missing opportunities for sparsity.
We think the reduction algorithm presented by Bagnara et al. [1] can be adapted
to compute reduced difference bound matrices using only weakly closed difference
bound matrices. This would lead to a simpler widening algorithm based on a semantic
definition as described by Bagnara et al. [1, §4.2]. We believe the implementation
of these new algorithms in state-of-the-art static analyzers, by using, for example,
the framework developed by Singh et al. [17] would lead to a significant performance
improvement.
References
LNCS 8858 (2014), pp. 296–313.
327.
[8] Cousot, P. and N. Halbwachs, Automatic discovery of linear restraints among variables of a program,
(2016).
VMCAI, LNCS 5403 (2009), pp. 229–244.
[12] Logozzo, F. and M. Fähndrich, Pentagons: a weakly relational abstract domain for the efficient validation of array accesses,
|
{"Source-Url": "https://inria.hal.science/hal-01406795/file/jourdan2016sparsity.pdf", "len_cl100k_base": 10322, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 51930, "total-output-tokens": 12303, "length": "2e13", "weborganizer": {"__label__adult": 0.0005207061767578125, "__label__art_design": 0.0007028579711914062, "__label__crime_law": 0.0007615089416503906, "__label__education_jobs": 0.0015687942504882812, "__label__entertainment": 0.00012135505676269533, "__label__fashion_beauty": 0.0003039836883544922, "__label__finance_business": 0.0005435943603515625, "__label__food_dining": 0.0005183219909667969, "__label__games": 0.0011806488037109375, "__label__hardware": 0.0015230178833007812, "__label__health": 0.0017528533935546875, "__label__history": 0.0005564689636230469, "__label__home_hobbies": 0.00021076202392578125, "__label__industrial": 0.0009522438049316406, "__label__literature": 0.0004732608795166016, "__label__politics": 0.0005459785461425781, "__label__religion": 0.0008511543273925781, "__label__science_tech": 0.274169921875, "__label__social_life": 0.0001608133316040039, "__label__software": 0.0079803466796875, "__label__software_dev": 0.70263671875, "__label__sports_fitness": 0.0005059242248535156, "__label__transportation": 0.0009899139404296875, "__label__travel": 0.0003006458282470703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38606, 0.03483]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38606, 0.41565]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38606, 0.86]], "google_gemma-3-12b-it_contains_pii": [[0, 905, false], [905, 3327, null], [3327, 6812, null], [6812, 8464, null], [8464, 11486, null], [11486, 14075, null], [14075, 17150, null], [17150, 20174, null], [20174, 22416, null], [22416, 25418, null], [25418, 28642, null], [28642, 32083, null], [32083, 34744, null], [34744, 38063, null], [38063, 38606, null]], "google_gemma-3-12b-it_is_public_document": [[0, 905, true], [905, 3327, null], [3327, 6812, null], [6812, 8464, null], [8464, 11486, null], [11486, 14075, null], [14075, 17150, null], [17150, 20174, null], [20174, 22416, null], [22416, 25418, null], [25418, 28642, null], [28642, 32083, null], [32083, 34744, null], [34744, 38063, null], [38063, 38606, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38606, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38606, null]], "pdf_page_numbers": [[0, 905, 1], [905, 3327, 2], [3327, 6812, 3], [6812, 8464, 4], [8464, 11486, 5], [11486, 14075, 6], [14075, 17150, 7], [17150, 20174, 8], [20174, 22416, 9], [22416, 25418, 10], [25418, 28642, 11], [28642, 32083, 12], 
[32083, 34744, 13], [34744, 38063, 14], [38063, 38606, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38606, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
061df604f5408c2bbfb1100f77c49b44dc454e39
|
Automated embedding of dynamic libraries into iOS applications from GNU/Linux
Marwin Baumann
marwin.baumann@os3.nl
Leandro Velasco
leandro.velasco@os3.nl
Research Project II
Supervised by:
Cedric van Bockhaven
Deloitte
July 21, 2017
Abstract
The need for mobile application security assessments has increased since more iOS applications are released every day. Multiple incidents have become public where applications violated the user’s privacy, or provided functionality not allowed by the license agreement. Dynamic analysis of applications is used by security researchers because the source code is often not publicly available. Dynamic analysis enables monitoring the behavior of an application while it is being executed. A jailbroken iOS device is usually required in order to conduct the analysis. To overcome this limitation a dynamically linked library can be embedded into the target iOS application. This library enables the dynamic analysis for that application on a non-jailbroken device. This method is often used since jail-breaks for the latest iOS version are not always available. Moreover, it has become increasingly common that developers implement jail-break detection in their applications. The process of embedding dynamic libraries is mostly implemented by macOS native tools, is time consuming, and information about the inner workings of this process is scarce.
The work presented in this paper is aimed at overcoming these limitations by porting and automating this process from GNU/Linux. This would make security assessments more efficient and accessible to the open source community. Besides, by removing the requirement of macOS, security researchers could work with their preferred platform or run a GNU/Linux Virtual Machine. To accomplish our goal, we performed a theoretical analysis of the current state of the art, identified the different steps of the embedding process, and explored the way of implementing and automating each of the steps in GNU/Linux. In this report, we present an open-sourced automated solution that we have developed and discuss its applicability and limitations. We conclude that it is possible to automate from GNU/Linux the process of embedding a dynamic library into an existing iOS application.
Contents
1 Introduction
1.1 Research Question
1.2 Report Structure
2 Related Work
3 Approach and Methods
3.1 Hardware Used
4 Application Acquisition
4.1 iOS App Store Package Extraction
5 Executable Modification
5.1 Practical Implementation
6 Re-signing the iOS App Store Package
6.1 Code Signing Implementation
7 Provisioning Profile Generation
7.1 Individual / Enterprise Developer account
7.2 Free Apple account
8 Install the iOS App Store Package
8.1 Deployment Implementation
8.2 Running the modified application
9 Automation
10 Discussion and Future Work
11 Conclusion
Appendices
A Clutch
1 Introduction
All iDevices (e.g. iPhone, iPod Touch, and iPad) can access and download applications from the App Store. Apple scrutinizes each submitted application, but not always sufficiently. Multiple incidents have become public in which applications violated the user’s privacy [1] [2], or provided functionality not allowed by the license agreement [3]. Therefore, it is important for users and society to continuously monitor new applications.
Besides, with more mobile applications released every day, the need for security assessments has increased [4]. In order to conduct such a security assessment, dynamic analysis of an application is often used by security researchers. The reason is that the source code is not always available. Dynamic analysis enables monitoring the behavior of an application while it is being executed. Moreover, dynamic analysis is used to monitor the invocation of functions, track how data is propagated through the application, and to modify the behavior of the application [5]. It can also be useful for developers because this technique is capable of exposing subtle flaws in the source code.
To conduct the dynamic analysis of iOS applications two methods can be used. The first method consists in installing a special application on an iDevice which can monitor any application installed. The second method is to embed a dynamically linked library into the application to be monitored [6]. The first method is the easiest, but requires that a jailbroken iDevice is used. This restriction is imposed by the sandbox model of Apple [7] that enforces a mechanism in which applications are isolated from the rest of the system. The advantage of the second method is that no jailbroken iDevice is required. This method is often used since jail-breaks for the latest iOS versions are not always available, and developers increasingly implement jail-break detection [8]. In order to conduct the dynamic analysis using the second method the dynamic libraries provided by the Frida [9] and Cycript [10] project are often used. These dynamic libraries, known as “gadgets”, are instrumentation tools that allow the inspection and debugging of applications during runtime [11].
In order to embed a dynamic library, four steps need to be taken as shown in Figure 1. Firstly, the iOS application needs to be extracted from the device. iOS applications are stored in the iOS Application Archive (IPA) format and are often encrypted and protected by Digital Rights Management (DRM). Therefore, the IPA file should be decrypted and the DRM needs to be removed before continuing the process. The second step is to link the dynamic library to the application and to rebuild it. The third step consists of re-signing the application because modifying the application makes the original signature invalid. It is possible to re-sign the application by a different person than the original developer. The only requirement is a provisioning profile which can be obtained from Apple using a valid Apple account. Finally, the signed IPA is pushed back to the device and then the application can be used.
[Figure 1: overview of the four steps of embedding a dynamic library into an iOS application]
Due to the closed nature of Apple’s ecosystem, most of the steps in this process can only be executed using a computer running macOS [8]. This limitation obliges security researchers to use an Apple computer when performing security evaluations on iOS applications. One way to overcome this limitation would be to run macOS in a Virtual Machine (VM), however there
are strict limitations to running macOS on non-Apple hosts [12]. Using GNU/Linux would be more convenient, as this operating system does not have legal constraints to run as a guest in a VM. Moreover, GNU/Linux distributions are often chosen by security researchers because they are open source and because they provide a comprehensive set of security assessment tools.
The whole process of embedding dynamic libraries into iOS applications is time consuming because it consists of many steps to be executed. Moreover, the inner workings of this process are poorly documented. Exploring ways to execute all of the mentioned steps on GNU/Linux in an automated fashion is needed in order to make mobile app security assessments more efficient and accessible to the open source community. Currently there are some tools available for GNU/Linux that implement re-signing and deploying of iOS applications. But whether these tools still function after embedding a dynamic library into the application is unclear. In addition, to what extent these tools can be automated is unclear. No tools are available that enable the embedding of the dynamic library into the application without the use of macOS. Therefore, the aim of this study is to explore the feasibility of porting and automating the process of dynamic library embedding into iOS applications from GNU/Linux.
1.1 Research Question
Our main research question is: **Is it possible from GNU/Linux to automate the process of embedding dynamic libraries into iOS applications?**
To answer this question we will investigate in depth how the different steps of dynamic library embedding in iOS applications work and subsequently if it is possible from GNU/Linux to fully port and automate these steps.
1.2 Report Structure
In Section 2, we portray the related work done on this topic. In Section 3 we describe the approach of the project in order to answer the research questions. The different steps of the dynamic library embedding process are elaborated from Section 4 until Section 8. These sections start with an in depth explanation about the step and continue by describing the work done to port this step to GNU/Linux. In Section 9 we explain how to automate all the steps from GNU/Linux. Finally, in Section 10 and Section 11 the limitations and conclusion are presented.
2 Related Work
Embedding a dynamic library into an application is not a new concept. This technique is also used for other applications running on other operating systems such as GNU/Linux, Windows, and macOS [13]. For example, to record different classes of function calls, such as API calls to the Windows API, or system calls for Linux [5]. Since jailbroken devices are not always available, during the last years this technique has been developed and refined for applications running on iOS.
In October 2014, Jonathan Zdziarski wrote an article [14] in which he explains how an attacker can obtain an iOS application, embed external code in the binary executable, and then install the patched application on a non-jailbroken device. These techniques can be leveraged to embed dynamic libraries into iOS applications. In February 2015, Carl Livitt wrote a series of articles [15] that expanded the work done by Zdziarski. In his articles he covers the essentials of adding dynamic libraries to iOS applications and describes the tools required to execute this process on non-jailbroken devices. In October 2016, Adrian Villa published an article [8] in which he describes the procedure to enable instrumentation of iOS applications on non-jailbroken devices. This procedure consists of embedding the Frida dynamic library into an iOS application, re-signing the patched binary with a provisioning profile, and then deploying the modified application to the jailed device.
During the Black Hat conference of 2016, Nishant Das Patnaik presented a framework called Appmon [16]. This software suite was designed to monitor and inspect system API calls of native apps on macOS, iOS and Android. To do so, it relies on the Frida project. For the case of non-jailbroken iOS devices, Appmon provides a way to embed the Frida gadget into the target application.
A limitation that the different aforementioned publications share is that a macOS system with the Xcode framework installed is required. Xcode is an integrated development environment used to write and compile software for macOS, iOS devices, the Apple Watch, and the Apple TV [17]. The embedding process is mostly implemented by Xcode and other Apple native tools, and little is documented about their inner workings. The work presented in this paper is aimed at overcoming these limitations by investigating how the different steps of dynamic library embedding in iOS applications work and by trying to port and automate these steps from GNU/Linux.
3 Approach and Methods
In order to answer our main research question, many challenges need to be addressed. Firstly, a theoretical analysis of the current state of the art needs to be performed. During this analysis the different steps and the requirements for dynamic library embedding into iOS applications will be identified. Moreover, a detailed investigation of the different file formats, procedures, and protocols involved in each of the steps will be done. For instance, the macOS executable file format and the iOS App Store Package (IPA) format will be covered.
Next, the internals of the iOS application’s building process will be explored. This includes provisioning profile generation, code signing, and application deployment. In order to get a better understanding of the internals, we will use common analysis techniques such as reverse engineering, source code review, and network packet analysis.
Once the steps and requirements of the embedding process are clear, we will analyze the different possibilities to re-implement the process in GNU/Linux. This will be done by exploring tools already ported to Linux, porting when possible the tools that are only for macOS, or writing new tools. Finally, we will investigate how the embedding process can be automated. In the case that automation is feasible, a proof of concept will be implemented.
In order to verify the correctness of the process and the developed proof of concept, several tests will be executed. These tests will be executed using two iOS devices, as explained in the following section. We will also verify the proof of concept using different iOS applications.
3.1 Hardware Used
For this research project we will use two laptops and two iOS devices in order to execute our experiments. The first laptop will be a MacBook Pro running macOS 10.12. This system will be used to investigate the internals of the dynamic library embedding process. Moreover, this laptop will have the Xcode 8.3.3 framework installed and will be used to analyze the components and tools involved in the process. The second laptop will be running the GNU/Linux distribution Fedora 25. This laptop will be used to explore the tools that are already ported. Furthermore, on this system we will conduct the experiments needed to port the steps that are currently bound to macOS.
Finally, in order to verify that our experiments are successful, we will use two non-jailbroken devices: an iPad Mini 3 running iOS 10.2.1 and an iPhone 6s running iOS 10.3.2. In the event of discovering that it is not possible to execute a certain experiment from a non-jailbroken device, we have limited access to a jailbroken iPhone 6 running iOS 10.2.
4 Application Acquisition
As previously mentioned, the first step of the dynamic library embedding process is the acquisition of the iOS application. iOS developers can use two formats to publish or distribute their applications. These are the app bundle (.app) format and the iOS App Store Package (IPA) format [18].
The app bundle consists of resource files and at least one executable binary file. The resources are everything an application needs besides the code itself, for example storyboards, images, and audio files. The executable binary file contains the machine code and conforms to the mach-o executable file format [19]. An app bundle always contains the following resources: the Info.plist file, the Base.lproj folder, and the _CodeSignature folder. The Info.plist file, also called the information property list file, lists the metadata of the application, such as the bundle name, bundle version, and the requirements [20]. The Base.lproj folder contains all the storyboard (.storyboard) or XML Interface Builder (.xib) files in the development language [21]. These files store the visual representation of the user interface of an iOS application [22]. The _CodeSignature folder contains the signature of the app bundle [23], which guarantees the integrity of the files. Finally, if the application is not distributed through the App Store but, for example, for testing purposes, an embedded.mobileprovision file is included in the app bundle [24]. This is the provisioning profile, which specifies the permissions of the application as well as the signer identity.
```
Application.ipa
|- Payload
|  |- Application.app
|     |- Application
|     |- Base.lproj
|     |- Info.plist
|     |- _CodeSignature
|     |  |- CodeResources
|     |- embedded.mobileprovision
|     |- AppIcon29x29.png
|     |- ...
|- iTunesMetadata.plist
|- iTunesArtwork
```
Figure 2: Tree structure of a general iOS App Store Package
The iOS App Store Package (IPA) is a compressed directory (ZIP archive) [25] containing the app bundle and additional resources needed for App Store services. The general structure of an IPA is shown in Figure 2. The IPA archive consists of the files iTunesArtwork and iTunesMetadata.plist, and a Payload directory containing the app bundle. The iTunesArtwork file is a PNG image containing the app’s icon as shown in iTunes and the App Store [26]. The iTunesMetadata.plist file is used to provide extra information to iTunes about an iOS application, such as genre, supported iOS devices, and required device capabilities. This file is not included in the IPA if the app is distributed via ad hoc distribution, e.g., distributed for testing.
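Because an IPA is an ordinary ZIP archive, its contents can be inspected with standard tooling. The following Python sketch opens an IPA and reads its Info.plist; the file name is illustrative:

```python
import plistlib
import zipfile

# Open the IPA (a ZIP archive) and locate the app bundle's Info.plist.
# "Application.ipa" is a placeholder for any IPA file on disk.
with zipfile.ZipFile("Application.ipa") as ipa:
    info_path = next(
        name for name in ipa.namelist()
        if name.count("/") == 2 and name.endswith(".app/Info.plist")
    )
    info = plistlib.loads(ipa.read(info_path))

print(info["CFBundleIdentifier"])  # e.g. com.rbtdigital.Battery-Life
print(info.get("CFBundleVersion"))
```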
Apple uses two types of IPA: the universal and the thinned type. A universal IPA is a compressed app bundle that contains all of the resources to run the app on any device. In this case the executable file contains multiple binaries for different ARM architectures and is also called a “fat” binary. A thinned IPA is a compressed app bundle that only contains the resources needed to run the app on a specific device type [27]. This optimization, called App Thinning, produces an IPA that only supports a single architecture and is therefore smaller. For App Store apps, the thinned IPA is downloaded to devices running iOS 9 or later, and the universal IPA is downloaded to devices running iOS 8 or earlier. For example, when downloading an application on the iPhone 6 running iOS 10, the executable file will only contain the binary for the 64-bit ARMv8-A architecture [28].
Furthermore, the IPA is protected by the Digital Rights Management (DRM) technology called FairPlay [29]. When a user downloads an application from the App Store, this application is encrypted and signed by Apple. To protect the application even further, Apple injects a 4196-byte header into the executable file within the application. This header is encrypted with the public key associated with the Apple account of the user [30]. When the application is installed, the iOS device decrypts the header with the private key of the user, which will succeed if the application was downloaded from the App Store with matching user credentials. This technique prevents users from installing the application on a device associated with a different Apple account.
4.1 iOS App Store Package Extraction
Apple provides users two ways of making a backup of their device: via iCloud or via iTunes. Using the second option makes it possible to extract the IPA files from this backup. A number of tools provide functionality to extract IPA files from iOS backups: i-FunBox [31], iMazing [32], and the archive functionality of ideviceinstaller [33]. All these tools can be installed on Windows and macOS, but the only one that can also be installed on GNU/Linux is ideviceinstaller. However, due to a change in the way applications are backed up in iOS 9 and higher, these tools can only extract IPA files from backups of devices running iOS 8 or lower.
Nowadays, only the user data is stored in the backup and not the applications themselves. When restoring the iOS device from a backup, the installed apps are downloaded again and only the user data is restored from the backup [27]. Our hypothesis is that this decision was made due to App Thinning and to improve the overall security [34] [35]. First, App Thinning would impose limitations on Apple’s backup system by preventing users from restoring applications on an iDevice with a different architecture. Second, by only backing up the user data and not the application, the latest version of the app will be installed when restoring the backup, thus improving the overall security of the iOS ecosystem by avoiding the use of outdated applications.
Since iOS 9 the only official way to obtain an IPA is through the “Download Purchased Application” functionality provided by iTunes [36]. Using this method a user can download purchased applications for backup purposes. Whereas the thinned IPA is downloaded and installed on the iOS device, the iTunes functionality downloads the universal IPA. However, the universal IPA is still protected by Apple using FairPlay. Since the IPA archive is encrypted and iTunes is not available for GNU/Linux, another approach is needed to acquire the application.
To overcome the encryption limitation, the following process can be used [37] [38]. First the application needs to be installed and launched on an iOS device. When an iOS application is launched, the loader decrypts it and loads it into memory. In order to retrieve this decrypted version from memory, the size of the payload needs to be calculated first. Next, the memory loading address of the application needs to be found, the decrypted portion of the application dumped using a debugger such as the GNU Project debugger, and the encrypted area of the application executable overwritten with the dumped data. Note that only the executable file is encrypted by Apple; the resource files are never encrypted. Finally, the cryptid flag in the executable needs to be changed to 0. cryptid specifies whether an application needs to be decrypted before loading into memory; when this value is 0, iOS will not try to decrypt the application again. However, since normal users do not have enough rights to dump data from memory, a jailbroken device is needed in order to execute this process.
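The cryptid flag lives in the LC_ENCRYPTION_INFO load command of the mach-o executable (the file format is detailed in Section 5). The following Python sketch locates and reports the flag, assuming a thin 64-bit mach-o; the constants are taken from Apple’s loader.h:

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF          # thin 64-bit mach-o, little-endian
LC_ENCRYPTION_INFO_64 = 0x2C      # the 32-bit variant is 0x21

def read_cryptid(path):
    """Walk the load commands and return (cryptoff, cryptsize, cryptid),
    or None if the binary carries no encryption info command."""
    with open(path, "rb") as f:
        data = f.read()
    magic, _, _, _, ncmds, _, _, _ = struct.unpack_from("<8I", data, 0)
    assert magic == MH_MAGIC_64, "sketch only handles thin 64-bit mach-o"
    offset = 32                    # sizeof(struct mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<2I", data, offset)
        if cmd == LC_ENCRYPTION_INFO_64:
            return struct.unpack_from("<3I", data, offset + 8)
        offset += cmdsize
    return None
```

Patching cryptid to 0 then amounts to writing a zero back at the corresponding offset after the dumped data has replaced the encrypted region.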
The steps can be executed manually as described above, but they are also automated by applications like Clutch [39]. Clutch can be executed on GNU/Linux, but requires a jailbroken device running iOS 8 or later. Clutch works by executing a binary on the iOS device, with which the host machine communicates over SSH via USB. It hooks into the device runtime to dump the application from memory into an unsigned application. Using Clutch and a jailbroken iPhone 6, we extracted five applications: a battery life app, two QR-code scanner apps, the 9292 travel app, and the Wikipedia app. See Section 9 for more details about the extracted applications and refer to Appendix A for an overview of Clutch usage.
5 Executable Modification
An important step of the embedding process is the linking of the selected dynamic library (dylib) file to the main executable file. First the iOS executable file format needs to be studied, in order to know how and where to link this file.
The iOS application binary interface (ABI) uses the mach-o format as the standard for binaries and libraries [19]. This file format consists of different regions, each having a special purpose. When multi-architecture support is needed, developers can aggregate multiple mach-o files into a single executable file. These special binaries, called fat or universal binaries, contain a header that identifies the file as a fat file and indicates the number of architectures contained in the binary. When an executable is compiled for a single architecture, this results in a thin binary file. This type of executable consists of a single mach-o file without extra headers. Figure 3 depicts a diagram of the mach-o file format.
Figure 3: Mach-o file format [19]
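Distinguishing a fat binary from a thin one comes down to reading the magic number at the start of the file. A Python sketch, with the constants taken from Apple’s fat.h:

```python
import struct

FAT_MAGIC = 0xCAFEBABE    # fat header; stored big-endian on disk

def list_architectures(path):
    """Return (cputype, offset, size) for each embedded mach-o,
    or a marker when the file is a thin (single mach-o) binary."""
    with open(path, "rb") as f:
        magic, count = struct.unpack(">2I", f.read(8))
        if magic != FAT_MAGIC:
            return "thin binary (single mach-o)"
        archs = []
        for _ in range(count):
            cputype, cpusubtype, offset, size, align = \
                struct.unpack(">5I", f.read(20))
            archs.append((hex(cputype), offset, size))
        return archs
```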
The first region of a mach-o file is the header structure. This defines the file as a mach-o file and indicates the target architecture. Moreover, the header structure contains the binary flags and information about the file type (dylib, executable, bundle, etc.). The fields most relevant for the embedding process are `ncmds` and `sizeofcmds`, where `ncmds` indicates the number of load commands and `sizeofcmds` indicates the bytes occupied by the load command region.
The second region is the load command region. Here, the layout and linkage properties of the mach-o file are defined. Every load command starts with the same two fields: `cmd`, which specifies the command type, and `cmdsize`, which indicates the size of the command data. Depending on the command type, the rest of the command structure and the total size can vary. An example of a common load command is the `LC_LOAD_DYLIB` command. This load command is used to link an executable file with a dynamic library and consists of three parts: the command type, which is `LC_LOAD_DYLIB`; the command size; and a dylib data structure. This data structure specifies the attributes of the shared library that an executable file links against. Moreover, the dylib data structure will be accessed by the dynamic linker at runtime to locate the shared library.
The last region is the data region. Here the code and data are stored as specified by the load commands. This region is organized in segments that, depending on the type, can contain zero or more sections. The exact number and layout of segments and sections is specified by the load commands and the file type defined in the header structure. An important segment that can be found in every executable file is the `__TEXT` segment. This segment contains executable code and other read-only data. Additionally, it can include sections such as `__const` and `__cstring`, where initialized constant variables and strings are placed.
To embed a dynamic library into an already existing binary, one needs to manipulate the mach-o file. This is done by inserting an `LC_LOAD_DYLIB` command into the load command region. Since the number of load commands changes, the `ncmds` and `sizeofcmds` values in the header structure are no longer valid. To fix this, these attributes need to be recalculated during the mach-o manipulation.
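A minimal Python sketch of this manipulation for a thin 64-bit mach-o; the command layout follows loader.h, and fat binaries, error handling, and signature stripping are omitted:

```python
import struct

LC_LOAD_DYLIB = 0xC   # from loader.h

def add_load_dylib(data: bytes, dylib_path: bytes) -> bytearray:
    """Append an LC_LOAD_DYLIB command after the existing load commands
    and fix up ncmds/sizeofcmds in the mach-o header."""
    # dylib_command: cmd, cmdsize, name offset (lc_str), timestamp,
    # current_version, compatibility_version, followed by the path string.
    name = dylib_path + b"\x00"
    name += b"\x00" * (-len(name) % 8)          # keep cmdsize 8-byte aligned
    cmdsize = 24 + len(name)
    command = struct.pack("<6I", LC_LOAD_DYLIB, cmdsize, 24, 0, 0, 0) + name

    patched = bytearray(data)
    ncmds, sizeofcmds = struct.unpack_from("<2I", patched, 16)
    end = 32 + sizeofcmds                       # 32 == sizeof(mach_header_64)
    # Write into the zero padding that normally sits between the load
    # commands and the first section (the space insert_dylib also uses).
    assert patched[end:end + cmdsize] == b"\x00" * cmdsize, "no free padding"
    patched[end:end + cmdsize] = command
    struct.pack_into("<2I", patched, 16, ncmds + 1, sizeofcmds + cmdsize)
    return patched

# Usage (paths illustrative):
# data = open("Application", "rb").read()
# open("Application", "wb").write(
#     add_load_dylib(data, b"@executable_path/FridaGadget.dylib"))
```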
5.1 Practical Implementation
The aforementioned steps required to embed a dynamic library are implemented by a series of open source projects: node_applesign [40], optool [41], and insert_dylib [42]. However, these three projects are only implemented for macOS. Since the most popular one is insert_dylib, we decided to analyze it and then port it to GNU/Linux.
The insert_dylib project provides a command line tool for inserting the “dylib load command” into mach-o files. It does so by appending an `LC_LOAD_DYLIB` load command to the end of the load command region of each mach-o in a (fat) binary file. Then it increments the mach-o header’s `ncmds` and adjusts its `sizeofcmds`. Furthermore, since the binary signature is no longer valid after the modification of the mach-o, insert_dylib provides a mechanism to strip the code signature blob.
In order to port insert_dylib, we first analyzed the code to identify the operations that were bound to macOS. The result of this analysis indicated that the macOS-exclusive components of the code were the mach-o declaration headers and the copyfile function.
To implement the file manipulation needed to embed a dynamic library, insert_dylib needs to be aware of the internal structure of the mach-o file format. This is done by including in the program the header files fat.h and loader.h, which declare the mach-o structures. Although these header files can be downloaded from the Apple web site 1 as they are open source, not all the declarations are included. When we downloaded all the available headers from Apple and tried to compile the software on GNU/Linux, errors regarding missing declarations were raised. To overcome this problem, we downloaded the header files provided in the cctools GitHub project [43] and included these at compilation time. The cctools open source project provides the GNU/Linux community with a means to cross compile macOS binaries. This includes all the mach-o declarations needed to port insert_dylib to GNU/Linux.
Finally, since copyfile.h was not included in the cctools GitHub project, the copyfile function could not be compiled. This issue was solved by re-implementing the functionality using GNU/Linux native operations.
Once the mach-o declarations were fixed and the copyfile function was re-implemented, we were able to successfully compile insert_dylib on GNU/Linux. The work done to port this tool can be downloaded from our GitHub repository 2.
1 https://opensource.apple.com/tarballs/cctools/
2 https://github.com/LeanVel/insert_dylib
6 Re-signing the iOS App Store Package
The third step of the embedding process is re-signing the IPA archive. This is needed because in the previous step the signature was invalidated by modifying the executable binary and adding a new dynamic library file to the IPA archive. Apple’s code signing mechanism is mandatory and is used by iOS to verify an application’s integrity and the developer’s identity [23]. If an application does not pass the signature verification, the iOS kernel will prevent execution of the application [44]. This ensures that the code was not tampered with between the release of the application and its installation.
When developers or organizations want to install their own applications without using the App Store, they need to sign the application and install the corresponding provisioning profile on their devices before the application can be started. A provisioning profile identifies the developer as a signer and indicates to iOS that applications signed by that developer are allowed to run on the device. This is not needed for applications in the App Store because those are already signed by Apple. The procedure of generating a provisioning profile is elaborated in Section 7.
When signing an application, Apple distinguishes three types of components: the nested code, the main executable, and the resources. The nested code comprises all the helper tools, dynamic libraries, plug-ins, frameworks, and other code that the main executable depends on. Everything in an application bundle that is not explicit code (nested code or the main executable) is a resource. Depending on the type of the component, the signature is generated and stored in a different way. First the nested code is recursively signed. This is done by signing the corresponding files at the deepest level of the dependency tree and then continuing upwards, because the signature of a nested code file is used while signing a file higher in the dependency tree. The result of this process is stored in the file `_CodeSignature/CodeResources` within the IPA archive. Next, all the resources are individually signed and the signatures are stored alongside the nested code signatures. Finally, the main executable file is signed. However, the way this signature is generated differs significantly from the rest of the files.
For each mach-o file embedded in the main executable file, the signature is calculated and stored within that mach-o file using the `LC_CODE_SIGNATURE` load command [45]. This mach-o signature consists of a series of blobs, each having a special purpose. The first blob is the “Code Directory”; in this directory the hashes of all the file pages are stored in individual slots. Moreover, the hashes of auxiliary data, such as the entitlements and the resource directory, are added as special slots within the directory. Next, the signature contains the “Requirement set” blob. This blob contains statements that will be used by iOS at the moment of verifying whether the code is validly signed and satisfies the constraints of the requirement [46]. The next blob is the “Entitlement” blob, which consists of the entitlements granted to the signed executable file. This blob will be used by iOS to decide whether to grant access to system resources. The last blob is the “Blob wrapper”, where the signature of the aforementioned blobs is stored together with the corresponding certificates. Listing 1 depicts an example of the code signing blob embedded in an executable file.
```
Blob at offset: 6805776 (38521 bytes) is an embedded signature of 38521 bytes, and 4 blobs
Blob 0: Type: 0 @44: Code Directory (33431 bytes)
    Version:     20200
    Flags:       none (0x0)
    CodeLimit:   0x67d910
    Identifier:  com.rbtdigital.Battery-Life (0x34)
    CDHash:      8d7546dea0858cd773bb57b50542cf6923b6c39b (computed)
    # of Hashes: 1662 code + 5 special
    Hashes @191 size: 20 Type: SHA-1
    Entitlements blob:    78560f9a3aad787ebe5d177e2da8a87fc9bcd1ab
    Application Specific: Not Bound
    Resource Directory:   3cb738270c6d2ab4d46469454eeb1146147a4d7c
    Requirements blob:    c56c48072543843ba0d272cc635d6b8bbc3c5f14
    Bound Info.plist:     817672dee672300d34c11debe708b005dd277c939
    Slot 0 (File page @0x0000): f1f2c9fb0bef596de47db865d078783ba8327747
    Slot 1 (File page @0x1000): 958eafbf8bc9821c91f30f3206c2783a763e22ae
    ...
    Slot 1660 (File page @0x67c000): f342b2a4f84d69d21fda97056e4c66449c24900
    Slot 1661 (File page @0x67d000): 61c41b459b8774ee4d89423748173d4eab3f7
Blob 1: Type: 2 @33475: Requirement Set (208 bytes) with 1 requirement:
    0: Designated Requirement (128, 168 bytes): SIZE: 168
       Ident: ("<App Identifier>") AND Apple Generic Anchor
Blob 2: Type: 5 @33683: Entitlements (474 bytes)
Blob 3: Type: 10000 @34157: Blob Wrapper (4364 bytes) (CMS (RFC3852) signature)
    CA: Apple Certification Authority        CN: Apple Root CA
    CA: Apple Worldwide Developer Relations  CN: Apple Worldwide Developer Relations Certification Authority
    CA: Apple Certification Authority        CN: Apple Root CA
    CA: Apple Certification Authority        CN: Apple Root CA
```
Listing 1: Signature blob embedded in mach-o file as shown by Jtool
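The per-page slot hashes in the Code Directory above can be reproduced with standard hashing. A Python sketch, assuming the SHA-1 hash type and the 4096-byte page spacing reported in Listing 1:

```python
import hashlib

PAGE_SIZE = 0x1000  # 4 KiB pages, matching the slot spacing in Listing 1

def code_directory_slots(executable_path, code_limit):
    """Hash every page of the binary up to code_limit with SHA-1,
    mirroring the per-page slots stored in the Code Directory."""
    slots = []
    with open(executable_path, "rb") as f:
        remaining = code_limit
        while remaining > 0:
            page = f.read(min(PAGE_SIZE, remaining))
            slots.append(hashlib.sha1(page).hexdigest())
            remaining -= len(page)
    return slots

# For the binary in Listing 1 (CodeLimit 0x67d910) this yields 1662 slots.
```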
6.1 Code Signing Implementation
For iOS developers who use the Apple Xcode framework, the code signing mechanism is transparent: Xcode uses the tool codesign to perform the code signing. Many open source projects, such as node-applesign [40], iReSign [47], and Sigh (part of the fastlane project [48]), implement code signing. However, most of these projects are built on top of the macOS native tool codesign. Nonetheless, there are two projects that implement Apple code signing without the use of codesign.
The first project is Jtool [49]. This tool re-implements the functionality of a set of macOS native tools such as otool, codesign, atos, and dyldinfo. Since it is compiled for a variety of operating systems, including GNU/Linux, it was relevant to our research project. Although it implements some features of codesign, there are still limitations in the way code signing is done. The most important limitation is that it does not generate the “Requirement set” blob needed to evaluate the signature on the device. Moreover, it only supports signing of executable files, which means that another tool is needed for the nested code and the resources. Since the source code of this software is not available, we raised a feature request with the developer about the empty “Requirement set” blob. This request was answered, and the issue will be addressed in the next release of the tool. Nevertheless, as this tool provides a clean view of the different blobs involved in the signature, it was a key component in understanding the internals of the code signing process.
The second project is iSign [50]. This open source software is designed to re-sign iOS applications without proprietary Apple software. The tool takes an IPA file or application bundle as input and re-signs the complete application. In order to do this, it requires that the signer’s private key, certificate, and provisioning profile are correctly set up on the host. Since this tool was implemented to re-sign applications, it does not support unsigned binaries: iSign works by overwriting the existing signature with the new one [51], reusing parts of the old signature such as the load command in the mach-o file that points to the code signature structure.
As explained in Subsection 5.1, after modifying the executable file we stripped out the existing signature as it had become invalid. This produces an unsigned executable file that cannot be signed by iSign. We overcame this limitation by using a recent fork of iSign [52] that is able to sign an executable file without reusing any pre-existing structure; in other words, it implements signing from scratch. Even though this version of iSign supports unsigned binaries, there was still a problem when signing application plugins, also known as application extensions (appex). An app extension, such as a widget, is used to provide extra functionality to an application and is frequently used by iOS developers [53]. As a workaround, this could have been solved by removing the “Plugins” directory from the IPA. Instead, we troubleshot the issue and contacted the developers. After we provided error traces and further testing of the software, the issue was fixed.
7 Provisioning Profile Generation
As mentioned in Section 6, a valid provisioning profile is required in order to sign an application. A provisioning profile is a file signed by Apple that lists the certificates, devices, and the entitlements granted to applications [44]. When this provisioning profile is installed on one of the listed devices, it will present to the operating system the certificates that are allowed to sign executables.
To generate a provisioning profile, many steps are needed, which involve generating key pairs, issuing certificates, creating app IDs, and registering devices. Depending on the Apple developer program membership, there can be some limitations on the provisioning profile [54]. Apple distinguishes three membership options. The first is the free membership, which includes everybody with an Apple account who is not enrolled as a developer. With a free membership it is possible to create a provisioning profile, which has to be renewed every 7 days. In addition, the user has to provide a list of device UUIDs on which the provisioning profile will be deployed. The second membership is the individual developer. This membership requires a yearly fee and allows the creation of provisioning profiles that expire after 365 days. The last membership is the enterprise developer. This paid membership enables companies to generate provisioning profiles that do not require a list of devices. Table 1 depicts an overview of the relevant differences between the memberships.
An overview of the provisioning profile generation process is shown in Figure 4. The grey components represent the input requirements and the process itself is visualized in blue. In order to generate a provisioning profile, the developer first needs to authenticate to Apple using a valid AppleID and password. Next, a 2048-bit RSA key pair needs to be generated by the developer, which consists of a private key and a public key. The private key needs to be stored by the developer and is never sent to Apple. The key pair is then used to generate a Certificate Signing Request (CSR) [55]. A CSR is a combination of the public key and identifying information such as the organization name, common name (domain name), locality, and country. The CSR is then submitted to Apple, which acts in this case as the Certificate Authority. This CSR is used by Apple to generate an identity certificate for the developer that proves the ownership of the public key. After the CSR has been submitted, the Universally Unique Identifier (UUID) of an iDevice needs to be registered. These are the devices on which the provisioning profile will be installed. It is important to note that this is not necessary for enterprise developers, since this type of developer is not restrained to a limited number of devices.
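The key pair and CSR can be generated with standard cryptographic tooling. A sketch using the Python cryptography package, with illustrative subject fields:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key pair; the private key stays with the developer.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CSR combines the public key with identifying information.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"Example Developer"),
        x509.NameAttribute(NameOID.COUNTRY_NAME, u"NL"),
    ]))
    .sign(key, hashes.SHA256())
)

with open("developer.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption()))
with open("developer.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```

The resulting developer.csr is what gets submitted to Apple in the flow of Figure 4.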
Next, the developer needs to generate an **AppID** value, in order to register the application. This can be done by using an explicit name or a “wildcard” name. When using a wildcard name the provisioning profile can also be used for other applications on the registered device. The wildcard functionality is only available for individual and enterprise developer accounts, free Apple users need to generate a new provisioning profile for every new application. Finally, the provisioning profile can be requested and used for signing.
<table>
<thead>
<tr>
<th>Membership Type</th>
<th>Expiration of provisioning profile</th>
<th>Devices in provisioning profile</th>
<th>Access to Developer Portal</th>
</tr>
</thead>
<tbody>
<tr>
<td>Free Apple account</td>
<td>7 days</td>
<td>List of Devices UUIDs</td>
<td>No</td>
</tr>
<tr>
<td>Individual Developer</td>
<td>365 days</td>
<td>List of Devices UUIDs</td>
<td>Yes</td>
</tr>
<tr>
<td>Enterprise Developer</td>
<td>365 days</td>
<td>-</td>
<td>Yes</td>
</tr>
</tbody>
</table>
Table 1: Relevant differences between Apple developer subscriptions
7.1 Individual / Enterprise Developer account
As shown in Table 1, individual and enterprise developers have access to the Apple Developer Portal. This portal can be accessed via the web browser and can be used to generate the provisioning profile as described in Figure 4. It is also possible to directly use the Apple developer APIs used by the Apple Developer Portal and Xcode. We identified one open source project able to use these APIs directly through an HTTP client: the **fastlane** project [48] provides a Ruby library called **spaceship** which is able to do this. Using this library we developed a script called genProvisioningProfileDev which can automatically perform all the steps mentioned in Figure 4. First the API https://idmsa.apple.com is used to authenticate and get a valid session. Then the API https://developerservices2.apple.com is used to register devices. Submitting the CSR, registering the application, and requesting the provisioning profile is done using the API https://developer.apple.com. Currently, the script can only be used for individual and enterprise developer accounts, because **spaceship** can only handle accounts that are enrolled in a team with an active membership, thereby having a TeamID.
7.2 Free Apple account
No open source tools were identified which could be used to retrieve the provisioning profile using a free Apple account. Moreover, the online developer portal is not available for users with a free account. The only tool besides Xcode which is able to sign applications using a free Apple account is Cydia Impactor. Therefore, we explored the way Xcode 8.3.3 (the current version at the moment of writing this report) and Cydia Impactor generate the mobile provisioning file. In order to do so, we used the Burp Suite to set up a proxy and monitor the incoming and outgoing traffic. In addition, we installed Burp’s Certificate Authority (CA) certificate as a trusted root on the computer, thereby performing a man-in-the-middle attack which allowed us to inspect the encrypted HTTP traffic.
Only when Xcode is opened for the first time does the user have to authenticate using a valid AppleID and password. To capture this moment we created a new AppleID and tried to authenticate for the first time. However, while the Burp proxy was enabled, Xcode refused the authentication and alerted that “an unknown error has occurred”. Since authentication took place without issues after disabling the proxy, our hypothesis is that Xcode 8 uses certificate pinning during the authentication. Afterwards, we re-enabled the proxy and generated a provisioning profile through Xcode. This resulted in a number of calls to the Apple API developerservices2.apple.com. First the TeamID is obtained. Because Apple requires that every developer is part of a team, Xcode enrolled our test account into the Xcode Free Provisioning Program and added it to a new team called “<firstname> <last name> (personal team)”. Next, all active development certificates and the currently associated applications are listed. If a new application needs to be associated, a new AppID is generated and associated with the AppleID. Finally, the mobile provisioning file is downloaded using the TeamID and AppID. Since we could not capture the network traffic during the authentication process, it is unknown how the development certificates are generated and how Xcode enrolls new Apple accounts in the Xcode Free Provisioning Program.
In order to uncover the missing information, we also monitored the traffic of Cydia Impactor in the same way we monitored Xcode. Cydia Impactor uses the same APIs used by Xcode to register and retrieve the information needed to download the provisioning profile. First the user has to provide a valid AppleID and password. Using this information in combination with a static AppIDKey value, the authentication takes place via the idmsa.apple.com API. It is important to note that the AppIDKey value is bound to the application that is authenticating to the API, and is not the identifier of an iOS application; in this case the application is Cydia Impactor. The result of the authentication procedure is a 575-character string called myacinfo. This string is subsequently used as an authentication token each time a message is sent to the developerservices2.apple.com API. This API is used for submitting the CSR, registering the device, registering the application, and retrieving the provisioning profile. Listing 2 depicts an example of an HTTP header sent to the developerservices2.apple.com API. This listing shows that Cydia Impactor presents itself as an Xcode version 7.0 client.
```
Host: developerservices2.apple.com
Content-Type: text/x-xml-plist
X-Xcode-Version: 7.0 (7A120f)
Cookie: myacinfo=DAWTKNV2b444b2323aa07dff9d559ae6fa86b63abfa653e889280ce69...
Accept-Language: en-us
Accept: text/x-xml-plist
Content-Length: 1145
Connection: close
User-Agent: Xcode
```
Listing 2: Example header of a captured Cydia Impactor HTTP packet while generating a mobile provisioning profile
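A request of this shape could be replayed from GNU/Linux with standard HTTP tooling. The following Python sketch is hypothetical: the service path and plist payload are placeholders, since the concrete values depend on the API call being made:

```python
import plistlib
import requests

def developer_services_call(myacinfo, service_path, payload):
    """POST a plist-encoded request to developerservices2.apple.com,
    authenticated with the myacinfo token from idmsa.apple.com.
    service_path and payload are hypothetical placeholders."""
    headers = {
        "Content-Type": "text/x-xml-plist",
        "X-Xcode-Version": "7.0 (7A120f)",
        "Accept": "text/x-xml-plist",
        "Accept-Language": "en-us",
        "User-Agent": "Xcode",
    }
    response = requests.post(
        "https://developerservices2.apple.com" + service_path,
        headers=headers,
        cookies={"myacinfo": myacinfo},
        data=plistlib.dumps(payload))
    return plistlib.loads(response.content)
```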
Using the packets captured from Xcode and Cydia Impactor, we implemented a bash script called genProvisioningProfileFree that mimics the provisioning profile generation process for free Apple accounts. This helper script uses the AppleID and password of the user, in combination with the UUID of the iDevice, to request the provisioning profile.
The helper scripts implemented for both developer and free Apple accounts can be downloaded from our GitHub repository 3.
3 https://github.com/LeanVel/iInject
8 Installing the iOS App Store Package
The final step of the embedding process is installing the modified IPA on the iDevice. Communication between the host machine and an iDevice is generally provided by iTunes. This software runs on a macOS or Windows machine and is in charge of establishing the connection by means of the lockdown protocol. The lockdown protocol provides pairing, activation, and FairPlay certificate handling, and delegates communications to other services [56] [57]. lockdown runs on port 62078 and can accept connections via USB or via WiFi over TCP.
When connecting the iDevice via USB, the general USB protocol is used to provide generic access to the iDevice. On top of this protocol the USBmux protocol provides the multiplexing of several TCP connections over one USB pipe [58] [59].
After the lockdown service establishes the pairing, other protocols are used to provide access to different areas of the iDevice. For example the AFC (Apple File Connection) protocol can be used to exchange files between the iDevice and iTunes. Another example is the installation_proxy protocol, which is used to install and list applications. All the protocols are run as daemons on the iDevice and a client program is used on the host machine to connect with the corresponding daemons. Jonathan Zdziarski has published an overview of the known protocols involved [56].
8.1 Deployment Implementation
Besides iTunes itself, the aforementioned protocols are also implemented by the libimobiledevice open source project [60]. libimobiledevice is a cross-platform software library that supports Windows, macOS, and GNU/Linux. This library allows access to the filesystem and can be used to retrieve details about the connected device, create backups, manage installed applications, and synchronize music, videos, address books, calendars, notes, and bookmarks from and to the iDevice. As shown in Figure 5, the aforementioned protocols are implemented by the libimobiledevice library.

Figure 5: Overview of communication between iDevice and Host Machine
The libimobiledevice library only implements the protocols. To make use of them, the project provides additional tools built on top of the library. For instance, ideviceinstaller is used to install, upgrade, uninstall, and enumerate installed applications. This tool interacts with the installation_proxy daemon of an iOS device.
Another well known tool for installing applications is Cydia Impactor [61]. This is a cross-platform tool that supports Windows, macOS, and GNU/Linux. The main differences from ideviceinstaller are that Cydia Impactor is closed source, provides only a Graphical User Interface (GUI), and can also sign applications besides installing them. By running the Linux `strings` command on the Cydia Impactor binary, it was possible to see that Cydia Impactor also uses the libimobiledevice library to communicate with the iDevice and to install applications. Even though Cydia Impactor can sign applications, it does so using “Entitlements” that do not allow application debugging. If an application does not need to be started in debug mode, this is not an issue, but it may impose a severe limitation for applications that need to be started in this mode. An example of a scenario where debug mode is needed is when the Frida Gadget dynamic library is embedded in an application. Although the Gadget will be loaded when the application is started, to begin the instrumentation of the application the Gadget first needs to attach to it. If the application is not started in debug mode, iOS will not allow any process to attach to the application. Since the goal of our project is to embed dynamic libraries like the Frida Gadget, Cydia Impactor is not a suitable solution. Therefore we use ideviceinstaller to install the modified IPA on the iDevice.
8.2 Running the modified application
Before an application can be run, the provisioning profile needs to be deployed to the iDevice. The tool ideviceinstaller will automatically install this profile, which makes it possible for iOS to run the security checks needed before an application is allowed to run. If the installation succeeds, the app can be launched by tapping the application icon on the device.
As mentioned before, some dynamic libraries, such as the Frida dynamic library, need the application to be started in debug mode [62] [8]. In order to activate this mode, the debugging symbols need to be loaded on the iDevice first. These can be loaded by mounting the developer disk image for the right iOS version. The `DeveloperDiskImage.dmg` file can be copied from any macOS system with Xcode installed; it can be found under `/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/<ios version>/`. The file can also be downloaded from various sources on the internet. After this file is retrieved, macOS is not needed anymore. On GNU/Linux, the tool `ideviceimagemounter`, part of the libimobiledevice project, can be used to mount this image on the iDevice. After the image is mounted, the application can be started in debug mode by using the tool `idevicedebug`, also part of the libimobiledevice project.
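These deployment steps can be scripted from GNU/Linux. A sketch shelling out to the libimobiledevice tools; file names and the bundle identifier are illustrative:

```python
import subprocess

# Install the modified IPA via the installation_proxy daemon.
subprocess.run(["ideviceinstaller", "-i", "Application-patched.ipa"],
               check=True)

# Mount the developer disk image so the debugging symbols are available.
subprocess.run(["ideviceimagemounter",
                "DeveloperDiskImage.dmg",
                "DeveloperDiskImage.dmg.signature"], check=True)

# Launch the application in debug mode so the embedded gadget can attach.
subprocess.run(["idevicedebug", "run", "com.example.application"],
               check=True)
```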
9 Automation
In order to answer our research question, we developed a proof of concept tool which we call iInject. This command line tool takes as input an IPA file and a dynamic library file and performs the application modification, code signing, and deployment in an automated fashion. Figure 6 presents a diagram of the inner workings of iInject. Since it is not possible to acquire the application from a non-jailbroken device, this step of the process is not included in iInject; therefore, an unencrypted IPA is required as part of the input.
First of all, iInject verifies whether a valid signing identity and provisioning profile are set up on the host system. In other words, it checks if there is a valid private key with the corresponding certificate signed by Apple, and it verifies that the provisioning profile is not expired and includes the target device in the device list. If this check fails, iInject suggests that the user generate a provisioning profile using one of the helper scripts introduced in Section 7.
Once the provisioning profile is correctly set up, iInject uncompresses the IPA archive in a work directory. After the IPA is uncompressed, the tool copies the selected dynamic library file into the “<application name>.app” directory. Then, the program `insert_dylib` is called to insert the `LC_LOAD_DYLIB` load command into the application’s executable. Since the modification of the executable invalidates the old signature, we use the “code signature stripping” feature of `insert_dylib` to remove the old signature from the binary.
The next step is signing the application with the installed provisioning profile. Before signing, iInject creates a new IPA file by compressing the “Payload” directory, which by now contains a modified executable and the selected dynamic library file. When the new IPA is ready, the program `iSign` is called. As explained in Subsection 6.1, iSign signs the application’s nested code, resources, and main executable file. Moreover, it adds to the IPA archive the “embedded.mobileprovision” file required by iOS to verify the code signature. This file is the provisioning profile used during the signing process.
Finally, iInject deploys the new IPA to the target device. This is done via the program `ideviceinstaller`. This program implements the protocols needed to communicate with the device, pushes the IPA file, and launches the installation proxy on the device to install the application. The installation instructions, as well as additional technical details of the tool, can be found in our GitHub repository 4.
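Putting the steps together, the core of an automated pipeline like iInject can be sketched in a few lines of Python. The tool invocations mirror the flow described above but are illustrative; provisioning-profile checks and error handling are omitted:

```python
import shutil
import subprocess
import zipfile

def embed_and_deploy(ipa_path, dylib_name, app_name):
    # 1. Uncompress the IPA archive into a work directory.
    with zipfile.ZipFile(ipa_path) as archive:
        archive.extractall("work")
    app_dir = f"work/Payload/{app_name}.app"

    # 2. Copy the dynamic library into the app bundle, then patch the
    #    executable; insert_dylib also strips the old code signature.
    shutil.copy(dylib_name, app_dir)
    subprocess.run(["insert_dylib", "--strip-codesig", "--inplace",
                    "--all-yes", f"@executable_path/{dylib_name}",
                    f"{app_dir}/{app_name}"], check=True)

    # 3. Re-create the IPA from the Payload directory.
    shutil.make_archive("patched", "zip", "work")
    shutil.move("patched.zip", "patched.ipa")

    # 4. Sign everything with iSign (credentials assumed to be set up),
    #    then deploy via the installation proxy.
    subprocess.run(["isign", "-o", "signed.ipa", "patched.ipa"], check=True)
    subprocess.run(["ideviceinstaller", "-i", "signed.ipa"], check=True)
```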
To verify the correctness of iInject we tested the tool using two non-jailbroken devices: an iPhone 6s running iOS 10.3.2 and an iPad Mini 3 running iOS 10.2.1. Moreover, we used ten different IPAs during the tests. These applications were acquired either from a jailbroken iPhone running iOS 10.2 using Clutch or downloaded from the iosninja.io website 5, which provides IPA files without FairPlay protection.
---
4 https://github.com/LeanVel/iInject
5 https://iosninja.io/ipa-library
In Table 2 we present the BundleIDs of the applications, their origin, and the result of the tests. Both IPAs that did not pass the test had problems in the code signing step. Our hypothesis is that some of the framework files are in a format that is not supported by iSign. Therefore, we have escalated the issues to the iSign developers to find out the root cause of the problem.
<table>
<thead>
<tr>
<th>Name</th>
<th>Application Bundle ID</th>
<th>Origin</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>BatteryLife</td>
<td>com.rbtdigital.Battery-Life</td>
<td>Clutch</td>
<td>Success</td>
</tr>
<tr>
<td>QR Scanner</td>
<td>com.wenstudio.free.f3.Scanner6</td>
<td>Clutch</td>
<td>Success</td>
</tr>
<tr>
<td>QR Free</td>
<td>com.ihandysoft.barcode.qr.free</td>
<td>Clutch</td>
<td>Failure</td>
</tr>
<tr>
<td>9292</td>
<td>nl.9292.9292</td>
<td>Clutch</td>
<td>Success</td>
</tr>
<tr>
<td>Wikipedia</td>
<td>org.wikimedia.wikipedia</td>
<td>Clutch</td>
<td>Failure</td>
</tr>
<tr>
<td>YouTube++</td>
<td>com.google.ios.youtube.noads</td>
<td>Web Site</td>
<td>Success</td>
</tr>
<tr>
<td>FilesBrowser</td>
<td>com.highcaffeinecontent.Files</td>
<td>Web Site</td>
<td>Success</td>
</tr>
<tr>
<td>Kodi 16 Jarvis</td>
<td>org.xbmc.kodi-ios</td>
<td>Web Site</td>
<td>Success</td>
</tr>
<tr>
<td>BatteryLifeApp</td>
<td>com.rbt.batteryLifeApp</td>
<td>Web Site</td>
<td>Success</td>
</tr>
<tr>
<td>FlappyBird</td>
<td>com.dotgears.flap</td>
<td>Web Site</td>
<td>Success</td>
</tr>
</tbody>
</table>
Table 2: Application details and test results
10 Discussion and Future Work
In this project we have shown that the process of embedding a dynamic library into an existing iOS application can be performed from a GNU/Linux system in an automated way. Nevertheless, some limitations need to be addressed.
First, as explained in Subsection 4.1, to acquire the target IPA we need a jailbroken device or a non-jailbroken device running iOS 8 or lower. Currently, the last jailbreak was released in January 2017 for iOS 10.2, and this could be one of the last jailbroken iOS versions [63]. The reason for this is that it has become increasingly difficult to crack an iOS release due to the security enhancements rolled out by Apple. Besides, even when a vulnerability is found that could be used for a jailbreak, this vulnerability is often sold to Apple or other high-paying third parties. In the context of this project, we assumed that the security researcher interested in embedding a dynamic library has the means to acquire the IPA file. Hence, this limitation could also be overcome by contacting the developers of the application and retrieving the IPA that way.
Secondly, the scope of this project did not cover the development of dynamic libraries. This is not always an issue, since projects such as Frida and Cycript provide pre-compiled dynamic libraries. However, in order to develop and compile self-written dynamic libraries from a GNU/Linux system, further research needs to be done to port the full toolchain for cross-compilation.
Thirdly, we implemented a helper script called `genProvisioningProfileFree` that is able to retrieve the provisioning profile for free Apple accounts. Since we could not capture the authentication traffic from Xcode version 8.3.3, we mimicked the inner workings of Xcode version 7.0 in this script. If Apple decides to stop supporting Xcode 7.0, the script will no longer be functional. For the script that we developed to generate provisioning profiles using individual and enterprise developer accounts, this is not a limitation since it uses the Apple Developer Portal.
Furthermore, iInject and all the helper scripts were developed as a proof of concept. This means that, to keep the implementation simple, we did not take all scenarios into account and assumed that the requirements of our tool were properly fulfilled. However, we implemented basic checks to guide the user of iInject towards the right setup. Additionally, the tool can be improved to make it more secure and efficient.
Although the tool was designed to embed any dynamic library into any iOS application, only the Frida Gadget dynamic library was tested during the implementation. Nonetheless, there are no indications that the tool would fail when embedding another dynamic library.
Finally, the tool was tested with two non-jailbroken devices running iOS 10.2.1 and 10.3.2. Since all the steps performed are independent of iOS 10, from a theoretical point of view iInject should work with lower iOS versions. During the development of iInject, ten different iOS applications were used to verify the behaviour of the tool. In order to improve the robustness of the tool, a more representative population of iDevices, iOS versions, and IPAs should be tested.
11 Conclusion
Our research focused on automating, from GNU/Linux, the process of embedding dynamic libraries into iOS applications. This process is mostly implemented by Apple native tools, and little is documented about their inner workings. Therefore, to accomplish our goal, we performed a theoretical analysis of the current state of the art, identified the different steps of the embedding process, explored ways to implement each of the steps on GNU/Linux, and implemented a proof of concept that executes the steps in an automated fashion.
To begin with, we identified four steps in the embedding process: application acquisition, executable modification, application signing, and application deployment. Then, we studied in detail the different files and procedures involved in each of the steps. This includes the IPA archive structure, the mach-o binary file format, the code signing procedure, the provisioning profile generation, and the communication protocol between a host and an iOS device. The theoretical analysis revealed to us the requirements needed to embed a dynamic library into an already compiled iOS application.
After the theoretical analysis, we identified the tools that could implement each of the aforementioned steps. During this investigation many tools were found, but most of them were implemented only for macOS. With the knowledge gained in the previous analysis, and by using additional open source projects, we ported and adapted the tools needed to implement all steps of the embedding process from GNU/Linux. Furthermore, by analyzing the network packets exchanged between Xcode and the Apple Developer Portal API, we identified the requirements and procedure to generate provisioning profiles without a macOS system.
Finally, we explored different ways of automating the complete embedding process. By leveraging the functionality of the tools identified in the practical investigation, we implemented a command line tool called iInject. This proof of concept takes as input a target IPA and a dynamic library file and then performs the executable modification, application signing, and application deployment to the iDevice. Since the application acquisition requires a jailbroken device, we did not integrate this step into the proposed automated solution, thereby allowing iInject to work with non-jailbroken devices. To fine-tune the parameters passed to the underlying tools, we performed several tests on different iOS versions with different IPA files. Furthermore, we collaborated with the corresponding tools’ developers to fix features and functionality needed for this project. In addition to implementing iInject, we developed two standalone scripts capable of generating a provisioning profile given either a valid free Apple account or a paid developer account.
To conclude, the process of embedding dynamic libraries into iOS applications can be performed from a GNU/Linux system. We have shown with our proof of concept that this process can be automated and can be executed on non-jailbroken devices.
References
[32] DigiDNA SARL. iMazing — iPhone, iPad & iPod Manager for Mac & PC. https://imazing.com/, June 2017. [Online; accessed 06-July-2017].
Appendices
A Clutch
In order to obtain an unencrypted .ipa, Clutch can be used. This tool can be downloaded from the GitHub repository: https://github.com/KJCracks/Clutch. Although building the binary requires macOS, pre-built binaries are released along with every major update: https://github.com/KJCracks/Clutch/releases/latest. Using the pre-built binary, all of the required steps can be executed from GNU/Linux. In order to communicate with the binary, iproxy can be used. iproxy is part of the usbmuxd project: https://cgit.sukimashita.com/usbmuxd.git/, which is also forked into the libimobiledevice project: https://github.com/libimobiledevice/usbmuxd. usbmuxd (USB multiplexing daemon) is a socket daemon that multiplexes connections over USB from and to iOS devices. usbmuxd can be built from source using the repository or installed through a package manager by installing the following two packages: libusbmuxd.x86_64 and libusbmuxd-utils.x86_64. In order to retrieve an .ipa file, we executed the following steps:
1. Download or build the Clutch binary.
2. Install OpenSSH onto the iDevice (e.g. using Cydia).
3. Connect the iDevice to the host machine via USB.
4. Start iproxy by running the following command in the terminal: iproxy 2222 22. This will forward all traffic from port 2222 to port 22 over USB.
5. Copy the Clutch binary to /usr/bin/ on the device by running the following command in a terminal on the host machine: scp /path/to/Clutch root@localhost:/usr/bin/
6. Open another terminal and connect to the iDevice by running ssh: ssh -p 2222 root@localhost
7. As shown in Listing 3, the command Clutch -i lists the installed apps and shows their bundleID.
8. As shown in Listing 4, the command Clutch -d <bundleID> retrieves the .ipa file of an app.
```
Host:/usr/bin root# ./Clutch-2.0.4 -i
Installed apps:
1: QR Code <com.wenstudio.free.f3.Scanner6>
2: 9292 <nl.9292.9292>
3: Battery Life: check device’s runtimes <com.rbtdigital.Battery-Life>
4: Termius - SSH Shell / Console / Terminal <com.crystalnix.ServerAuditor>
```
Listing 3: List of installed apps and their bundleID
```
Host:~ root# Clutch-2.0.4 -d com.wenstudio.free.f3.Scanner6
Zipping Scanner6.app
ASLR slide: 0x100010000
Dumping <Scanner6> (arm64)
Patched cryptid (64bit segment)
Writing new checksum
Finished dumping com.wenstudio.free.f3.Scanner6 in 2.6 seconds
```
Listing 4: Retrieving the .ipa of the QR-code scanner app
|
{"Source-Url": "http://ipv4.delaat.net/rp/2016-2017/p50/report.pdf", "len_cl100k_base": 13788, "olmocr-version": "0.1.53", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 64981, "total-output-tokens": 19076, "length": "2e13", "weborganizer": {"__label__adult": 0.0004892349243164062, "__label__art_design": 0.0005173683166503906, "__label__crime_law": 0.0007042884826660156, "__label__education_jobs": 0.0006871223449707031, "__label__entertainment": 0.00010257959365844728, "__label__fashion_beauty": 0.00027441978454589844, "__label__finance_business": 0.00029921531677246094, "__label__food_dining": 0.00023424625396728516, "__label__games": 0.0011320114135742188, "__label__hardware": 0.004024505615234375, "__label__health": 0.00025463104248046875, "__label__history": 0.00026726722717285156, "__label__home_hobbies": 0.00012105703353881836, "__label__industrial": 0.0003888607025146485, "__label__literature": 0.0002543926239013672, "__label__politics": 0.00023818016052246096, "__label__religion": 0.0004353523254394531, "__label__science_tech": 0.029083251953125, "__label__social_life": 9.709596633911131e-05, "__label__software": 0.018890380859375, "__label__software_dev": 0.94091796875, "__label__sports_fitness": 0.00023162364959716797, "__label__transportation": 0.0003676414489746094, "__label__travel": 0.00012254714965820312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 78014, 0.0535]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 78014, 0.2971]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 78014, 0.88864]], "google_gemma-3-12b-it_contains_pii": [[0, 238, false], [238, 2275, null], [2275, 3856, null], [3856, 7443, null], [7443, 9774, null], [9774, 12280, null], [12280, 14979, null], [14979, 17542, null], [17542, 21887, null], [21887, 23230, null], [23230, 25702, null], [25702, 29280, null], [29280, 32798, null], [32798, 36279, null], [36279, 37665, null], [37665, 40234, null], [40234, 44108, null], [44108, 47341, null], [47341, 49839, null], [49839, 52629, null], [52629, 55756, null], [55756, 57121, null], [57121, 60423, null], [60423, 63507, null], [63507, 66332, null], [66332, 69236, null], [69236, 72066, null], [72066, 75092, null], [75092, 75495, null], [75495, 78014, null]], "google_gemma-3-12b-it_is_public_document": [[0, 238, true], [238, 2275, null], [2275, 3856, null], [3856, 7443, null], [7443, 9774, null], [9774, 12280, null], [12280, 14979, null], [14979, 17542, null], [17542, 21887, null], [21887, 23230, null], [23230, 25702, null], [25702, 29280, null], [29280, 32798, null], [32798, 36279, null], [36279, 37665, null], [37665, 40234, null], [40234, 44108, null], [44108, 47341, null], [47341, 49839, null], [49839, 52629, null], [52629, 55756, null], [55756, 57121, null], [57121, 60423, null], [60423, 63507, null], [63507, 66332, null], [66332, 69236, null], [69236, 72066, null], [72066, 75092, null], [75092, 75495, null], [75495, 78014, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 78014, null]], 
"google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 78014, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 78014, null]], "pdf_page_numbers": [[0, 238, 1], [238, 2275, 2], [2275, 3856, 3], [3856, 7443, 4], [7443, 9774, 5], [9774, 12280, 6], [12280, 14979, 7], [14979, 17542, 8], [17542, 21887, 9], [21887, 23230, 10], [23230, 25702, 11], [25702, 29280, 12], [29280, 32798, 13], [32798, 36279, 14], [36279, 37665, 15], [37665, 40234, 16], [40234, 44108, 17], [44108, 47341, 18], [47341, 49839, 19], [49839, 52629, 20], [52629, 55756, 21], [55756, 57121, 22], [57121, 60423, 23], [60423, 63507, 24], [63507, 66332, 25], [66332, 69236, 26], [69236, 72066, 27], [72066, 75092, 28], [75092, 75495, 29], [75495, 78014, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 78014, 0.05202]]}
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
6508f71311086fe4ce0881e37ac2fab3efa0890e
|
A Rule-based Model for Customized Risk Identification and Evaluation of Task Assignment Alternatives in Distributed Software Development Projects
Ansgar Lamersdorf
University of Kaiserslautern
Kaiserslautern, Germany
a_lamers@informatik.uni-kl.de
Jürgen Münch
University of Helsinki
Helsinki, Finland
Juergen.Muench@cs.helsinki.fi
Alicia Fernández-del Viso Torre
Indra Software Labs
Madrid, Spain
afernandezde@indra.es
Carlos Rebate Sánchez
Indra Software Labs
Madrid, Spain
crebate@indra.es
Markus Heinz
University of Kaiserslautern
Kaiserslautern, Germany
m_heinz@informatik.uni-kl.de
Dieter Rombach
University of Kaiserslautern and Fraunhofer IESE
Kaiserslautern, Germany
Dieter.Rombach@iese.fraunhofer.de
Distributed software development imposes new project risks that are very different from the ones in collocated development and are overlooked easily. At the same time, they depend to a large extent on project-specific characteristics. Therefore, new methods for identifying these risks in distributed projects have to be developed. This article presents a model for identifying these risks at the beginning of a project. The model systematically captures experiences from past projects in a set of logical rules describing how project characteristics influence typical risks in distributed development. Thus, it is able to assess risks individually for each project. In addition, the model can be used for evaluating different task assignment alternatives, which makes it possible to allocate tasks systematically. An instance of the model was developed by applying qualitative content analysis to 19 interviews with practitioners. An evaluation using expert interviews showed that the risks identified by the model matched the actual experiences in 81% of the cases; of these, 40% had not been regarded at project start.
This is the peer reviewed version of the following article: Ansgar Lamersdorf, Jürgen Münch, Alicia Fernández-del Viso Torre, Carlos Rebate Sánchez, Markus Heinz and Dieter Rombach (2012), JOURNAL OF SOFTWARE: EVOLUTION AND PROCESS 2012; 24:661–675, doi: 10.1002/smr.576, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/smr.576/abstract. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving: http://olabout.wiley.com/WileyCDA/Section/id-817011.html
1. Introduction
The literature on distributed or global software development (GSD) is full of failure stories [1] [2] [3] caused by the inherent characteristics of GSD. In contrast to collocated software development, GSD consists of several teams that have to cooperate across various barriers (e.g., language and cultural differences, time zone barriers, different backgrounds with respect to expertise and knowledge). As a result, decreases in productivity [4] [5] or increases in the number of defects [6] have been reported. Their causes include communication problems between sites [7] [8] [9], insufficient knowledge at one of the sites [6] [10], mistrust between sites [11], or decreased workforce motivation due to the fear of job loss [12]. Therefore, they represent a set of GSD-specific project risks that might be relevant in addition to the typical project risks.
This indicates that risk management in global development should specifically address the risks caused by the distributed nature of GSD projects. In practice, however, GSD-specific risks are often not considered at project start [13]. Instead, when distributed projects are initiated, they often focus only on possible benefits such as low labor cost rates, while neglecting the problems of distributed development [14] [15].
Knowing risks and potential problems together with their typical causes already at project start would help to initiate customized countermeasures (e.g., planning extra resources, increasing training, or initiating travel) and therefore reduce risks. In addition, this knowledge could also be used to make systematic decisions regarding the distribution of work to different development sites: If it is known which characteristics might cause certain problems (e.g., low expertise level, high turnover rate) and if these characteristics are known for the involved sites (e.g., the expertise level and turnover rate at each site), the decision on how to allocate work can take into account the possibility of specific problems at each site and weigh this against the potential benefits (e.g., a low labor cost rate).
In this article, we present a model for identifying and predicting GSD-specific project risks as well as its instantiation. The instantiation is based on a detailed qualitative content analysis of 19 interviews with practitioners regarding their experiences in distributed and global software development. This was done by applying systematic coding to the transcriptions of the interviews. From the interview analysis, we derived a set of rules that describe under which circumstances certain problems can occur. The rules use a set of influencing factors as independent variables that represent characteristics of the software development project environment. This allows for assessing the risks individually for any project: By setting the influencing factors according to the project-specific characteristics and the distribution of work, every rule can be evaluated and the corresponding risk can be assessed.
The remainder of the article is structured as follows. First, related work in risk identification for GSD is discussed. Section 3 presents an overview of the model concepts. In Section 4, the interview study and content analysis method used for model development are described in detail. In addition, the model evaluation within a Spanish software development company is presented. Section 5 sketches the integration of the model into an approach for systematic task allocation, followed by a discussion of the results and an outlook on future work.
2. Related Work
According to the Project Management Body of Knowledge (PMBOK) [16], risk identification addresses the question “Which risks might affect the project?” In the following, we will only focus on the risk identification aspect of risk management.
There exists a large body of research on risk management and risk identification for software development projects [17] [18] [19]. However, these approaches usually do not consider distributed and global development. Consequently, we will concentrate on specific approaches for risk identification in GSD.
Prikladnicki et al. suggest a process for risk management that is integrated into processes for distributed software development [20][21]. This approach mainly delivers a generic process without giving guidelines on how to identify the specific risks based on project and site characteristics. It can thus be seen as a generic process framework that needs to be filled with specific risk models for GSD.
Ralyte et al. present such a specific model for GSD, which includes a fixed set of risks that may occur due to the distributed nature of GSD projects [22]. Their risk framework is divided into the two dimensions distance (geographical, temporal, socio-cultural, organizational, technological, knowledge) and activity (communication, coordination, control, development, maintenance). For each combination of these two dimensions, they list specific problems that may occur in a project. For applying the framework to specific projects, the project-specific risks and solutions have to be selected from the proposed list.
A similar approach is given by Ebert et al. [23]. Here, several problems and risks that may occur in GSD projects are categorized into four drivers of global distribution: efficiency, presence, talent, and flexibility. The approach names a large number of possible problems and mitigation strategies and it is left to the user to identify which problems might occur in a specific project.
Smite [24] presents a risk identification approach that is more suited for identifying specific risks for an individual project situation. It is possible to identify project-specific risks if the individual threats for each project are known. This approach relies on very detailed historical data and does not give an explanation on why a specific threat might lead to certain problems or consequences.
In general, current research on risk identification in GSD focuses very much on providing lists of possible problems and risks while giving no explanations or rules as to which problems might occur under which circumstances or in which environments. However, this is very important in assessing project-specific risks, as significant project risks can be the result of certain characteristics and constellations: Research shows, for example, that, depending on maturity, geographical distance between sites is seen very differently by project managers, from “no problem at all” to “a major barrier” [6], and that the consequences of staff turnover depend on the type of development project [25]. Therefore, we need risk identification approaches that consider the causal relationships between project characteristics and problems and that can assess risks individually for a specific project situation.
3. Model Overview
3.1. Model Goal
Based on our previous work [25], we state the following three assumptions:

1) The specific problems and risks of distributed and global software development are often not known or are underestimated at the beginning of GSD projects.
2) Most risks are not vital in all GSD projects but only under specific circumstances.
3) The assignment of work across sites in a distributed development project can have a significant impact on the risks of GSD projects.

Therefore, the following goal was formulated for the risk identification model:
**Goal:** Develop a model that can be systematically used for the identification of risks in specific global software development projects as well as for the risk analysis of different task assignment alternatives. The model should be based on previous experiences of practitioners in distributed and global development.
As the goal is to identify project-specific risks, the model has to use the characteristics of a project environment as input for its predictions. Therefore, we decided to build the model as a set of rules stemming from interviews with practitioners in GSD and the experiences reported there. It can thus be seen as a formalized collection of lessons learned from previous projects.
3.2. General Concept
A general overview of the model concept is given in Figure 1. The main idea of the model is that unsystematic lessons learned are documented in a semi-formal way that allows for automatic evaluation. These lessons learned describe problems in GSD and the circumstances under which these problems can occur or be prevented.

The model development process consists of a transformation of the lessons learned into the risk model, which consists of influencing factors, risks, and logical rules. This transformation is done by formalizing each lesson learned as a semi-formal rule and identifying the corresponding influencing factors and risks described in the rule. Given the risk model, the model application process is able to predict the risks for a specific project by evaluating all rules of the model. In order to do so, the project is characterized according to the identified influencing factors and the analyzed task allocation.
3.3. Risks, Influencing Factors, Rules, and their Evaluation
As described above, the main elements of the model are (a) risks, (b) influencing factors, and (c) rules.
In our model, risks describe possible problems that might occur in a GSD project. For every risk, there exists a short textual description of its possible negative impact on the project (e.g., communication problems can decrease productivity).
Influencing factors describe characteristics of the project environment that have an impact on the existence of a certain problem. Influencing factors can be of different types:
- Characteristics of remote sites (e.g., the process maturity or the staff experience at the site),
- Relationships between sites (e.g., the cultural difference or the existence of previous working experience between two sites),
- Task characteristics (e.g., the complexity of a task),
- Relationships between tasks (e.g., the coupling of tasks assigned to different sites), or
- Characteristics of the overall project (e.g., the time pressure or the type of project).
Task and site characteristics can be different for every involved site and might have to be elicited for each site individually. Relationships between sites have to be determined for every combination of two sites that collaborate in a project. Based on the experience of the authors, characteristics of the product to be developed or maintained might also be relevant. We model these characteristics indirectly as task characteristics.
In our model, we concentrate on software development within one organization; thus, we only look at the characteristics of different sites that act as subsidiaries. However, the model could
also be used for evaluating risks in an outsourcing scenario (i.e., global software development between independent companies). In such a case, additional issues arising (such as legal problems [26]) could be modeled either as an additional category of influencing factors or within the existing types (e.g., the existence of certain contracts could be described as a relationship between sites).
Figure 2. Exemplary model
Rules formalize how the influencing factors may impact the risks. The influencing factors in every rule can be combined using the logical operators ! ("not"), & ("and"), and | ("or"). Figure 2 shows a graphical illustration of exemplary rules. In every rule, the + operator indicates that the rule describes an increase in the risk, while the – operator describes a decrease.
The assessment of risks for a future global software development project is done in two steps: First, the project and the involved sites have to be characterized by the responsible project managers. Afterwards, the model is able to identify risks by evaluating the rules according to their project-specific relevance. Both steps are described in the following.
In order to reduce the complexity of the model, we decided to use only one ordinal five-level scale (very low – very high) for every variable, which is in accordance with other well-known estimation models such as COCOMO [27]. Thus, a project can be characterized by selecting one out of five values for every variable of the model. Project factors are assessed for the entire project; site and task factors are assessed for the involved sites; and relationships between sites are assessed individually for every two collaborating sites.
For evaluating the rules, we chose to derive a relevance value for every rule on the same five-level scale. Again, this decision was made to prevent the model from becoming too complex, which might reduce its applicability. The relevance value is calculated using two simple functions, num and eval.
The function num converts the ordinal value into a number and is defined as follows:
\[
\text{num(very low)} = 0; \quad \text{num(low)} = 1; \quad \ldots; \quad \text{num(very high)} = 4
\]
The function eval recursively applies Boolean logic to the rules:
\[
\begin{aligned}
\text{eval}(x \rightarrow r) &= \text{num}^{-1}(\text{eval}(x)) \\
\text{eval}(a \,\&\, b) &= \min(\text{eval}(a), \text{eval}(b)) \\
\text{eval}(a \mid b) &= \max(\text{eval}(a), \text{eval}(b)) \\
\text{eval}(\neg a) &= 4 - \text{eval}(a) \\
\text{eval}(\text{factor}) &= \text{num}(\langle\text{factor.value}\rangle)
\end{aligned}
\]
In other words, a rule is evaluated by recursively evaluating the combination of the values of its influencing factors. A logical “and” of two values is evaluated as the minimum (e.g., “factor1 & factor2 → risk X” with factor1="high" and factor2="low" has “low” relevance), a logical “or” of two values is evaluated as the maximum (e.g., “factor1 | factor2 → risk X”
with factor1="high" and factor2="low" has “high” relevance), and a logical “not” is evaluated as the complementary value (e.g., “!factor1 → risk X” with factor1="high" has “low” relevance).
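To make these evaluation mechanics concrete, the following minimal Python sketch implements the num and eval functions as defined above. The encoding of rules as nested tuples and all identifier names are our own illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the rule-evaluation logic described above.
# The five-level scale and the min/max/complement semantics follow
# the num/eval definitions in the text; the tuple-based rule
# encoding is an illustrative assumption.

SCALE = ["very low", "low", "medium", "high", "very high"]

def num(value):
    """Map an ordinal scale value to a number in 0..4."""
    return SCALE.index(value)

def num_inv(n):
    """Map a number in 0..4 back to the ordinal scale."""
    return SCALE[n]

def eval_expr(expr, factors):
    """Recursively evaluate a rule expression to a number in 0..4.

    expr is either a factor name (str) or a tuple:
    ("and", a, b), ("or", a, b), or ("not", a).
    factors maps factor names to ordinal scale values.
    """
    if isinstance(expr, str):
        return num(factors[expr])
    op = expr[0]
    if op == "and":
        return min(eval_expr(expr[1], factors), eval_expr(expr[2], factors))
    if op == "or":
        return max(eval_expr(expr[1], factors), eval_expr(expr[2], factors))
    if op == "not":
        return 4 - eval_expr(expr[1], factors)
    raise ValueError("unknown operator: " + op)

def relevance(expr, factors):
    """Relevance of a rule, expressed on the same five-level scale."""
    return num_inv(eval_expr(expr, factors))

# Reproducing the examples from the text:
factors = {"factor1": "high", "factor2": "low"}
print(relevance(("and", "factor1", "factor2"), factors))  # low
print(relevance(("or", "factor1", "factor2"), factors))   # high
print(relevance(("not", "factor1"), factors))             # low
```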
- Project characteristics: process maturity = medium
- Site relations: cultural difference = high; previous experience = high
- Rule relevance:
  1. Cultural differences →+ Communication problems (rule relevance: high)
  2. Cultural differences & !(previous experiences) →+ Lack of trust (high & !high = high & low = low)
  3. (Process maturity) …
Figure 3. Assessment of influencing factors and evaluation of rule relevance for two sites
Figure 3 gives an example of the assessment of the influencing factors and the evaluation of the rules. It can be seen that rule 2 has the lowest relevance (due to high previous experiences), whereas the relevance for rules 1 and 3 is high.
Based on the rule evaluation, the rules can then be ordered according to their relevance and the project-specific risks and problems can be identified.
### 3.4. Evaluation of Different Task Assignment Alternatives
The example given in Figure 3 already shows how risk assessment is dependent on work distribution: If the work was assigned to a different remote site with (for example) low cultural differences but also low previous experience, the rules would be evaluated differently and other risks might have more relevance. In addition, a project might have more than two sites involved, resulting in different relevance values of each rule for every interface between two sites. For example, one site might collaborate with both a site with high and a site with low cultural differences. Thus, we use the model for analyzing different task assignment alternatives and assessing the risk for each task individually. This is done according to the following algorithm:
- **For every task T1 within the analyzed project:**
  - **For every task T2 that is assigned to a different site than T1:**
    - **For every rule R:**
      - Evaluate R with respect to T1, T2, and the sites T1 and T2 are assigned to
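A minimal sketch of this nested evaluation is given below, reusing the relevance function from the sketch in Section 3.3; the dictionary-based representation of the assignment and of the site relations is an illustrative assumption:

```python
# Sketch of the task-pair evaluation loop above. It assumes the
# relevance() function from the earlier sketch and symmetric site
# relations (both orderings of a site pair present in the mapping).

def assess_assignment(tasks, assignment, rules, site_relations):
    """Evaluate every rule for every pair of tasks on different sites.

    assignment: task -> site
    rules: rule name -> rule expression
    site_relations: (site1, site2) -> factor dict for that interface
    Returns a list of (task1, task2, rule_name, relevance) entries.
    """
    results = []
    for t1 in tasks:
        for t2 in tasks:
            if t1 == t2:
                continue
            s1, s2 = assignment[t1], assignment[t2]
            if s1 == s2:
                continue  # no cross-site interface, rule not evaluated
            factors = site_relations[(s1, s2)]
            for name, expr in rules.items():
                results.append((t1, t2, name, relevance(expr, factors)))
    return results
```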
Figure 4 gives an example of how three tasks, assigned to three sites with different cultural differences, are evaluated with respect to one rule (cultural differences increase communication problems). The example shows that changing the assignment of task 2 from site 2 to site 3 would decrease the risk of having communication problems: In the current assignment (see Figure 4), the risk is of high relevance with respect to the interface between task 2 and task 3, and it is of medium relevance with respect to the interface between task 2 and task 1. If task 2 were to be assigned to site 3, the risk would have no relevance with respect to the interface between task 2 and task 3 (because both tasks would be assigned to site 3), and it would be of low relevance with respect to the interface between task 2 and task 1. (In a real-world risk model, however, other rules would have to be regarded, too).
As a result of this task-specific evaluation, a set of risks is identified for each task and each assignment individually. This can be used for analyzing different task assignment alternatives. After the task assignment decision, specific risks for every involved site can be identified by
summing up the risks for all tasks assigned to the site. These risks can then be communicated to the responsible site manager.

**Figure 4. Individual assessment of a risk for every task in a specific assignment**
### 3.5. Maintenance of the Model
Evaluation and feedback are of central importance in experience-based software project management [28]. Thus, a possibility to maintain and update the model based on experiences made while applying it should be included. Maintenance of the model can, in principle, be performed by adding new rules, removing old rules, or changing existing rules. In addition, rules might become more credible once they have been applied successfully. Therefore, we use the concept of storing experiences together with significance [29] and suggest adding a significance variable to every rule, describing its credibility. Initially, every rule has a significance of 2. After a project is finished, every rule is updated according to the following process (a sketch of this update is given after the list):
- **If the predictions of the rule were correct** (i.e., if rule relevance was “high” or “very high” and the risk actually occurred or if rule relevance was “low” or “very low” and the risk did not occur), increase its significance by 1.
- **If the predictions of the rule were incorrect**:
  - If an influencing factor not yet regarded caused the difference between actual and predicted risk, add the factor to the rule and reduce its significance by 1.
  - Otherwise, divide the significance of the rule by 2.
- **If the significance of the rule is below 0.5**, remove the rule from the model.
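A sketch of this update step is given below; the Rule record and its field names are illustrative assumptions, and the classification of a prediction as correct or incorrect (including how medium-relevance predictions are treated) is left to the caller, since the text does not specify it:

```python
# Sketch of the significance-update process described above.

from dataclasses import dataclass

@dataclass
class Rule:
    expression: object         # rule expression, e.g. the nested tuples above
    risk: str                  # name of the predicted risk
    significance: float = 2.0  # every rule starts with significance 2

def update_rule(rule, prediction_correct, missing_factor=None):
    """Update one rule after a finished project.

    prediction_correct: True if relevance was high/very high and the
        risk occurred, or relevance was low/very low and it did not.
    missing_factor: a newly identified influencing factor explaining
        an incorrect prediction, if one was found.
    Returns False if the rule should be removed from the model.
    """
    if prediction_correct:
        rule.significance += 1
    elif missing_factor is not None:
        # extend the rule with the newly identified factor
        rule.expression = ("and", rule.expression, missing_factor)
        rule.significance -= 1
    else:
        rule.significance /= 2
    return rule.significance >= 0.5
```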
In addition, new rules describing new risks can be added after every project, with an initial significance of 2. Figure 5 gives an example of different possibilities for updating a rule. In this model, the significance variable has two uses: On the one hand, it acts as a counter that indicates when a rule should be removed from the model because its correctness is too low. On the other hand, it can serve as a guideline for project managers, indicating the probability that a risk will actually occur.
The process for updating the model also implicitly handles the problem of contradictory rules: One conceivable case is that two contradictory rules are stored in the model (e.g., “time zone differences → - productivity” according to most experiences but “time zone differences → + productivity” according to publications on follow-the-sun). In this scenario, two cases are possible: Either the contradictions are based on other influences such as the type of project, the sites or regions involved, or the technologies used (e.g., time zone differences only increase productivity if the organization has a certain maturity level), or no additional influences can be found. In the first case, the update process would lead to two new rules with additional influencing factors that are not contradictory anymore (e.g., “(time zone differences) & !(process maturity) → - productivity” and “(time zone differences) & (process maturity) → + productivity”). In the second case, each application of the model in a new project would result in one rule increasing its significance level and the other rule decreasing it until finally one rule would be removed from the model.
4. Model Application and Evaluation
4.1. Instantiation Process
As a basis for the experiences captured in the model, we used a series of qualitative interviews with practitioners in global software development conducted between spring 2008 and fall 2009. Some of the interviews were conducted for a different study on task allocation practices in distributed development [25]. However, they also included questions on general experiences in distributed development and on factors causing problems in GSD.
In total, 19 interviews were conducted with experts from 14 different companies in the US, India, and Spain. The experts came from different domains such as aerospace, educational software, and custom software development for the financial industry. With the exception of two interviewees who reported from a researcher perspective, all of them came from management positions, with 9 being project managers and the others holding positions such as quality manager, product manager, or CIO. All interviewees had several (up to 20) years of experience in distributed and global software development projects.
While in most cases, only one interview was conducted per company, four interviewees came from Indra Software Labs (ISL). ISL was later also used for evaluating the risk model (however, in a different interview session).
Each interview lasted 30–75 minutes and was conducted either in person or over the telephone. With the exception of four interviews, all interviews were recorded and transcribed literally; for the other four, detailed notes were taken during and after the interview. This made it possible to analyze the interviews under various viewpoints. According to the basic model, the interviews were analyzed with respect to statements on risks in distributed development, on factors influencing these risks, and on rules that describe experiences regarding how the factors impact the risks positively or negatively. This was done using qualitative analysis [30] and coding [31]: The code categories “Risk” and “Influencing factor” were created and all interviews were searched for codes that fit into these categories. Afterwards, the interviews were analyzed again; where passages containing influencing factors and risks were identified, the experiences were extracted as rules combining the influencing factors and risks. Table 1 gives an example of how an interview passage was analyzed.
Table 1. Example of text analysis
<table>
<tbody>
<tr>
<td>Original Passage</td>
<td>"If you have a distributed team then it needs to be informed every time. If you have one single team, the management needs to inform only one team. [...] the more sites, the more the number of teams that have to be coordinated"</td>
</tr>
<tr>
<td>Identified Codes</td>
<td>Influencing factor: Number of sites. Risk: Coordination problems</td>
</tr>
<tr>
<td>Textual Rule</td>
<td>The more sites there are, the more people have to interact with each other in order to make any kind of decision and let the others know about it</td>
</tr>
<tr>
<td>Logical Rule</td>
<td>Number of sites →+ Coordination problem</td>
</tr>
</tbody>
</table>
The first analysis of the interviews revealed a very large number of findings (42 influencing factors, 140 identified rules). Therefore, some of the factors and rules were removed based on the experience and judgment of the authors, which represents a threat to validity. However, as this was done following a defined process and documented throughout, the decisions were made transparent and can be traced back to the original findings in the interview transcriptions. As a result, we identified 31 influencing factors and 9 problems, which are used in 46 rules formally describing the collected experiences on problem enablers and barriers in GSD.
4.2. The Instantiated Model
In the following, a short overview of the identified model will be given. In order to make the model applicable in a specific environment, it was further customized and simplified. Table 2 shows all identified factors. They are categorized into relationships between the sites, characteristics of the site, characteristics of the task, relationships between tasks, and project characteristics.
Table 2. Identified factors
<table>
<thead>
<tr>
<th>Type</th>
<th>Factor</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Relationships</td>
<td>Time zone difference</td>
<td>Differences between time zones at the sites</td>
</tr>
<tr>
<td></td>
<td>Language difference</td>
<td>Differences in language or in dialects of a language (e.g., UK – India)</td>
</tr>
</tbody>
</table>
An example of the identified rules was already given in Table 1. The complete set of identified rules is presented in the appendix together with a textual description of each rule. Based on the identified factors and rules, the model was implemented in Microsoft Excel.
### 4.3. Evaluation
In the following section, we describe the evaluation of the prototype model with respect to the context and evaluation process, the results, and the threats to validity.
#### 4.3.1. Context and Evaluation Process
The model was evaluated at Indra Software Labs (ISL), where four of the interviews for building the model were conducted (see Section 4.1). For the evaluation, we did not use the individual application of the model to different tasks and assignment alternatives as described in Section 3.4. Instead, we applied the model only to projects with one local and one remote site in order to simplify the evaluation process.
ISL is the network of software labs of Indra that develops customized software solutions for Indra’s markets. It has 20 development sites, half of which are located in Spain and the others
in Latin America, Slovakia, and the Philippines. Most of the software development projects at ISL are distributed either within Spain or globally. Therefore, there exists a lot of experience at ISL regarding working in GSD projects and related risks and problems.
We evaluated the model in interview sessions with five practitioners at ISL. Four of the interviews were conducted in person at an ISL site in Madrid, while the last one was done via videoconference with the interviewee located in Ciudad Real, Spain. The persons interviewed for the evaluation were different from the ones interviewed for the model development (see Section 4.1) and reported on different projects. This avoided the threat to validity that might have arisen from using the same experiences for model development and evaluation.
Of the five interviewees, three were project managers, one was a director at ISL (responsible for one business area), and one was working in the quality department. From their perspective, they all had insights into various distributed projects in different constellations and had several years of experience. They were thus highly experienced in distributed software development.
The evaluation process was done as follows:
1. A questionnaire was sent to the interviewees in advance, asking them to recall one specific historical distributed development project, to characterize it, and to identify values for the 23 influencing factors.
2. In the interview, the model was used and all factors were set to the values named by the interviewees in the questionnaire. This resulted in an evaluation of the 36 rules with respect to the project characteristics.
3. Every rule that the model identified as relevant (relevance “high” or “very high”) was presented to the practitioner (both as a logical rule and with its textual description), who was asked (a) whether the rule complied with the project experience (i.e., whether its described impact on risks and problems could be observed) and – in the event the rule did comply – (b) whether this rule was known at project start (i.e., whether the project manager was aware of the phenomenon described by the rule).
4. Finally, the interviewees were asked if they found the use of such a model helpful and whether they would like to use it for future projects.
4.3.2. Evaluation Results
Table 3 shows the results of the evaluation. It shows that on average, one third of the 36 rules were relevant for each historical project. This indicates again that only a subset of the phenomena and problems of GSD described in the literature can be applied to a specific distributed development project.
A wide majority of the rules (81.4%) that were predicted as relevant could actually be observed in the projects. However, some rules were identified as irrelevant, as they could not be observed in most of the projects: Rule 11 stated that a certain product size decreases the risk of losing intellectual property. This could not be confirmed by the practitioners because in their opinion, loss of intellectual property was never an issue at ISL, independent of project size. Rule 32 stated that if the coding phase is transferred to another site, project risks will decrease, since coding tasks usually come with very detailed specifications. This could not be confirmed, as the practitioners could report about various problems that also occurred when coding was transferred to another site. With the exception of these two rules, nearly all relevant rules could be confirmed in each of the five projects.
Of the rules that complied with the real project experience, 59.5% were considered at project start by the project management. This means that the project managers were aware of a majority of the experiences stored in the model. One reason for this might be the fact that the interviews were conducted with highly experienced project managers who were aware of most of the risks and problems in distributed development and were able to incorporate these
experiences into their project planning. In less experienced environments, this number would therefore presumably have been lower.
Table 3. Results of evaluation
<table>
<thead>
<tr>
<th>Project No</th>
<th># relevant rules</th>
<th># rules confirmed (from the relevant ones)</th>
<th># rules considered at project start (from the ones observed)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>14</td>
<td>12</td>
<td>8</td>
</tr>
<tr>
<td>2</td>
<td>16</td>
<td>12</td>
<td>5 out of 6</td>
</tr>
<tr>
<td>3</td>
<td>10</td>
<td>9</td>
<td>7</td>
</tr>
<tr>
<td>4</td>
<td>9</td>
<td>6</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>10</td>
<td>9</td>
<td>3</td>
</tr>
<tr>
<td>∑</td>
<td>59</td>
<td>48</td>
<td>25 out of 42</td>
</tr>
</tbody>
</table>
* The quality manager did not know for all rules if they were considered at project start or not.
However, a rate of 40.5% still demonstrates that a significant proportion of the experiences stored in the model were not systematically regarded at project start. In these cases, an application of the model at project start would probably have helped, as it would have drawn attention to the described risks and made it possible to consider them in project management and initiate countermeasures.
This hypothesis was also supported by the practitioners’ answers regarding the applicability of the model: All five interviewed persons stated that they found the model useful and would like to use it in future projects. Even the managers who had already considered most of the experiences stored in the model (e.g., in projects 1 and 3) found the model very helpful: They reported that it was sometimes difficult for them to formulate their experiences and their predictions about possible risks and problems in meetings and discussions with other managers. In their opinion, such a model would help them demonstrate and communicate their experiences to others. Other managers stated that the model could also be used to identify and demonstrate project risks during project planning sessions with a customer.
Another advantage that was pointed out by one interviewed manager was the fact that this model can be used for evaluating different allocation scenarios: By inserting the characteristics of different remote sites into the model and assessing the predicted experiences and risks, the decision on how to select one site from a number of different sites for a project could be supported.
4.3.3. Threats to Validity
A threat to the internal validity might be the fact that the interviewees did not understand the rules correctly while applying them for their projects. However, this threat was reduced by explaining every rule to the practitioners.
Conclusion validity is relatively low due to the small number of analyzed projects. To obtain higher significance, a larger study should follow. However, the results seem to indicate a general trend, as the degree of compliance (81%) is relatively high.
Construct validity might be threatened by the fact that the evaluation was conducted by the same person who developed the model and the interviewees might have been biased towards giving pleasant answers. Particularly, the question of whether the interviewees would like to use the model in later projects might have produced biased results.
However, the significant rate of rules not considered at project start (40.5%) supports the usefulness of the model. External validity might be threatened by the fact that all evaluation was done within one company. Therefore, it should be repeated at different organizations.
However, as most of the interviews for model development (15 out of 19) were done in companies other than ISL, the evaluation results can probably be generalized.
5. Integration of the Model into an Approach for Systematic Task Allocation
In Section 3.4, we already described how the model can be used for evaluating different task assignment alternatives as part of a systematic task allocation decision. It is the overall goal of our current research to support task allocation decisions using experiences from previous projects that are systematically stored in models. During this research, we developed two other models for decision support: an assignment suggestion model [32] [33], and an effort overhead model [34]. Describing the other two models in detail is beyond the scope of this article. However, in a different publication [35], we describe how these models as well as the risk model can be integrated into one coherent approach for systematic task allocation.
In order to demonstrate the use of this approach, Figure 6 shows a scenario of a task allocation decision in a GSD project: In this project, five tasks (i.e., the development of five sub-components) are to be assigned to sites in Cologne, Frankfurt, London, and Bangalore. While component 1 is already assigned to Frankfurt and component 5 is assigned to London, it has to be decided where components 2 to 4 shall be assigned.

Figure 6. Tasks and sites of the example scenario.
Based on the project characteristics, the assignment suggestion model (which applies the weighted rules of the risk model) suggests the assignments shown in Table 4 (see [33] for details on the suggestion algorithm), and the cost estimation model predicts effort and cost for each assignment alternative. Due to the predictions of effort and cost, the decision maker decides to follow the second assignment suggestion and therefore assigns components 2 and 3 to Frankfurt while assigning component 4 to Bangalore.
The remaining risks can now be predicted by the risk model. Table 5 gives an overview of the risks predicted for component 1 in the selected assignment. It shows that the risk with the highest relevance is an increase in communication problems with respect to the interface between component 1 and component 4. This is caused by the language differences between Frankfurt (where component 1 is assigned to) and Bangalore (where component 4 is assigned to). The same rule is relevant with respect to the interface between component 1 and component 5, but here the relevance is lower because component 5 is assigned to London and the language differences between London and Frankfurt are not as high as the language differences between Bangalore and Frankfurt.
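As a small worked illustration of this per-interface evaluation, the snippet below reuses the relevance function from Section 3.3; the ordinal values of the language differences are assumptions chosen to reproduce the relevance levels shown in Table 5:

```python
# Assumed characterization of the two interfaces of component 1
# (Frankfurt); the exact ordinal values are not stated in the text.
lang_diff = {
    ("Frankfurt", "Bangalore"): "very high",  # component 1 vs component 4
    ("Frankfurt", "London"): "high",          # component 1 vs component 5
}

# Rule: language differences ->+ communication problems. For a
# single-factor rule, the relevance equals the factor value itself.
for (s1, s2), value in lang_diff.items():
    rel = relevance("language difference", {"language difference": value})
    print(s1, "<->", s2, ": communication problems, relevance:", rel)
```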
Table 4. Suggested assignments with cost and effort predictions
<table>
<thead>
<tr>
<th>No</th>
<th>Suggested assignment</th>
<th>Effort</th>
<th>Cost</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>F B B B L</td>
<td>366</td>
<td>1,742,464</td>
</tr>
</tbody>
</table>
Table 5. Risks for component 1
<table>
<thead>
<tr>
<th>Relevance</th>
<th>Rule</th>
<th>Other involved Task</th>
</tr>
</thead>
<tbody>
<tr>
<td>Very high</td>
<td>Language differences →+ communication problems</td>
<td>Component 4</td>
</tr>
<tr>
<td>High</td>
<td>Language differences →+ communication problems</td>
<td>Component 5</td>
</tr>
<tr>
<td>High</td>
<td>Time zone differences →- motivation</td>
<td>Component 4</td>
</tr>
<tr>
<td>High</td>
<td>Time zone differences →- productivity</td>
<td>Component 4</td>
</tr>
</tbody>
</table>
6. Conclusions and Future Work
In this article, we presented a model for assessing project-specific risks and problems of distributed software development. The model was described in its basic concept, its instantiation based on a systematic qualitative analysis of 19 interviews with practitioners, and its evaluation at Indra Software Labs.
The evaluation showed that the model was able to make predictions that complied with the experiences in historic projects and that its applicability in practice was strongly supported by highly experienced managers. However, the model as presented in this article still has some deficiencies that can be the basis for future work:
1) While the model already provides a relatively clear definition of the concept of “influencing factors” and categorizes them into five groups, the risks and problems are not yet specified on a detailed basis.
2) The current set of rules can be improved.
3) It is not clear if and how the current set of rules can be reused by other organizations.
4) Time considerations are difficult to address with the model. It is not clear how the risk model can be used for analyzing and predicting the impact of work distribution and task allocation on the duration of the project or on the probability that deadlines will be met.
Acknowledgements
The authors would like to thank all participants in the interview and evaluation studies. Some of the work was done during a stay at the Fraunhofer Center for Experimental Software Engineering, Maryland and was financially supported by the Otto A. Wipprecht Foundation. The authors also thank Sonnhild Namingha for proofreading the paper.
References
[10] Herbsleb JD, Paulish DJ, Bass M. Global software development at Siemens: Experience from nine projects. *International Conference on Software Engineering (ICSE)* 2005; 524-533
[12] Casey V, Richardson I. Uncovering the reality within virtual software teams. *International workshop on global software development for the practitioner*, 2006
Appendix: Identified rules
Table 6. Identified rules (Excerpt)
<table>
<thead>
<tr>
<th>No</th>
<th>Logical rule</th>
<th>Textual description</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>Time zone difference & (cultural difference …</td>
<td></td>
</tr>
<tr>
<td>14</td>
<td>!(Process knowledge) & size →+ Communication problems</td>
<td>A bigger project needs more communication and coordination. If there is a manager without experience in managing and coordinating a project correctly, there are a lot more problems in communication.</td>
</tr>
<tr>
<td>18</td>
<td>!(Requirements stability) & !(communication infrastructure) …</td>
<td></td>
</tr>
<tr>
<td>23</td>
<td>!(Communication infrastructure) & !(personal relations) …</td>
<td></td>
</tr>
<tr>
<td>34</td>
<td>Time pressure & !(personal relations) →+ Communication problem</td>
<td>If people are under pressure, they focus more on their work and are less willing to communicate. This is aggravated by a large distance and the lack of trust. So it is even more unlikely for them to communicate with the other site.</td>
</tr>
</tbody>
</table>
[34] Lamersdorf A, Münch J, Rombach HD. Estimating the Effort Overhead in Global Software Development, *Fifth International Conference on Global Software Engineering* 2010
Pragmatic Approach to Modeling and Generating Mobile Cross-Platform Applications
Mohamed Lachgar, Khalid Lamhaddab, Abdelmounaim Abdali and Khalid Elbaamrani
1LAMAI Laboratory, FSTG, Cadi Ayyad University, Marrakesh, Morocco
2TIM Laboratory, ENSA, Cadi Ayyad University, Marrakesh, Morocco
Article history
Received: 20-09-2018
Revised: 24-11-2018
Accepted: 27-03-2019
Corresponding Author:
Mohamed Lachgar
LAMAI Laboratory, FSTG, Cadi Ayyad University, Marrakesh, Morocco
Email: lachgar.m@gmail.com
Abstract: As a result of the ubiquity of smartphones, the number of mobile applications is growing extensively. In order to build native apps that reach all devices, developers must deal with many different operating systems, SDKs, development tools, and programming languages, which has serious effects on the cost, time, and success of a mobile project. In this study, the main objective is to propose a pragmatic approach for modeling and generating native cross-platform mobile applications that respects a multi-layer architecture. The proposed approach is an MDA-based technique that combines UML formalisms and DSLs. The paper is illustrated with the modeling of a typical CRUD-based app.
Keywords: MDA, DSL, UML, Mobile Applications, Code Generator, Native Code
Introduction
The mobile application development industry has recently seen rapid growth due to the intensive use of mobile apps, the bulk of which run on the Android, iOS, and Windows Phone operating systems. However, developing applications for mobile platforms raises additional concerns, such as code efficiency, interaction with device peripherals, and the speed of reaching the market.
For a company that wishes to create a mobile application, an important issue is to be present on the various leading platforms of the market. However, what strategy should be adopted? Is it necessary to develop a specific application for each platform, and at what cost? Is it possible to develop one application and deploy it on multiple platforms? The answer to these questions is presented in (Lachgar and Abdali, 2017a), which offers a framework allowing companies to decide on the approach to be adopted for developing a multi-platform mobile application. The authors in (Lachgar and Abdali, 2017b) showed that the native approach has several advantages over other approaches. Further, as the name implies, native apps are built using platform-specific SDKs and development tools provided by the platform vendors. The advantages of native mobile apps are (Jobe, 2013):
- Complete access to the device hardware and APIs available on each platform
- Seamless integration with native operating system
- Updates are distributed through app stores
On the other hand, native applications are very expensive to implement: being limited to a particular mobile platform, they require in-depth knowledge of several SDKs and programming languages to be put in place. The Model-Driven Architecture (MDA) approach aims to provide an easy and effective practical solution to this problem by enabling the development of cross-platform applications. The MDA approach has proven successful for enterprise application development and can contribute considerably to mobile application development. It can help ensure the sustainability of know-how and increase productivity while responding to the fragmentation of platforms. The MDA approach (Paige et al., 2016) brings significant advances in the control of the development of computer applications; in particular, it enables productivity gains, increased reliability, significant improvement in sustainability, and better agility in the face of change.
The present work suggests a new approach to mobile application design: defining a platform-independent model and adopting the MDA approach to generate the different layers of a mobile application (presentation layer, application layer, business layer, and data access layer) through a set of transformations and projections.
This paper is organized as follows: The first section presents model engineering and the layered architecture, and the second section presents some related works. The adopted approach is described in the third section. The fourth section shows the applicability of the proposed approach through a case study. The fifth section discusses some limitations of the work. The final section concludes the paper and outlines perspectives for future work.
Background
Model Driven Engineering
Model-Driven Engineering (MDE) is a modern software engineering approach that proposes to elevate models to the rank of first-class concepts (Paige et al., 2016). It is a generative form of engineering, characterized by a rigorous process whereby everything is generated from a model, which shifts models from contemplative to productive artifacts.
The results gathered over the last few years have shown the advantages of MDE compared to the traditional development approach, in terms of quality and productivity (El Hamloui, 2015):
- **Quality**: An overall reduction in the number of anomalies by a factor of 1 to 4, yielding a threefold improvement during the maintenance phase; the overall cost of quality is also cut down due to reduced inspection and testing times
- **Productivity**: A productivity improvement of 2 to 8 times in terms of lines of source code
Model Driven Architecture (MDA)
The MDA approach was proposed by the Object Management Group (OMG) in 2001. It is a particular view of model-driven development (Hailpern and Tarr, 2006). The latter, unlike MDA, does not abide by the OMG standards; it is a flexible paradigm for defining development processes that considers models and transformations as the key artifacts of the process. According to (Kapos et al., 2014), it is simply the notion that it is possible to construct a model of a system in order to be able to transform it automatically or semi-automatically into a real thing. Model Driven Development (MDD) artifacts are used to specify, simulate, verify, test, and generate the final system.
Unlike MDD, MDE goes beyond development activities and encompasses other tasks based on a software engineering process (e.g., model-based evolution) (Cabot, 2015). The basic idea of MDA, using OMG standards, is that the functionalities of the system to be developed are initially defined in a Computation Independent Model (CIM) that is used to create a Platform Independent Model (PIM). The latter, supported by a Platform Description Model (PDM), allows the (semi-)automatic generation, by transformation, of one or a set of Platform Specific Models (PSM) (see Fig. 1 for more details). The roles of each of these models are:
- **CIM**: A model independent of any computer system that uses a vocabulary familiar to the project’s owner. It provides a vision of what is expected of the system without going into the details of either its structure or its implementation. The technical independence of this model preserves its value over time: it is modified only if the business knowledge or needs change
- **PIM**: A model that describes the business logic and the operation of entities and services. It does not contain information about the technologies that will be used to deploy the application
- **PDM**: A model that describes the software architecture of the execution platform. It contains the information needed to transform models to a specific platform. The Blu Age Forward (BLU AGE, 2010) and AndroMDA (Franky et al., 2016) tools define this model as a replaceable generation cartridge based on the runtime platform. This cartridge, called Blu Age Shared Plug-ins (BSPs), is available for the most commonly used frameworks, such as Struts, Spring, Hibernate, .NET, and Java
- **PSM**: A model depending on the technical platform specified by the architect. It serves as the basis for generating executable code for the target technical platform(s). There are several levels of PSM: the first one comes from the transformation of a PIM, while the others are obtained by successive transformations until code is generated in a specific language (e.g., Java, Swift, C#, etc.)
**Fig. 1**: Example of using models in forward engineering
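To illustrate the kind of model-to-text transformation implied by this chain, the following minimal sketch projects a tiny platform-independent entity model onto a Java class. The model format, type mapping, and template are our own assumptions, not the tooling discussed above:

```python
# Hypothetical PIM-to-code projection: a platform-independent entity
# description is transformed into Java source text.

JAVA_TYPES = {"string": "String", "int": "int", "date": "java.util.Date"}

def generate_java_entity(entity):
    """Generate a simple Java class from a PIM-level entity model."""
    lines = ["public class " + entity["name"] + " {"]
    for field_name, pim_type in entity["fields"]:
        lines.append("    private " + JAVA_TYPES[pim_type] + " " + field_name + ";")
    lines.append("}")
    return "\n".join(lines)

pim_entity = {"name": "Customer", "fields": [("id", "int"), ("name", "string")]}
print(generate_java_entity(pim_entity))
```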
Architecture of a Multi-Platform Mobile Application
The key principle in building a cross-platform application is to create an architecture that maximizes code sharing across platforms and allows code reuse. The principles of object-oriented programming help build a well-structured application; these principles include:
- **Encapsulation**: It ensures that classes and even architectural layers expose only a minimal Application Programming Interface (API) that performs their intended functions and hides the details of the implementation (Armstrong, 2006):
- At the class level, this means that objects behave like "black boxes" and that the consumer code does not need to know how they perform their tasks
- At the architectural level, this implies implementing a pattern such as a facade that offers a simplified API orchestrating more complex interactions on behalf of code in more abstract layers. This means that the User Interface (UI) code should only be responsible for displaying screens and accepting user input, and should never interact directly with the database. Similarly, the data access code should only read and write to the database, and never interact directly with buttons or text fields
- **Separation of responsibilities**: It ensures that each component (at both the architecture and class level) has a clear and well-defined purpose. Each component must perform only its defined tasks and expose this functionality through an API accessible to the other classes (or layers) that use it
- **Polymorphism**: Programming to an interface (or abstract class) that supports multiple implementations, means that the base code can be written and shared across platforms while interacting with platform-specific functionality (Armstrong, 2006)
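As a minimal illustration of the polymorphism principle (the class names below are hypothetical, not taken from the paper's generator), shared code can depend only on an abstraction while each platform supplies its own implementation:

```
// A minimal sketch of polymorphism-based code sharing; FileStore,
// AndroidFileStore and SyncService are illustrative names.
interface FileStore {
    void save(String name, byte[] data);
}

class AndroidFileStore implements FileStore {
    // Platform-specific implementation lives here
    public void save(String name, byte[] data) { /* Android file I/O */ }
}

class SyncService {
    private final FileStore store; // shared code depends only on the abstraction
    SyncService(FileStore store) { this.store = store; }
    void backup(String name, byte[] data) { store.save(name, data); }
}
```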
The natural result is an application modeled around abstract entities with distinct logical layers. Separating the layers makes applications easier to understand, test and maintain. It is recommended that the code of each layer be physically separated (in directories, or even separate projects for large applications) as well as logically separated (using namespaces or packages).
Typical Layers of a Mobile Application
The most common architecture pattern is the layered architecture pattern, otherwise known as the n-tier architecture pattern (Richards and Ford, 2018). In this paper and in the case study, the authors refer to the following six-layer application (Fig. 2). The different layers are described below:
- **Data Layer**: Non-volatile data persistence, likely to be a SQLite database, but can be implemented with XML or JSON files or other appropriate mechanism
- **Data Access Layer**: It provides simplified access to the data stored in the data layer. It represents a centralized location for all calls into the database and thus makes it easier to port the application to another database. It contains everything related to persistence:
- Object Relational Mapping (ORM): Contains all information and mapping techniques regarding the database system
- Data Access Objects (DAOs): Entities which model how the data is managed; generally they define all the Create, Read, Update, Delete (CRUD) actions
- **Business Logic Layer**: Defined as any application logic concerned with the retrieval, processing, transformation and management of application data, the application of business rules and policies, and the guarantee of data consistency and validity. To maximize reuse opportunities, business logic components should not contain any behavior or application logic that is specific to a use case or user story
- **Service Access Layer**: Used to access services in the cloud: from complex Web services (REST, SOAP, etc.) to simple retrieval of data and images from remote servers. It encapsulates network behavior and provides a simple API to be consumed by the application and the UI layers
- **Application Layer**: Typically platform-specific code (usually not shared across platforms) or application-specific code (usually not reusable). A good test to determine whether code belongs in the application layer rather than the user interface layer is (a) to check whether the class has actual display controls, or (b) whether it can be shared among multiple screens or devices (for example, iPhone and iPad)
- **User Interface Layer (UI)**: The user-facing layer contains screens, widgets and controllers that manage them
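A rough sketch of this layering in Java (the class names are illustrative assumptions, not code generated by the approach): each layer depends only on the layer directly below it.

```
import java.util.List;

class ProductDao {                          // Data Access Layer
    List<String> findAll() { return List.of("phone", "tablet"); } // would query SQLite in practice
}

class ProductService {                      // Business Logic Layer
    private final ProductDao dao = new ProductDao();
    List<String> listProducts() { return dao.findAll(); } // business rules go here
}

class ProductListScreen {                   // UI Layer: never touches the database
    private final ProductService service = new ProductService();
    void render() { service.listProducts().forEach(System.out::println); }
}
```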
This architecture has many advantages compared to the traditional way of building applications, among which:
- **Improved Scalability**: Due to the distributed deployment of application servers, scalability of the system is enhanced since a separate connection from each client is not required whereas connections from few application servers are sufficient (Richards and Ford, 2018)
- **Enhanced Re-usage**: A similar logic can be sustained in many clients or applications
- **Improved Data Integrity**: Data corruption through client applications can be eliminated, as data passes through the middle tier for database updates, which ensures its validity
- **Enhanced Security**: The implementation of several layers enhances data security on a service-by-service basis
- **Reduced Distribution**: The layered architecture enables to update only the application servers, not all distributed clients in case of a modification in the business logic
- **Hidden Database Structure**: The actual structure of the database often remains hidden from requester enabling any change of the database to be transparent
- **The maintenance of the data is independent of the physical storage medium**
- **Simplified Process Maintenance**: One team member can work on the data access layer while another works on the business layer or on the GUI, without disrupting the work of the others
- **Ease of managing processing from the presentation layer**
- **Optimal Teamwork** (Richards and Ford, 2018)
- **Relative straightforwardness of moving from one graphic environment to another**
Nonetheless, an application may not necessarily contain all layers. For example, the service access layer would not exist in an application that does not access network resources. A very simple application can merge the data layer and the data access layer, since operations are extremely basic.
**Common Design Models in Mobile Development**
Design patterns are a proven way to capture recurring solutions to common problems. A few key patterns are useful for understanding how to build mobile applications that remain maintainable and understandable:
- **Model, View, Controller (MVC)**: A common and often misunderstood pattern, MVC is most often used when creating user interfaces. It allows a separation between the actual screen definition (the View, i.e., the user interface), the engine that manages the interaction (the Controller) and the data that populate it (the Model). The Model is actually a completely optional part, so the core of this pattern lies in the View and the Controller (Plakalovic and Simic, 2010)
- **Business Facade**: Provides a simplified entry point for complex jobs. For example, in a project tracking application, a ProjectManager class may expose methods such as findAll(), findById(id), create(project) and so on. The ProjectManager class provides a front-end to the internal operations of saving and retrieving project objects (Jiang and Mu, 2011)
- **Singleton**: The Singleton pattern provides a means of ensuring that only a single instance of a particular object exists (Stencel and Wegrzynowicz, 2008). For example, when using SQLite in mobile applications, you only want one instance of the database; using the Singleton pattern is a simple way to ensure this (see the sketch after this list)
- **Abstract Factory**: A pattern for reusing code across applications. Shared code can be written against an abstract interface or class, and platform-specific concrete implementations are written and supplied when the code is used (Sarcar, 2016)
- **Data Access Object pattern (DAO)**: Ensures the link between the business layer and the persistence layer to centralize the mapping mechanisms between the storage system and the business objects (Castillo et al., 2013)
- **Async**: The Async pattern is used when a long task has to be executed without blocking the user interface or the current process. In its simplest form, the Async pattern simply states that long-running tasks must be executed in another thread (or a similar thread abstraction, such as a task), while the current thread continues processing, listens for the response in the background, and then updates the user interface when the data and/or status is returned (Kang et al., 2016)
- **Reactive programming**: Is a programming paradigm oriented around data flows and the propagation of change. This means that it should be possible to express static or dynamic data flows with ease in the programming languages used and that the underlying execution model will automatically propagate changes through the data flow (Salvaneschi and Mezini, 2014)
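A minimal sketch of the Singleton pattern referenced above for a database handle (DatabaseHelper is a hypothetical name; the double-checked locking shown is one common thread-safe variant):

```
final class DatabaseHelper {
    private static volatile DatabaseHelper instance;

    private DatabaseHelper() { /* open the single SQLite connection here */ }

    static DatabaseHelper getInstance() {
        if (instance == null) {                       // first check without locking
            synchronized (DatabaseHelper.class) {
                if (instance == null) {               // second check under the lock
                    instance = new DatabaseHelper();
                }
            }
        }
        return instance;
    }
}
```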
**Related Works**
Several research projects have been carried out in order to speed up the development of native multi-platform mobile applications. Some works focus on the generation of code for specific blocks of an application (sensor code, CRUD code, GUI code, BLE code, etc.); others focus on generating an application that combines all the features and components of a mobile application. In this perspective, the authors in (Veisi and Strouli, 2017) defined a general architecture for Android applications running on physical BLE devices. Then, using JetBrains MPS, they developed a modeling language that describes the components of an application working with these devices and, finally, they developed a framework that allows Android developers to generate the code of their application in a simple and efficient way. The code generated by AHL is fully functional and requires no modification; this means that developers need not learn how to implement or modify these components, because using them does not require knowledge of how they work.
The authors in (Benouda et al., 2016a; 2016b) proposed an approach based on model engineering that aims to generate the graphical user interfaces of Android applications. To do this, the authors used the class diagram to define the PIM, QVT (Query/View/Transformation) to realize the various transformations to the PSM-Android and Acceleo for code generation. This work aims to accelerate and facilitate the development of Android applications. It covers the generation of graphical interfaces, without considering access to resources, embedded sensors, more complicated graphical interfaces, event handlers, etc. An MDA approach was implemented in (Sabraoui et al., 2013) with the aim of modeling and generating the graphical interfaces of mobile platforms. This approach consists of four main steps:
- Modeling the graphical interface in UML, using an object diagram
- Transformation of the obtained diagrams into a simple XMI schema using the JDOM API
- Transformation of the new XMI model into the target platform-specific model
- Generation of the graphical interface on the basis of the MDA approach, by projection into templates implemented with Xpand
This method has the advantage of automatically generating the graphical interfaces of several mobile platforms from a UML model. Nevertheless, with this approach developers cannot design the user interface in a simple and user-friendly way, especially when the application requires multiple screens. Moreover, using the object diagram to model the graphical user interface takes a long time. The approach is limited to the generation of graphical user interfaces: it fails to consider the native functionalities offered by smartphones (e.g., GPS, camera, sensors, etc.) and does not allow the generation of applications according to the layer-separation principle.
Furthermore, the authors in (Heitkotter et al., 2013) proposed the MD2 framework, which is based on a DSL adapted to the domain of mobile applications. This tool allows developing applications by describing the application model using the DSL; a set of transformations is then carried out to generate the native source code specific to the target platform. Applications created with MD2 follow the MVC pattern. MD2 allows to:
- Define data types and access operations
- Define CRUDs for updating data
- Implement user interfaces with a variety of components
- Define input validators on the data
- Access native features such as GPS
The limitations of this solution are as follows:
- It is still a prototype
- Focuses on a single category of mobile applications: business-oriented applications
- Focuses on generating mobile applications that do not support reuse of existing source code
- Focuses on code generation for tablets
- The authors did not describe the basic meta-model
- With the DSL alone, we cannot generate a complete mobile application, nor an application that respects layered programming
The majority of the approaches presented above are used to generate data-driven mobile applications. Some allow producing applications that respect the MVC pattern; others offer mechanisms to connect to local databases. However, the generation of complete mobile applications that follow good software engineering practices, such as the separation of software layers, is not supported; neither is the design of complicated interfaces. In the present work, the authors combine the UML language with a DSL to improve the quality of the generated applications, respecting good software practices while taking into account all the functionalities of a mobile application.
**The Proposed Approach**
The proposed approach revolves around a pragmatic modeling technique that combines UML diagrams with a dedicated DSL. From the UML diagrams, in particular the class diagram, we can generate the business classes, the data access classes and the classes that define the basic operations on an SQLite database, such as the creation and deletion of tables. The dedicated language serves to model the graphical user interfaces, and consequently to generate the presentation and logic layers; using the dedicated language, we are also able to generate the service access layer.
**Generation of DAL, BOL and DL Layers**
In order to generate the Data Access and Business Logic layers, the authors mainly use UML meta-models represented as class diagrams. Indeed, using a class model annotated with stereotypes specific to the PSM, the authors are able to generate the business objects (BO) and the data access objects (DAO) for each PSM (e.g., Java, C, C++, C#, Objective-C, Swift, etc.).
Moreover, we can generate the build script that creates the SQLite database. With the class diagram, it is also possible to generate the traditional graphical interfaces for updating and querying the data, using the previously generated CRUDs. The architecture for the generation of the DAL, BOL and DL layers is presented in Fig. 3.
To realize the different transformations, the ATL language was used. This hybrid transformation language is both declarative and imperative, which makes it more expressive and gives it the ability to express any kind of transformation. As for performance, ATL in most cases runs faster than QVT (adopted in some works such as (Benouda et al., 2016a; 2016b)) for two main reasons: first, it is easier to reduce the matching with the WHERE clause in the rules; second, ATL is compiled and executed on a virtual machine. ATL makes it possible to carry out transformations between the source and target models by means of a set of correspondence or mapping rules written in this language. In ATL, we can create modules to perform model-to-model transformations. For model-to-text transformations, however, the Xtend language is used, which allows projecting the data into templates from an XMI instance of PIM Bean or PIM DataBase resulting from the model-to-model transformations carried out with ATL. A snippet of code that loads an XMI file is shown in Fig. 4.
Fig. 5 illustrates the different stages of the proposed approach for the generation of the DAL, BOL and DL layers:
(a) Modeling an Application Using a Class Diagram
(b) Transform to an instance of the PIM Bean
(c) Projection in templates for generating business classes and standard graphical interfaces from an instance of PIM Bean
(d) Transforming an instance of PIM Bean to an instance of PIM DataBase
(e) Projection in Templates for Generating Data Access Classes, Database from an instance of PIM DataBase
Generating GUI, BLL and SAL Layers
In order to generate the UI and Application layers, the authors use a meta-model based on a DSL (a detailed description is given in (Lachgar and Abdali, 2017a)). This meta-model lists all the components essential for designing a mobile application, such as graphic components (e.g., buttons, text boxes, lists, containers, menus, etc.), navigation between screens, the specification of the sensors used in the application (e.g., compass, accelerometer, orientation, light sensor, etc.) and the specification of the native functions requested in the application (e.g., camera, SMS, telephony, storage, alerts, vibration, geolocation, contacts, etc.). Moreover, the proposed meta-model also supports other key features, such as networking services.
The architecture for generating the GUI, BLL and SAL layers is shown in Fig. 6.
```
class Generator {
	// Entry point: load the XMI model produced by the ATL step
	// and trigger code generation on its contents
	def static void main(String[] args) {
		new Generator().generate("model.xmi")
	}

	// Polymorphic dispatch: the template applied depends on the
	// runtime type of the model element
	def dispatch generate(List<Bean> beans) '''
		...
	'''
}
```
Fig. 4: Generating code with Xtend from a non-text model
Fig. 5: Different steps for the generation of DAL, BOL and DL layers
To realize the different transformations in this second step, the Xtext language was used for the creation of the DSL, and the Xtend 2 language for the generation of the different layers. Code generation with Xtend 2 is faster than with Xpand (adopted in some works such as (Heitkotter et al., 2013)), because Xtend 2 templates are compiled in advance rather than interpreted as in Xpand. Xtext is the pillar of the creation of an external textual DSL; it is the Eclipse Modeling Project solution for implementing textual DSLs and their associated editors. Xtext is the chosen solution for formalizing the mathematical logic of the executable models, as well as for entering the logical expressions associated with the definition of a conditional sequence. With Xtext, the meta-model of the data structure is inferred from the syntax description of the DSL; it is therefore easier to change a language, since the implications on the data structure are immediate. Xtend 2 offers a flexible and modular specification of the generated code through the management of imports and aspects. Besides, the generation rules for each model entity support polymorphic dispatch. This is an extension of the Visitor design pattern allowing an object to visit the function suited to its type. With polymorphic dispatch, unlike the Visitor, no intrusive artifact is needed in the model code to achieve this behavior: it is the visited methods themselves that define the type of object they support. This is particularly useful in a compiler where an intermediate representation is often described by an abstract syntax tree whose nodes are specializations of a single abstract definition.
Fig. 6: Architecture for the generation of GUI, BLL and SAL layers
Fig. 7: Extract from the UML Meta-model of a class diagram
Transformation and Generation of Business Classes, Data Access Classes, Database and CRUD Interfaces
Source Meta-Model
The class diagram presents a way to model an application from a business point of view. This diagram describes the relationships between the different objects that interact to build a particular information system. Thus, the authors use the UML meta-model as the source, platform-independent model from which the different business classes (Beans) of a mobile application are derived via model-to-model transformations. Once the new model is created, model-to-text transformations are applied to generate Java business classes for Android, C# classes for Windows Phone and Swift classes for iOS. An extract of the UML meta-model used is presented in Fig. 7.
Target Meta-Model for the Generation of the Business Layer
For the business layer, the authors rely on the meta-model detailed below as target. Using the ATL language, the authors carried out the various model-to-model transformations from the UML meta-model to the suggested PIM Bean meta-model. Then, model-to-text transformations are implemented to generate the native code of the business layer and of the presentation layer (CRUD interfaces).
The target meta-model is shown in Fig. 8.
(a) The model-to-model transformation rules are presented below:
- For each UML Model instance, a PIM Package instance must be created:
  - Their names must match; the package name contains the full path information, with a dot (.) as path separator
- For each UML Class instance, a PIM Bean instance must be created:
  - Their names must match
  - The package reference must match
  - Bean modifiers must be public
- For each UML Attribute instance, a PIM Field instance must be created:
  - Their names must match
  - Their types must match
  - Modifiers must be private if the class is not a base class, and protected if it is a base class (encapsulation principle)
- For each UML Operation instance, a PIM Method instance must be created:
  - Their names must match
  - Their types must match
  - Modifiers must match
- For each UML Association instance, a PIM Association instance must be created:
  - Their names must match
  - If the association is a generalization, the Boolean value true is assigned to the extend property; if the association is an aggregation or a composition, the Boolean value true is assigned to the association property
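To make these rules concrete, here is a plain-Java, in-memory illustration of the Class-to-Bean mapping (the actual transformations are written in ATL; all type names below are assumptions made for the example):

```
import java.util.ArrayList;
import java.util.List;

class UmlAttribute { String name, type; }
class UmlClass { String name, packagePath; boolean isBase;
                 List<UmlAttribute> attributes = new ArrayList<>(); }

class PimField { String name, type, modifier; }
class PimBean  { String name, packageRef;
                 List<PimField> fields = new ArrayList<>(); }

class BeanMapper {
    PimBean toBean(UmlClass c) {
        PimBean b = new PimBean();
        b.name = c.name;                 // names must match
        b.packageRef = c.packagePath;    // package reference must match
        for (UmlAttribute a : c.attributes) {
            PimField f = new PimField();
            f.name = a.name;             // names must match
            f.type = a.type;             // types must match
            f.modifier = c.isBase ? "protected" : "private"; // encapsulation rule
            b.fields.add(f);
        }
        return b;
    }
}
```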
(b) The model-to-text transformation rules are presented below:

- For Android:
  - For each PIM PackageBean instance, a folder tree will be generated in the main package; the separation between folders is given by the "." in the package name
  - For each PIM Bean instance, a Java class must be created:
    - Their names must match
    - Package names must match
    - Modifiers must match
    - Fields must match
    - For each field, two methods must be generated (setter and getter) with a public modifier
    - The class must contain two constructors, one initializing all the fields and one without parameters
    - The methods must match and be generated in the class
  - For each PIM Association:
    - If the value of extend = true, the source class inherits (extends) from the target class; thus, the constructor of the derived class must call the base class constructor
    - If the value of association = true, the target class is included in the source class (declare a target class object in the source class and apply the same rules as for fields)
- For Windows Phone:
  - For each PIM PackageBean instance, a folder tree will be generated in the main package; the separation between folders is given by the "." in the package name
  - For each PIM Bean instance, a C# class must be created:
    - Their names must match
    - The names of packages and namespaces must match
    - Modifiers must match
    - Fields must match
    - For each field, the getters and setters must be generated
    - The class must contain two constructors, one initializing all the fields and one without parameters
    - The methods must match and be generated in the class
  - For each PIM Association:
    - If the value of extend = true, the source class inherits (:) from the target class; thus, the constructor of the derived class must call the base class constructor
    - If the value of association = true, the target class is included in the source class (declare a target class object in the source class and apply the same rules as for fields)
- For iOS:
  - For each PIM PackageBean instance, a Swift module is associated
  - For each PIM Bean instance, a Swift class must be created:
    - Their names must match
    - The module name takes the last word after the "." in the PIM PackageBean name
    - Modifiers must match
    - Fields must match
    - For each field, the getters and setters must be generated
    - The class must contain the init() method without parameters and the init(parameters) method to initialize the various fields
    - The methods must match and be generated in the class
  - For each PIM Association:
    - If the value of extend = true, the source class inherits (:) from the target class; thus, the init() method of the derived class must call the init() method of the base class (super)
    - If the value of association = true, the target class is included in the source class (declare a target class object in the source class and apply the same rules as for fields)
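As an illustration, applying the Android rules above to a hypothetical bean Product that extends a bean Item would yield a class of the following shape (a sketch, not actual generator output):

```
class Item { }

public class Product extends Item {
    private String name;    // private: Product is not a base class
    private double price;

    public Product() { super(); }            // constructor without parameters
    public Product(String name, double price) {
        super();                             // derived constructor calls the base one
        this.name = name;
        this.price = price;
    }

    public String getName() { return name; }               // getter/setter pair
    public void setName(String name) { this.name = name; } // generated per field
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}
```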
**Target Meta-Models for the Generation of the Data Layer and the Data Access Layer**
For database generation, the authors apply model-to-model transformations from the PIM Bean meta-model to the relational PIM DataBase meta-model presented in Fig. 9.
The various transformation rules applied are described below:

- For each PIM Bean instance, a table must be created:
  - Their names must match
  - The primary key of each table must be an auto-increment INTEGER named id"TABLE NAME"
  - Primitive-type attributes are transformed into columns; their names must match and their types will be INTEGER, TEXT or REAL (each primitive type must be converted into one of these three types)
  - Object-type attributes are transformed into foreign keys; their type is that of the referenced primary key, converted to one of the types INTEGER, TEXT or REAL

In the case of an inheritance association, the primary key of the child table must not be auto-increment; it also plays the role of a foreign key referencing the corresponding parent table.
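As a worked example of these rules (a sketch assuming a hypothetical Product(name: String, price: double) bean), the generated DDL would look like:

```
class ProductSchema {
    static final String CREATE_TABLE_PRODUCT =
        "CREATE TABLE PRODUCT ("
      + " idPRODUCT INTEGER PRIMARY KEY AUTOINCREMENT," // auto-increment id
      + " name TEXT,"                                   // String -> TEXT
      + " price REAL)";                                 // double -> REAL
}
```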
**(a) Generation of Classes and Interfaces**
For the generation of the data access layer and the data layer, the authors rely on the previously obtained PIM DataBase. The proposed target meta-model is shown in Fig. 10. The SQLiteHelper class is used to define the database name, the database version and the database creation queries, as well as the requests for deleting the tables of the database in case of an update (see Fig. 10).


The projection rules are defined as follows:
- In the SQLiteHelper class:
- The following constants are defined:
- The name of the database is identical to the name of the project
- The version of the database is identical to the version of the application
- A request for the creation of each table named: CREATE_TABLE_«table.name»
- In the createDatabase() method, we execute the creation requests previously defined
- In the upgradeDatabase() method, we delete the database if the new version is greater than the old one, then call the createDatabase() method to recreate the database
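A minimal sketch of the SQLiteHelper shape these rules describe (on Android, the createDatabase/upgradeDatabase pair corresponds to the onCreate/onUpgrade callbacks of SQLiteOpenHelper; the Database interface below is a stand-in for the platform API):

```
interface Database { void execSQL(String sql); }

class SQLiteHelper {
    static final String DATABASE_NAME = "ProductManagement"; // same as the project name
    static final int DATABASE_VERSION = 1;                    // same as the app version
    static final String CREATE_TABLE_PRODUCT =
        "CREATE TABLE PRODUCT (idPRODUCT INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, price REAL)";

    void createDatabase(Database db) {
        db.execSQL(CREATE_TABLE_PRODUCT);   // execute every CREATE_TABLE_* request
    }

    void upgradeDatabase(Database db, int oldVersion, int newVersion) {
        if (newVersion > oldVersion) {
            db.execSQL("DROP TABLE IF EXISTS PRODUCT");
            createDatabase(db);             // recreate the schema
        }
    }
}
```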
For the generation of the DAO classes, the authors propose a target meta-model based on the Data Access Object pattern, presented in Fig. 11.
Description:
- The generic interface (IDao) defines the standard operations to be performed on a model object
- The concrete class (DaoImpl) that implements the IDao interface is responsible for obtaining data from a data source, which can be a database, an XML file or any other storage mechanism
- For each PIM Bean instance, a DAO class will be created
- The name of the DAO class is generated as follows: «Bean.name»Service
- This class contains the constant declarations:
  - The name of the associated table
  - The fields of the table
  - An array that stores the fields of the table
- The method definitions (CRUD) are generated using specific templates for each platform
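A minimal sketch of the generic DAO contract described above (the exact method set of the generated code may differ):

```
import java.util.List;

interface IDao<T, ID> {
    void create(T entity);
    T findById(ID id);
    List<T> findAll();
    void update(T entity);
    void delete(ID id);
}
```

A concrete class per bean (for instance, a generated ProductService) implements this interface against SQLite or any other storage mechanism.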
**Exchange of Data between the SQL Database Server and the Mobile Application**

The general principle is as follows:

- The application builds HTTP requests (of type GET or POST): URL = http://server/script?parameters, where the parameters are selection conditions, e.g., id = 1
- The client (mobile) application sends this request to the SQL server and waits for the response
- The server-side script executes the query, then returns the result encoded in JSON to the application
- The mobile application decodes the result and displays it
- Each CRUD method is associated with a specific server-side script
- The server-side application is generated from the PIM Bean model; this application respects a layered architecture (in PHP 5) and the data exchange is done with JSON
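A hedged sketch of the client side of this exchange (the server URL and script name are illustrative assumptions):

```
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class RestClient {
    String findById(int id) throws Exception {
        URL url = new URL("http://server/findById.php?id=" + id);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");               // build the HTTP GET request
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) json.append(line);
        }
        return json.toString();                     // JSON result to decode and display
    }
}
```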
**Generation of the CRUD GUIs Associated with the Different Beans**

For each PIM Bean instance, three graphical user interfaces are generated:

- Add-interface: This interface uses the create() method of the data access layer. After the addition, the user is redirected to a second interface displaying the list of data
- Listing-interface: This interface uses the findAll() method of the data access layer. After selecting an object from the list, two actions are presented to the user: either remove the selected object by chaining the findById() and delete() methods, or move forward to the update view
- Update-interface: This interface makes it possible to modify the object selected from the list by calling the findById() and update() methods; after the change, the user is redirected to the data list

The navigation between the different interfaces in Android is generated in a main menu file redirecting to the different list screens associated with each bean; in the case of iOS, by defining additional connections (called outlets and actions) between the views in the storyboard and the view controller source files.
Some correspondences between the different target languages are given below:

- **Types and variable declarations**: Table 1 gives some matches between the target languages in terms of types and variable declarations
- **Protocols**: Table 2 illustrates the correspondence between the target languages in terms of protocols
- **Classes and genericity**: Table 3 presents a few matches between the target languages in terms of object-oriented programming concepts (e.g., classes, constructors, inheritance, etc.)
- **Conditions, loops and functions**: Table 4 gives some matches between the target languages in terms of basic programming concepts (e.g., conditions, loops, etc.)
Table 1: Syntax of variable declarations and types according to the three mobile platform languages

| | Swift | C# | Java |
|---|---|---|---|
| Boolean | Bool | bool | boolean |
| Constant | let | const | final |
| Declaration | var | var | (no equivalent) |
| Float | Float, Double | float, double | float, double |
| Integer | Int | int | int |
| String | String (value) | String (reference) | String (reference) |
Table 2: Syntax of protocols according to the three mobile platform languages

| | Swift | C# | Java |
|---|---|---|---|
| Protocol | protocol | interface | interface |
| Implements | : | : | implements |
Table 3: Syntax of classes and genericity according to the three mobile platform languages

| | Swift | C# | Java |
|---|---|---|---|
| Constructor | init | constructor | constructor |
| Class | class | class | class |
| Inheritance | : | : | extends |
| Access | private, public | private, public, protected, internal | private, public, protected, default |
| Self | self | this | this |
| Object | AnyObject, Any | Object | Object |
| Parent | super | base | super |
| Generic types | generic types | generic types | generic types |
| Generic functions | generic functions | generic functions | generic functions |
Table 4: Syntax of conditions, loops and functions according to the three mobile platform languages (the `// do something` bodies of the original layout are abbreviated as `...`)

| | Swift | C# | Java |
|---|---|---|---|
| Iterating over an array | `for item in arr { ... }` | `foreach (var item in arr) { ... }` | `for (Type item : arr) { ... }` |
| Is array empty? | `if arr.isEmpty { ... }` | `if (arr.Length == 0) { ... }` | `if (arr.length == 0) { ... }` |
| For loops | `for var i = 0; i <= 5; ++i { ... }` | `for (var i = 1; i <= 5; i++) { ... }` | `for (int i = 1; i <= 5; i++) { ... }` |
| Conditional statements | `if i > 6 { ... } else if i > 3 && i <= 6 { ... } else { ... }` | `if (i > 6) { ... } else if (i > 3 && i <= 6) { ... } else { ... }` | `if (i > 6) { ... } else if (i > 3 && i <= 6) { ... } else { ... }` |
| Switch statement | `switch word { case "A": ... case "B": ... default: ... }` | `switch (word) { case "A": ... break; case "B": ... break; default: ... }` | `switch (word) { case "A": ... break; case "B": ... break; default: ... }` |
| Functions | `func sayHello(name: String) -> String { ... }` | `string sayHello(string name) { ... }` | `String sayHello(String name) { ... }` |
Transformation and Generation of Custom Graphical Interfaces, Processing Classes and Service Access Classes
Source Meta-Model
The meta-model published in (Lachgar and Abdali, 2017b) allows creating basic models enabling the generation of:
- Graphical user interfaces
- Configuration files containing:
- Information about the project (domain, icon, version, author, etc.)
- The declaration of activities
- The specification of permissions
- The specification of embedded sensors
- etc.
- Navigation between different screens
- Navigation menus
- Classes of data processing with events on graphic components
- Access to different native APIs
- Access to embedded sensors
In this part, the authors extend their meta-model by adding the possibility of modeling basic screens (e.g., LoginScreen, MapScreen, MediaScreen, etc.), generalizing the styles applied to the screens and offering a mechanism to model access to previously defined web services. The added classes are marked in green. The source meta-model is shown in Fig. 12.
Case Study: Product Management
In order to validate their approach, the authors developed a case study focusing on the business part of the mobile application. The goal is to obtain a complete Android prototype of a product management app. The following class diagram describes the business classes of this application (see Fig. 13).
The structure of the generated app under Android Studio is illustrated in Fig. 14.
Fig. 12: Extension of DSL Mobile Meta-model
(The figure depicts the meta-model elements: the Application, Screen, Layout, Style, DataBase and LoginScreen classes; widgets such as Button, Input, CheckBox, ListBox, Spinner, RadioButton, Menu and Submenu; the Sensor, Resource, Event and InputType enumerations; and the added RestController class with url, methodName, action and nameSpace attributes.)
The navigation diagram concisely describes the navigation between all the application's screens (see Fig. 15). The generated UI specific to the Android platform is presented in Fig. 16: a navigation menu is generated, which allows switching between activities in a fluid way.
Fig. 15: Diagram of navigation between the screens of the “Product Management” application
Fig. 16: Some screens of the application “Product management”
Table 5: Comparison between the proposed approach and the traditional approach

| | According to the proposed approach | According to the traditional approach |
|---|---|---|
| Number of files in the project | 23 files | 23 files |
| Duration | 10 min | 30 min |
| Number of files edited by the developer | 0 files | 23 files |
In this case study, the CRUD methods and interfaces of a simple product management application were generated; the manipulated data is stored in an embedded SQLite database. Table 5 compares the proposed approach with the traditional approach. The results presented were collected during the end-of-training examination on mobile web programming at the Institute Specialized in Information Technologies and Offshoring of Marrakesh, Morocco.
Limitations
The suggested code generator still suffers from some limitations and shortcomings. Namely, it only takes into account the generation of mobile business applications, of CRUDs and of the simple associated graphical interfaces, without covering applications with existing code or with more complicated graphical interfaces (e.g., games). It can, however, be improved by integrating other UML diagrams, such as the sequence diagram and the activity diagram, as source models, and by defining further transformation rules in order to produce full-fledged mobile applications.
Conclusion and Future Work
In this paper, an approach for the generation of multi-platform mobile applications respecting a multi-layer architecture is presented. To this end, a combination of UML modeling with the dedicated Mobile DSL language is considered. This new approach allows the generation of business classes, data access classes, service access classes, configuration files, web services for standard CRUD functions, etc. A case study was carried out to validate this approach, generating the CRUD functionality of a simple Android application. As a perspective, we look forward to extending the code generator to produce applications for other mobile platforms (e.g., iOS, Windows Phone, etc.). A future direction of this work is to extend the proposed approach to better model the dynamic and business view of a mobile application; in other words, how to model business logic in an efficient way, especially when the business rules involve several business entities and require several calls to complex services. Here, the Object Constraint Language (OCL) can be used, which provides constraint and object query expressions on any meta-model.
Future works will also focus on other aspects such as:
- **Implementing a Flexible Data Model**: Relational databases are still a good choice if an app requires strong data consistency; but when these requirements can be relaxed, NoSQL databases such as Couchbase, Firebase or Realm offer much greater flexibility
- **Data Sync**: It is important to have the ability to control how the system syncs. This includes the replication strategy, conditional replication and replication filtering
- **Securing Data at Rest and in Motion**: Authentication should be flexible and allow the use of standard, public and custom authentication providers
Acknowledgment
We thank the reviewers for their careful reading of the paper, their insightful comments and suggestions that greatly improved the manuscript.
Author’s Contributions
Mohamed Lachgar and Khalid Lamhaddab: Contributed to the writing and formatting of the manuscript and to the analysis, development and testing of the application.
Abdelmounaim Abdali and Khalid Elbaamrani: Advised the research project, designed the research plan and contributed to the writing of the paper.
Ethics
This article is original and contains unpublished material. The authors confirm that there are no conflicts of interest involved.
References
Hybrid controller synthesis for the IoT
Arthur Gatouillat (Univ Lyon, INSA Lyon, LIRIS, UMR5205) arthur.gatouillat@insa-lyon.fr
Youakim Badr (Univ Lyon, INSA Lyon, LIRIS, UMR5205) youakim.badr@insa-lyon.fr
Bertrand Massot (Univ Lyon, INSA Lyon, INL, UMR5270) bertrand.massot@insa-lyon.fr
ABSTRACT
The Internet-of-Things designates the interconnection of a variety of communication-enabled physical objects, and IoT-based systems and devices must operate with a deterministic behavior and respect user-defined system goals in any situation. We thus define hybrid controller synthesis for decentralized and critical IoT-based systems, relying on a set of rules to handle situations with asynchronous and synchronous event processing. This framework defines a declarative rule-driven governance mechanism of locally synchronous sub-systems, enabling the hybrid control of IoT systems with formal guarantees of the satisfaction of system-wide QoS requirements. In order to prove the practicality of our framework, we then applied it to a critical medical Internet-of-Things use case, demonstrating its usability for critical IoT applications.
CCS Concepts
• Computer systems organization→Dependable and fault-tolerant systems and networks
• Computer systems organization→Embedded and cyber-physical systems.
Keywords
Adaptive IoT; Hybrid controller synthesis; Rule-based control
1. INTRODUCTION
The Internet of Things (IoT) paradigm designates the interconnection of a variety of communication-enabled physical objects (e.g., sensors, actuators, robots, wearable devices, etc.) integrated into wide-scale systems. In many IoT-based systems for critical applications (e.g., healthcare, traffic control, building automation, etc.), connected objects have very limited hardware and their usage requires continuous control in a constantly evolving physical world. More particularly, IoT-based systems and devices must operate with a deterministic behavior and respect user-defined system goals in almost any situation (e.g., device failure, loss of data packets, low power consumption, etc.).
In response to such requirements, self-adaptation software frameworks were developed [10, 17, 18], notably self-adaptive software systems (SAS), which designate the study of the adaptation of centralized or distributed applications in response to changes in digital environments. These changes are mainly due to human interventions, and systems must maintain an appropriate quality of service and a safe behavior. Such systems are typically based on closed feedback loops that adjust their behavior to either internal changes (such as changes in software architectures and available services) or external changes (such as changes in user loads and contextual information). In order to enable self-diagnostic capabilities for adaptive systems, monitors are implemented to sense internal and external contextual information that can be used to trigger self-adaptation strategies. These strategies aim at guaranteeing the expected functional and non-functional requirements. From the IoT perspective, self-adaptation is a salient property of connected devices: it allows smart objects to be configured and adapted to extreme conditions while preserving the target system requirements in terms of automation, security and safety goals. Self-adaptation mechanisms driven by adaptation goals dynamically modify the behavior of smart objects. Discrete controllers for IoT-based applications have also been proposed to ensure that these applications evolve following predefined state-transition automata [20, 21]. Nevertheless, they rely on the use of synchronous programming languages and event processing, and assume that the controlled systems satisfy the synchrony hypothesis, by which all required events should be simultaneously available to trigger a transition from one state to another. As a result, the computing time needed to react to events should be negligible in comparison with the rate of events generated by the system itself [9]; otherwise the reactive system will fail to respond to changes in a timely manner. While this hypothesis holds for small-size IoT-based systems, it becomes invalid for complex systems (i.e., systems of systems) because of the large number of generated events and the difficulty of synchronizing them. This mandates the investigation of hybrid discrete controllers and adaptation strategies that handle synchronous and asynchronous event processing and ensure a secure behavior based on state transitions, in order to control large-scale IoT-based systems.
Yet another important issue in self-adaptive systems is the specification of the monitoring logic in adaptation strategies. Monitoring and adaptation logic can be expressed with either imperative or declarative programming approaches. The imperative approach, implemented in languages such as Java, C, Perl and many specialized languages (e.g., LNT [1] and BZR [6]), defines the control of the sequence flow of instructions to be executed. However, a purely manual imperative approach to IoT-based control is not appropriate. Indeed, IoT-based systems are highly distributed and heterogeneous, and might comprise hundreds or thousands of devices. The description of such systems in purely imperative languages would lead to a massive and difficult-to-maintain codebase, hence the need to investigate a declarative and decentralized approach to specify the monitoring logic of self-adaptation strategies.
The main advantage of a declarative approach lies in its ability not to directly specify the sequence flow of instructions to be executed by the system in response to changes. SQL queries, functional languages, business rules and production rules are a few examples of declarative programming. In particular, rule-based control has recently gained interest for home-automation and IoT environments, with a special focus on monitoring and adaptation strategies expressed as IF-Condition-Then-Action rules [2, 3, 19]. Conditions are logic expressions over events generated by the IoT systems and/or contextual information, whereas actions are operations that must be triggered in order to self-adapt the IoT system. Rule-driven controllers in large-scale and critical IoT-based systems lack formal verification mechanisms that can avoid conflicts, deadlocks and inconsistent situations. As a matter of fact, translating the declarative logic into an imperative one is necessary to combine the verification capabilities of imperative approaches with the expressiveness and modularity of declarative approaches.
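As a minimal illustration (an assumption-laden sketch, not the rule grammar defined later in the paper), such a rule reduces to a condition over a context and an action:

```
import java.util.Map;
import java.util.function.Predicate;

class Rule {
    private final Predicate<Map<String, Object>> condition;
    private final Runnable action;

    Rule(Predicate<Map<String, Object>> condition, Runnable action) {
        this.condition = condition;
        this.action = action;
    }

    void fire(Map<String, Object> context) {
        if (condition.test(context)) action.run(); // IF condition holds THEN trigger action
    }
}
```

For instance, a rule could raise an alert when a monitored battery level drops below a threshold: `new Rule(ctx -> (int) ctx.get("battery") < 10, () -> System.out.println("alert")).fire(context);`.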
In this paper, we propose a hybrid controller synthesis for decentralized and critical IoT-based systems. Our hybrid controller relies on a set of rules to handle situations with asynchronous and synchronous event processing. In the synchronous scheme, all events must be simultaneously available, in a near real-time manner, in order to check whether a rule's condition holds and then trigger its corresponding action. In the asynchronous scheme, events are queued upon their arrival; once all events are available, the controller checks whether any rule's condition holds. Our hybrid controller synthesis emphasizes non-functional properties to express its self-adaptation behaviors: monitors on the quality of service (QoS) of non-functional properties generate streams of events. In order to validate our controller, we develop an e-health continuous monitoring use case, where IoT-based systems are used to remotely monitor at-risk patients. We also implement a declarative rule-driven governance mechanism of locally synchronous sub-systems, enabling the hybrid control of smart homes and smart objects so as to guarantee the satisfaction of QoS requirements specified in service level agreements (SLA). SLAs specify end-user requirements in terms of functional and non-functional properties, such as safety, health awareness and resource awareness. By ensuring separation of concerns between adaptation objectives, context monitoring and adaptation strategies, our system is able to handle changing user requirements and to redeploy the appropriate controllers if necessary.
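A minimal sketch of the asynchronous scheme just described, under assumed types (events are queued upon arrival and rules are evaluated only once all expected events are available):

```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class AsyncScheme {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void onEvent(String event) { queue.add(event); } // queue upon arrival

    List<String> collect(int expectedEvents) throws InterruptedException {
        List<String> batch = new ArrayList<>();
        while (batch.size() < expectedEvents) {
            batch.add(queue.take());                 // block until all events arrived
        }
        return batch;                                // now check the rules' conditions
    }
}
```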
The remainder of the paper is organized as follows: Section 2 describes related works on self-adaptation in software systems, classical control and home automation. Section 3 briefly introduces the e-health use case of our self-adaptation system, focusing on the safety property in the healthcare context. Section 4 introduces the notion of layered SLAs, the global QoS ontology and our rule grammar. The implementation of our hybrid controller and its experiments are described in Section 5. Finally, research perspectives and conclusions about our work are given in Section 6.
2. RELATED WORKS
The work described in this paper sits at the intersection of three fields of study: self-adaptation in software systems, classical control and home automation. Software adaptation contributions study the integration of techniques enabling better software reaction to a changing digital environment. Most contributions are variations of the monitor-analyzer-planner-executor over shared knowledge (MAPE-K) feedback loop detailed in [11]. In this feedback loop, monitors (i.e., sensors) are used to trigger system adaptations deployed by executors (i.e., actuators), using analyzers and planners provided with shared knowledge about the system. Because of its genericity, the MAPE-K feedback loop can be adapted to deal with various self-adaptation concerns. For instance, the DYNAMICO adaptation framework [15-17] introduces a self-adaptation framework based on three distinct but communicating MAPE-K loops, each loop controlling a specific aspect of software adaptation (i.e., adaptation of the monitoring infrastructure, adaptation of the control objectives and, finally, system adaptation). Formal adaptation frameworks based on MAPE-K loops have also been proposed in [10, 18], where adaptation strategies are modeled as plan automata. However, for both these contributions, adaptation strategies must be specified manually by the end user; such an approach lacks expressivity and is thus difficult to apply to wide-scale systems, where global adaptation strategies can be very complex. Moreover, typical DYNAMICO implementations (i.e., the SMARTERCONTEXT monitoring infrastructure with the QoS-CARE/FRASCATI middleware [15]) are not relevant to distributed smart objects with limited resources.
Automated controller synthesis has been studied in the control community, more particularly in the field of discrete controller synthesis (DCS). In this approach, controllers are synthesized automatically from a labeled transition system description of the functional elements of the system to be controlled and a set of control objectives (also called a control contract), usually specified as rules [4–6, 20, 21]. The DCS community relies on synchronous languages (e.g., SIGNAL [13] or Heptagon/BZR [6, 7]) to specify target systems and control objectives. Synchronous languages enable the specification of the components of the system as concurrent labeled transition systems. Labeled transition systems model functional and non-functional behavior using two sets, one representing the states of the system and the other representing the transitions between the states. Transitions are associated with variables over functional or non-functional properties, which are categorized as either controllable or non-controllable in the discrete controller synthesis community. Transitions associated with controllable variables can be triggered externally by the controller in order to satisfy the control objectives, while transitions associated with non-controllable variables can only be triggered internally and cannot be forced by the controller. Such techniques have been successfully used to achieve functional control of smart houses in [20]. However, that study was limited to a few sensors and actuators, and the scalability issue was not explored.
Globally asynchronous locally synchronous systems are a category of systems which exhibit a globally asynchronous behavior while local subsystems adopt a synchronous behavior [14]. Considering that the IoT is still mainly built around networks of gateways controlling smaller networks of devices, this model of computation is a good abstraction for such systems. Indeed, because the number of events in a gateway-controlled sub-network is limited by the small number of devices connected to a single gateway, the synchrony hypothesis is verified. However, when a global view of the system is adopted, where numerous gateways are interconnected and communicate, the high number of generated events mandates an asynchronous approach. While this model of computation is typically used to describe very low-level systems [14], SystemJ, a higher-level system specification language, was developed [12]. This language was used along with data compression to specify IoT-based systems [8]. However, it does not offer automated controller generation, which is a key aspect of controller design for the IoT. Indeed, the changing nature of IoT systems, where sensors and actuators can be added to or removed from the network at any moment, mandates the presence of automation tools. This dynamic nature of IoT systems also calls for great maintainability, which is penalized by using centralized languages such as SystemJ.
Rule-based control strategies have been widely studied by the home automation community. This field of study focuses on improving quality of life by instrumenting houses with a wide variety of sensors, actuators and gateways, in order to give occupants better monitoring of and control over their environment. The ultimate goals of this community are broad, but they can be summarized as enabling ambient intelligence to improve the home lifecycle, and performing self-adaptation to address a variety of concerns such as energy efficiency, safety, security, comfort or remote patient monitoring [3]. More particularly, rule-based monitoring infrastructures have been used to enable remote monitoring and assistance of elderly adults [19] or to provide assisted decision-making in medical situations [2]. Unfortunately, such solutions typically lack any formal analysis or guarantees of non-functional properties, and potential device failures are not considered, which limits their use for critical applications.
Our contribution lies at the intersection of the works described in this section. By adopting a hybrid approach that uses asynchronous rules to drive the discrete controller synthesis of synchronous subsystems, and by adapting software adaptation tools that enable the management of a changing monitoring infrastructure and of changing control objectives, our approach is a comprehensive answer to the challenges of controlling wide IoT-based systems.
3. MOTIVATION CASE STUDY
As a motivation case study, we consider the remote monitoring of a set of patients at risk of cardiac malfunction. To achieve this goal, patients are equipped with a variety of body-wearable biomedical sensors that continuously monitor a wide range of biomedical signals (e.g., cardiac and respiratory activity, physical activity, electrodermal activity). Such physiological sensors can be used to detect suspicious health events that can trigger a medical response if deemed necessary. These body-wearable sensors are battery operated and feature limited processing and storage capabilities because of the energy consumption constraints associated with battery operation. Additionally, the living environment is also continuously monitored, using both battery operated and continuously powered sensors. As a result, the overall system is built around several instrumented houses occupied by several instrumented patients. Consequently, our adaptation framework must be scalable and modular, and adding a patient and a house to our framework must be a transparent operation.
As in most IoT-based systems, the devices used to monitor patients present strong constraints in terms of resources and communication capabilities: the computing abilities of monitoring devices are very limited (CPU frequency up to a few hundred megahertz), as well as storage (up to a few megabytes) and volatile memory (up to a few hundred kilobytes). Strong resource constraints, especially in the case of battery operated devices, have implications on the communication protocols used by these wireless objects. Indeed, in order to maintain a good battery life, the wireless communication protocols used in such objects must be lightweight, both in terms of physical characteristics and software requirements, in order to avoid excessive communication overhead.
Considering the adaptation requirements of this medical IoT-based system, this case study is of particular interest. Indeed, the adaptation goal is to guarantee robust and continuous monitoring of the patients, and it is achieved by considering the qualitative safety QoS property. To satisfy this goal, the adaptation strategy considers three quality of service factors: the resource-awareness factor, in which adaptive behavior is triggered by monitoring the devices' resources; the resilience factor, which is verified through the substitution of failed objects with sub-optimal but functional alternatives; and the healthcare-awareness factor (i.e., the definition of patient-specific monitoring thresholds used to trigger medical or technical intervention).
These adaptation goals mandate the implementation of safety-enabled smart homes for each patient. In each of these smart homes, a set of resource-aware sensors is deployed and used to satisfy self-adaptation requirements in terms of resource consumption, resilience and external assistance. The adaptation strategy is thus based on modifying the behavior of smart sensors by remotely changing their configuration parameters according to a set of control objectives, specified as a set of rules, or on triggering an external medical response if monitored health parameters exceed specified thresholds. In the following sections, we limit ourselves to a few patients, equipped with identical biomedical and environmental sensors:
- A battery-operated and multi-function heart sensor, including heart rate (HR), heart rate variability (HRV) and respiratory (RR) measurements. The sensor exposes streaming services to acquire these measurements. It also monitors its battery level and can determine whether it is unattached. The sensor’s low-battery failsoft mode can be triggered internally or remotely to extend the battery life. In the low-battery failsoft mode, the respiration measurements are stopped, as well as the computation of the HRV parameters. In this mode, the sensor is not able to determine its attachment status, and the HR measurements are not streamed in real-time but are instead sent every five minutes as an average value.
- A battery operated electrodermal activity (EDA) sensor, which exposes a single streaming measurement service. Similarly to the cardiac and respiratory activity sensor, it is equipped with self-battery monitoring capability and a failsoft mode that can be internally or externally triggered.
- Line-powered ambient sensors, such as a position sensor (PO) and an occupancy sensor (CO). The position sensor streams the coordinates of the monitored patient within the living space, whereas the occupancy sensor detects the presence of the patient in their living environment. These sensors can be remotely activated and turned off. Since they are line powered, they do not require battery self-monitoring.
These ambient and critical medical sensors are embedded in the houses and are worn by the monitored patients. Fixed and mobile gateways, such as fixed Raspberry Pis or mobile smartphones, are wirelessly connected to the sensors using low-energy protocols (i.e., Bluetooth). They are also connected to a service-oriented analytical and medical framework through an Internet protocol (i.e., an HTTP RESTful API). The self-adaptation framework is implemented in the service-oriented analytical framework and in the gateways. The global control architecture is described in the next section.
4. HYBRID CONTROLLER SYNTHESIS
4.1 Global Framework Description
In order to enable self-adaptation of decentralized and critical IoT-based systems, we introduce a hybrid self-adaptation framework, as illustrated in Figure 1. The framework seeks to enable a declarative rule-driven governance mechanism not only with respect to changes in the ambient environment, but also to changes in control objectives or in the monitoring infrastructure. The framework extends the DYNAMICO reference architecture [17] to the realm of the IoT, taking into consideration the sensors' resource awareness and the decentralized nature of IoT-based systems. Indeed, DYNAMICO aims at designing and implementing self-adaptive software, where control objectives, adaptation strategies and the monitoring infrastructure are considered as three interacting but distinct feedback control loops. By ensuring separation of concerns between adaptation objectives, context monitoring and adaptation strategies, the DYNAMICO architecture and its MAPE-K control loops are able to handle changes in user requirements and to adjust the system accordingly.
The hybrid self-adaptation framework includes several components, namely the asynchronous rule engine, asynchronous controllers, synchronous monitors, and synchronous subsystems, each of which comprises battery powered physical devices, line powered physical devices and gateways. These components interact through three closed loops. The higher-level loop is the control objectives feedback loop, which dictates the reaction of the system to changing control objectives (i.e., in our case, changing control objectives in the SLA). The lower-level loop is the monitoring feedback loop, which provides the IoT-based system with adaptation capabilities with respect to a changing monitoring infrastructure. This feedback loop also infers the context variables to be measured from the contracted QoS requirements, as specified in the service level objectives of the SLAs, and adapts or redeploys the relevant monitors with respect to updated QoS obligations. The last feedback control loop describes target system regulation strategies to preserve the contracted QoS.
In the IoT context, the monitoring feedback loop and the adaptation feedback loop are implemented in gateways, measuring and controlling a set of connected sensors and actuators. The control objectives adaptation feedback loop, because of its higher-level nature, is implemented in centralized servers, controlling distributed synchronous controllers.
The second interaction (ii) describes the communication between the monitoring feedback loop and the control objectives feedback loop. In an asynchronous hybrid context, this interaction is triggered when the monitoring feedback loop detects the necessity of a change in the control objectives. For instance, if a battery-operated sensor becomes unresponsive because of a depleted battery, a new control strategy should be applied to infer the health status of the monitored patient from environmental sensors.
The third interaction (iii) holds between the monitoring feedback loop and the adaptation feedback loop, and is used when abnormal monitoring events occur without mandating changes in the control objectives. This interaction typically supports a predict-and-adapt scheme, where preemptive adaptation actions are taken in order to prevent critical situations from requiring adaptation later. For example, if a battery level monitor becomes unresponsive, a control strategy should be applied to infer the health status of the monitored patient from environmental sensors.
The last interaction (iv) takes place between the adaptation feedback loop and the monitoring feedback loop. It represents streams of events captured from the internal context of the monitoring feedback loop. It also verifies the consistency of the monitoring system after an adaptation has occurred. For example, it checks whether the sensors substituted for a failed sensor are in a functional state, to guarantee constant QoS across the whole adaptation process.
The articulation of these components through feedback loops is straightforward: the asynchronous rule engine triggers either adaptive behavior or controller resynthesis if a control objective, and thus a control rule, changes. The newly synthesized controller is then deployed to the appropriate gateways at runtime. The system thus self-adapts without any interruption of execution.
As described in Figure 2, the controller synthesis self-adaptation process relies on three ontologies, namely the SLA ontology, the failure ontology and the expert knowledge ontology. In this figure, the objectives analyzer, objectives controller and adaptation analyzer denote elements of the objectives MAPE-K feedback loop and the adaptation MAPE-K feedback loop. These elements are embedded in the global MAPE-K loops described in Figure 1, and can be seen as standard adaptation-enabling elements.
4.2 Multi-Level SLA Adaptation
The complexity and the distributed nature of IoT systems mandate a hierarchical separation of SLAs to accurately represent functional and non-functional guarantees at different granularity levels. Since our use-case describes a human-centric IoT application, an SLA is required to capture the level of service expected by patients and medical staff. We describe below how the SLAs are divided throughout the IoT-based system.
System-level SLAs designate contracts between end-users and service providers at the system level. Indeed, end-users do not need finer granularity to specify their requirements at the sensor and actuator levels. Instead, they express system-level objectives and specify global functional and non-functional requirements. System-level SLAs are then refined into fine-grained SLAs at the device level for further analysis.
Device-level SLAs represent guarantees provided by manufacturers about their devices' functional and non-functional properties. These SLAs are closely related to physical and operational device characteristics.
It is worth noting the difference between smart devices and simpler devices when considering device-level SLAs. Indeed, smart devices exhibit capabilities to interact with their environment and adjust their configuration parameters. As a result, smart devices can be reconfigured when their SLAs change over time. In contrast, the SLAs of simple devices remain static and can only be slightly modified. Simple devices are thus black boxes designed by manufacturers to have predefined functional and non-functional properties that cannot be reconfigured over time. The finer granularity of device-level SLAs enables optimization and reasoning at the system scale by providing precise system descriptions.
Human-level SLAs specify personal characteristics differentiating users (i.e., patients) in human-centric IoT systems. The presence of humans in the control loop justifies an accurate description of the system properties (i.e., biological properties) being controlled or monitored. These properties can vary greatly from one individual to another, depending on the pathologies that can impact physiological parameters.
Expressions in each of these SLAs can be mapped to QoS factors, as described in the ontology in Figure 3. For instance, the resource-awareness QoS factor typically belongs to a device-level SLA because of resource variability between devices (e.g., continuously powered sensors do not need low-battery SLO obligations, while battery operated sensors do). The resilience QoS factor typically belongs to a system-level SLA, where resilience is specified at the system level. For example, if a sensor fails, it is substituted by other sensors in order to compensate for the loss of information caused by the sensor malfunction. The health-awareness QoS factor, however, belongs to a human-level SLA. In our use-case, cardiac activity is monitored in order to detect and prevent cardiac malfunctions. However, cardiac malfunction is associated with different diseases producing different effects on heart activity. In order to accurately detect a specific heart malfunction, the corresponding QoS factor must be adapted for each monitored patient, leading to the establishment of a human-level SLA.
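As a rough illustration of how such layered SLAs and their mapping to QoS factors might be organized, consider the following TypeScript sketch. All type and field names are ours for illustration; the paper's actual SLA and QoS ontology is given in Figure 3 and is not defined as code.

```typescript
// Illustrative only: hypothetical types for the layered SLAs and QoS factors
// described above; not taken from the paper's implementation.
type QoSFactor = "resource-awareness" | "resilience" | "health-awareness";

interface ServiceLevelObjective {
  factor: QoSFactor;
  metric: string;      // e.g. "batteryLevel" or "heartRate"
  constraint: string;  // e.g. "< 20%" or "within patient-specific bounds"
}

interface DeviceLevelSLA {
  deviceType: string;      // e.g. "ECGSensorType"
  reconfigurable: boolean; // smart devices can be reconfigured, simple devices cannot
  objectives: ServiceLevelObjective[];
}

interface HumanLevelSLA {
  patientId: string;
  objectives: ServiceLevelObjective[]; // patient-specific monitoring thresholds
}

interface SystemLevelSLA {
  objectives: ServiceLevelObjective[]; // e.g. resilience through sensor substitution
  deviceSLAs: DeviceLevelSLA[];        // refinement of the system-level SLA
  humanSLAs: HumanLevelSLA[];
}
```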

From the self-adaptation perspective, we use the SLA as a system input. The rules and requirements specified in SLAs, and especially the set of rules provided as service level objectives (SLOs), are used to generate controllers, guaranteeing that the system behaves according to the SLA and adapts itself with respect to environmental changes.
In order to express system-wide requirements to be included in SLAs, we propose a rule-based language to specify control objectives. In the following sub-section, we present the rule grammar and its semantics. We then explain the generation of discrete synchronous controllers and their coordination with asynchronous controllers.
4.3 Modeling Rule-Based Control Objectives
Rules specifying control objectives follow the Event-Condition-Action (ECA) pattern. They are defined as a set of asynchronous rules, each of which is activated in response to the evaluation of a condition (or a set of conditions) by executing the corresponding action. Rules describe adaptation strategies based on events generated by sensors and captured by monitors, related to QoS factors and predefined SLOs. Because rules describe adaptation strategies with respect to device-related monitored variables, they can be considered as control inputs and objectives. A rule has the following syntax:
```
Rule name
ON event
IF conjunction of conditions is found to be true
DO actions are executed
End
```
The basic structure is a list of conditions and actions. A condition denotes a constraint or a filter, acting on data and events in a specific domain of interest. Data and events are generated by sensors, actuators or object instances (i.e., complex data structures). Once the condition holds, its corresponding action is executed, taking the matching data or events as parameters.
Action refers to the execution of device services, taking as parameters events and data specified in the control strategy. The example below illustrates a rule. Its syntax follows the rule language grammar as illustrated in Figure 4.
```
SENSOR ecgSensor TYPE ECGSensorType
SENSOR posSensor TYPE PositionSensorType
Rule "Sensor-Low Battery"
ON batteryLevelLow
IF
$e: ECGSensorType(batteryLevel < 20%)
$s: posSensor(batteryLevel < 10%)
DO
$e.setFailSoftMode();
End
```
The first line declares an instance of a device, called ecgSensor, of type ECGSensorType. Similarly to objects and classes in the object-oriented paradigm, a device type is a common data structure for similar devices. Each device is described by a set of attribute-value pairs. Attributes may hold information about devices such as characteristics, contextual information, configuration parameters, and their sensing data from the physical environment.
As illustrated in the example, the rule starts with the keyword Rule followed by a string denoting the rule’s name. The left-hand side of the rule is a conjunction of logical predicates, each of which is written on a separate line. A predicate can be applied to an individual device instance or to all instances of a given device type, defined with the keyword TYPE in the rule-based language (see Figure 4). A predicate works as a function with a condition (also called a filter condition) as its input parameter. The filter condition is a logical expression over device attributes. The logical operator AND between predicates is left implicit. In the aforementioned example, there are two predicates:
- **The ECGSensorType(filter_condition)** predicate applies the filter_condition to all device instances having the ECG sensor as their type. The filter condition is simply a logical expression over ECG sensor attributes. All ECG sensors whose batteryLevel attribute is less than 20% are selected, making the rule’s condition evaluate to a non-empty set of device instances (i.e., true). The corresponding rule’s action is thus executed: here, the service setFailSoftMode() is executed to put the sensor into the failsoft mode. The predicate is applied to all instances of a given device type.
- The posSensor(filter_condition) predicate applies the filter_condition, batteryLevel < 10%, to the posSensor instance. The predicate is applied to an individual device instance.
In this rule, the $ prefix is called the bind operator, which binds a variable either to the instances matched by a predicate on a device type (i.e., $e: ECGSensorType() binds the e variable to the matching ECGSensorType instances) or to a single device instance (i.e., $s: posSensor() binds the s variable to the posSensor instance).
In order to trigger the rule, all available device instances of ECGSensorType and the posSensor instance must be evaluated simultaneously. Between the ECGSensorType() and posSensor() predicates, an implicit AND operator creates the conjunction of predicates. Therefore, the rule’s condition is activated when there is at least one ECGSensorType instance with a batteryLevel attribute of less than 20% AND the posSensor instance’s batteryLevel attribute is less than 10%. The rule’s action is then triggered to adapt the ECG sensors in response to the low battery level.
In sum, predicates on device types are particularly useful to specify adaptation strategies at the system level, while ensuring that adaptation is only triggered in relevant situations. Predicates on device instances allow fine-grained control of adaptation at the device level.
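To make these matching semantics concrete, the following TypeScript sketch evaluates the "Sensor-Low Battery" rule. It is only an illustration of type-level versus instance-level matching; the actual engine is Drools-based (Section 5), and the Device shape and helper functions below are assumptions of ours.

```typescript
// Minimal sketch of the ECA matching semantics described above. The device
// shape, device ids and the 20%/10% thresholds mirror the example rule;
// everything else is illustrative.
interface Device {
  id: string;
  type: string;          // e.g. "ECGSensorType"
  batteryLevel: number;  // percentage
  setFailSoftMode(): void;
}

// Type-level predicate: matches every instance of the given device type.
function matchType(devices: Device[], type: string,
                   filter: (d: Device) => boolean): Device[] {
  return devices.filter(d => d.type === type && filter(d));
}

// Instance-level predicate: matches one named device instance.
function matchInstance(devices: Device[], id: string,
                       filter: (d: Device) => boolean): Device | undefined {
  const d = devices.find(dev => dev.id === id);
  return d && filter(d) ? d : undefined;
}

// "Sensor-Low Battery": implicit AND between the two predicates.
function sensorLowBattery(devices: Device[]): void {
  const ecgs = matchType(devices, "ECGSensorType", d => d.batteryLevel < 20);
  const pos = matchInstance(devices, "posSensor", d => d.batteryLevel < 10);
  if (ecgs.length > 0 && pos !== undefined) {
    ecgs.forEach(e => e.setFailSoftMode()); // the rule's action
  }
}
```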
Global variables
In order to interact with contextual data that is not stored in device attributes, we introduce the keyword GLOBAL to declare a variable and bind it to the environment surrounding the devices. Global variables can refer to external services, cached data in memory or parameter values for setting up the rule engine at runtime. For example, the following statement declares the global variable BobHome of type Home, which is declared as an Object.
GLOBAL BobHome Home;
In our context, global variables can be used to save configured device states so that, if a controller resynthesis occurs, the newly synthesized controller can be deployed and start its execution with the right sensor state.
Rule-based language
A control objective describes the objectives feedback loop, where the reference control objectives are provided after the IF statement, and the QUALITY statement is used to feed the monitoring feedback loop and the adaptation feedback loop.
A control rule describes the interaction between the adaptation feedback loop and the monitoring feedback loop. The statement after the IF describes the monitor that must be implemented in the monitoring feedback loop, and the quality is the reference context input of this feedback loop. The statement after the DO describes the adaptation mechanism that occurs in the adaptation feedback loop, and more specifically in the adaptation feedback loop controller. In the context of discrete controller synthesis, the rule-based language defined in this section is used to provide the control objectives to the controller synthesizer. This rule-based definition of the objectives allows for greater expressiveness, letting external users easily specify their desired control objectives.
The main advantage of using rules in the context of the IoT does not come from a small group of rules, but from a large, ever-changing set of rules defining the behavior of a complex system, which would require considerable development effort to keep operational if imperative programming languages were used.
The rule-based language also enables the formulation of control objectives and strategies with respect to desired service-level objectives. Objectives are later used to synthesize synchronous controllers and deploy them in the gateways in order to control a specific sub-set of devices.
4.4 Synchronous Sub-Systems Modeling
The prime modeling framework for discrete controller synthesis relies on labeled transition systems (LTS) to model the sub-systems to be controlled. Discrete controller synthesis typically uses synchronous programming languages embedded with control contract specifications to build correct-by-construction controllers. The strength of discrete controller synthesis stems from the produced code being correct by construction. In fact, code generation will fail if inconsistencies are detected, either in the control rules or in the models of the controlled sub-systems. Discrete controller synthesis thus makes it possible to avoid further formal analysis, and thus saves development time.
Formally, a labeled transition system is defined as a tuple \((S, L, \rightarrow, s_0)\), with \(S\) a set of states, \(L\) a set of transition labels, \(\rightarrow \subseteq S \times L \times S\) a transition relation between states, and \(s_0\) an initial state. Transition labels pair events with actions, i.e., \(L \subseteq \mathit{events} \times \mathit{actions}\).
In our context, we use two LTS to represent a sensor: a functional LTS, describing the relationship between the different functional states, and a non-functional LTS, which describes the relationship between the object's non-functional states. The two LTS are synchronized using the following syntax: the statement \(e \mid a\) can be interpreted as the control of event \(e\) by service \(a\) during the firing of the transition.
As specified earlier, variables in the context of discrete controller synthesis are divided into two distinct sets: the controllable variables and the non-controllable variables. In our context, we chose to model controllability using the "$" character. Particular attention must be paid to this variable separation when modeling the various devices included in the adaptation framework, because the correctness of the synthesized controller directly depends on what is defined as controllable and non-controllable.
Figure 5 introduces the LTS model of the cardiac and respiratory sensor used in our case study. The model captures both the functional and non-functional evolution of the sensor. In this example, local and remote service calls, modeled in the form "$e.service_call", are considered to be controllable. However, model inputs such as the battery level (abbreviated as 'batt' in Figure 5), or the unattached flag (abbreviated as 'unatt', which is true when the sensor auto-detects that it is unattached from the monitored patient), are defined as non-controllable. This is because these variables are related to the physical domain, and the physical world typically behaves unpredictably. As a rule of thumb, all external and physical monitored variables should be considered uncontrollable because of the unpredictable nature of the physical world.
On the left side of the model, we introduce the model inputs, which are used in contract-based discrete controller synthesis as variables to be monitored. In our context, every model input must thus be assigned to a dedicated monitor in the monitoring feedback loop.
The model outputs are presented on the right side (see Figure 5). For conciseness, outputs are abbreviated as functional and non-functional states, meaning that all states of the model are exposed to the controller synthesizer as a set of mutually exclusive Boolean flags. An output is set to true when the sensor is in the corresponding functional or non-functional state. Such outputs are used in control contracts, which are specified as first-order logic rules in most synchronous languages with discrete controller synthesis capabilities.
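The following TypeScript sketch illustrates, in a strongly simplified form, the structure just described: non-controllable inputs, controllable variables chosen by the controller, and mutually exclusive Boolean state outputs. The states and transition conditions are simplified from the textual description of Figure 5 and are not the paper's exact model.

```typescript
// Illustrative encoding of a sensor LTS; state names, thresholds and
// transition conditions are simplified assumptions, not the paper's model.
type FunctionalState = "Streaming" | "FailSoft" | "Stopped";

interface Inputs {        // non-controllable: monitored from the physical world
  batt: number;           // battery level (percentage)
  unatt: boolean;         // sensor detected as unattached
}

interface Controls {      // controllable: remote service calls chosen by the controller
  setFailSoftMode: boolean;
  stop: boolean;
}

interface Outputs {       // mutually exclusive Boolean state flags used in contracts
  streaming: boolean;
  failsoft: boolean;
  stopped: boolean;
}

// One synchronous step: given the current state, the inputs and the
// controller's choice of controllable variables, return the next state and
// the Boolean outputs exposed to the controller synthesizer.
function step(state: FunctionalState, i: Inputs, c: Controls):
    { next: FunctionalState; out: Outputs } {
  let next = state;
  if (state === "Streaming" && (c.setFailSoftMode || i.batt < 20)) next = "FailSoft";
  if (i.batt === 0 || c.stop) next = "Stopped";
  return {
    next,
    out: {
      streaming: next === "Streaming",
      failsoft: next === "FailSoft",
      stopped: next === "Stopped",
    },
  };
}
```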

as sensors must be reattached either by the monitored patient or an external medical worker.
A low battery is the third symptom of failure, and two situations can result from such an event. Either the sensor is equipped with a battery-saving failsoft mode (where the sensor typically streams less precise values, or streams at a lower rate), or the battery is drained until total battery failure, where the sensor is stopped internally in order to protect the battery. This last case mandates external intervention, so it must be delayed as long as possible. Because of the loss of data quality it causes, the battery failsoft mode must not occur too early during the battery discharge process, and a compromise must be found between lower data quality and battery life. Such a compromise can be determined using externally provided expert knowledge.
Finally, the last symptom of failure is the interruption of the communication between the sensor and the gateway. This failure can have three causes. The first possible cause is that the sensor is out of range. Such a failure can easily occur, especially if fixed gateways are used. Indeed, because low-power wireless communication protocols feature limited range (usually up to a few tens of meters), if a fixed gateway is used, the monitored patient can easily move out of the communication range. The next source of failure is a malfunction of the dedicated radio component. As the radio communication of connected objects is typically implemented on specific radio integrated circuits, a malfunction of such a chip can cause a loss of connectivity. The last source of failure is a gateway malfunction, which can also cause a loss of connectivity between the sensors and the gateway.

Using this ontology, models can be developed accounting for all the identified failure symptoms, and adaptation strategies can be derived for all the failure causes. This enables a comprehensive adaptation process with the guarantee that it accounts for all identified failure sources, thus providing robust global system behavior. The implementation of all the elements of our hybrid self-adaptation framework for the IoT (from the rule engine to the discrete controller synthesis) is detailed in the following section.
5. IMPLEMENTATION
In order to validate our hybrid self-adaptation framework and self-adaptation strategies, we developed a prototype using existing languages, controller synthesizers and rule engines. The asynchronous rule engine is implemented using the asynchronous capabilities of the Drools rule engine [22]. This rule engine was developed as a business rule engine with a web-based control interface, along with an Eclipse plugin for further development. Rule evaluation is based on the Rete algorithm, and the engine is distributed under an open-source license. We have developed a compiler for our domain-specific language using Xtext, which provides us with a fully-featured and statically-typed language infrastructure. The compiler produces rules as expected by the Drools rule engine. Drools support thus enables our self-adaptation rule-based language to easily manage and monitor a massive and potentially changing set of rules. Such a characteristic is suitable for scalable IoT purposes, the only limitation being the resources available for rule evaluation. Since this tool runs on external servers, the available resources are virtually unlimited when compared with the devices' resources.
The discrete controller synthesis is implemented using the Heptagon/BZR synchronous language [6]. In this language, objects are modeled using a textual representation of labeled transition systems. The discrete controller synthesis is performed with respect to control contracts specified using a simple grammar. Three contract keywords are defined: with, assume and enforce. The keyword with specifies the set of controllable variables that the controller can use for self-adaptive behavior, the keyword assume describes a set of initial assumptions that assist the controller synthesizer and avoid certain locking behaviors, and the keyword enforce provides the controller synthesizer with a set of rules that the global synchronous system must observe. Such rules are provided as first-order logic statements, obtained by a first-order logic translation of the business rules specified in Drools. It is worth noting that it is not necessary to translate all rules specified in Drools, but only the lower-level rules that are relevant to a specific monitoring context. These rules are enabled in gateways to support adaptive behaviors.
6. CONCLUSION
In this paper, we present a hybrid controller synthesis framework for critical IoT systems. Our system exhibits self-adaptive behavior with respect to changing control objectives, an evolving monitoring infrastructure and a dynamic internal and external context, by adopting separation of concerns and defining three distinct but communicating control loops: the objectives feedback control loop, the monitoring feedback control loop and the adaptation feedback control loop. This framework is equipped with an asynchronous rule engine and synchronous discrete controller synthesis capabilities in order to provide a hybrid self-adaptation framework for the IoT. Discrete controller synthesis enables automatic controller generation from formally defined synchronous programs, providing functional and non-functional guarantees for critical IoT-based systems.
7. REFERENCES
HAL Id: hal-03921387
https://inria.hal.science/hal-03921387
Submitted on 3 Jan 2023
A Language-Parametric Approach to Exploratory Programming Environments
L. Thomas van Binsbergen
ltvanbinsbergen@acm.org
University of Amsterdam
Amsterdam, The Netherlands
Damian Frölich
dfrlich@acm.org
University of Amsterdam
Amsterdam, The Netherlands
Mauricio Verano Merino
m.verano.merino@vu.nl
Vrije Universiteit Amsterdam
Amsterdam, The Netherlands
Pierre Jeanjean
pierre.jeanjean@inria.fr
Univ. Rennes, IRISA, Inria
Rennes, France
Tijs van der Storm
storm@cwi.nl
CWI, Amsterdam
University of Groningen, Groningen
The Netherlands
Joey Lai
joeylai96@hotmail.com
University of Amsterdam
Amsterdam, The Netherlands
Benoit Combemale
benoit.combemale@irisa.fr
Univ. Rennes, IRISA, Inria
Rennes, France
Olivier Barais
olivier.barais@irisa.fr
Univ. Rennes, IRISA, Inria
Rennes, France
Abstract
Exploratory programming is a software development style in which code is a medium for prototyping ideas and solutions, and in which even the end-goal can evolve over time. Exploratory programming is valuable in various contexts such as programming education, data science, and end-user programming. However, there is a lack of appropriate tooling and language design principles to support exploratory programming. This paper presents a host language- and object language-independent protocol for exploratory programming akin to the Language Server Protocol. The protocol serves as a basis to develop novel (or extend existing) programming environments for exploratory programming such as computational notebooks and command-line REPLs. An architecture is presented on top of which prototype environments can be developed with relative ease, because existing (language) components can be reused. Our prototypes demonstrate that the proposed protocol is sufficiently expressive to support exploratory programming scenarios as encountered in literature within the software engineering, human-computer interaction and data science domains.
CCS Concepts:
- Software and its engineering → Development frameworks and environments; Interpreters.
Keywords: Exploratory programming, protocol, IDEs, REPLs, notebooks, interpreters
1 Introduction
In traditional software development processes, a predefined set of requirements specifies what features the software must support under which circumstances. Exploratory programming1, however, is an open-ended activity with no upfront specification. Exploratory programming is a style in which code is used as a medium for prototyping, and in which the goal and solution are to be discovered together through experimentation [3, 47, 56]. An essential characteristic of exploratory programming is experimentation within a design space, trying out different design alternatives by extending or tweaking programs. Programming environments can support this programming style by allowing programmers to create, edit, and evaluate (partial) programs.
In conventional IDEs, experimentation is often limited by the edit-compile-run cycle, which does not offer the desired experience in terms of feedback and responsiveness.
1The term ‘exploratory programming’ was coined by Beau Shiel in 1986 [52] and is sometimes also referred to as ‘opportunistic programming’.
Command-line REPLs (Read-Eval-Print Loop), such as the Python interpreter and JShell REPL for Java, support incremental programming, in which a program is developed piecewise by entering and executing code fragments rather than full-fledged programs [60]. In REPLs, the effects of code fragments are immediately reported to the user, and, to varying extents, the current program state can be inspected, queried or accessed in various ways. As such, REPLs provide a better interface for experimentation than conventional IDEs. Computational notebooks, such as the MatLab environment [18] and Jupyter notebooks [26], go beyond REPLs by supporting re-executing, modifying, and/or copying previously executed code fragments stored in code cells, interleaved with documentation. Through documentation cells, literate programming [27] is combined with incremental programming, making it easier to communicate ideas with collaborators. However, computational notebooks are still limited from the perspective of exploratory programming (e.g., lack of feedback, reusability, and information about notebook’s execution state) [10, 16, 20, 22, 23, 46, 50].
Motivating Example. Figure 1 shows a prototype of an exploratory programming environment for QL, a DSL for specifying questionnaires. The left-hand side shows a code editor and a running application view, representing the currently active version of the code and the current run-time state of the rendered questionnaire, respectively. The right-hand side of the figure shows a tree-structured trace of all interaction with either the code or the running questionnaire, in the form of code cells containing commands.
The figure displays the end-result of the following steps:
- The programmer defines the first question “What is your age?”. Source edits are reflected in the REPL history as semantic deltas [61] (cell 3a58).
- She then tries it out by entering the value 42 in the run-time view, resulting in the command `age = 42`.
- To experiment with computed questions, she forks off a branch by pressing the right-arrowed button. In the sub-tree (headed by cell 7ead), she enters a question that computes `2 * age`.
- Satisfied with the effect, the programmer moves back to the main branch using the lightning button, jumping back to 2a88, the last command of the main line.
- The height question is then added, again resulting in a semantic delta in the history.
- Next, the programmer wants to experiment with conditional questions. Another temporary branch is created, with another computed question, which is conditional on `height > 200`.
- Finally, she returns to the main branch, which ended at the entry of the `height` question. This is the state that is shown on the left of the figure.
The QL prototype demonstrates various interesting features related to exploratory programming. Firstly, multiple branches of code execution can be explored, and previous states can be revisited, without losing work. In this way, multiple variants of a program can be developed simultaneously such that their effects can be compared. Secondly, the prototype shows that certain actions in the interface (e.g., entering an age) result in executed code that can (therefore) also be undone. Our goal is to reduce the effort required for engineering programming environments with features like these for (new) software languages, DSLs in particular.
Contributions. This work reports on the next step in a research line aimed at designing and implementing features that simplify exploratory programming in (general-purpose and domain-specific) programming environments. Previous work has demonstrated how software languages can be (re)designed to enable exploratory programming with ‘execution graphs’ encoding execution histories as a central data-structure [11, 33, 60]. This paper contributes by presenting a protocol for interacting with execution graphs alongside an architecture designed for prototyping exploratory programming features. We evaluate the protocol by discussing the extent to which it supports exploratory programming scenarios encountered in the literature. The design, efficient implementation, and evaluation of features for exploratory programming are future work. The main contributions are:
1. We extend the foundations provided by earlier work (Section 2) to support divergent programs and to improve the handling of program output (Section 3).
2. We present the Exploratory Programming Protocol (EPP, Section 4) and an architecture (Section 7) with a potential for reducing the effort of engineering prototype environments for exploratory programming.
3. We discuss the expressivity of the protocol (Section 6) in relation to various exploratory programming scenarios encountered in the literature (Section 5).
We discuss strengths and limitations in Section 8, related work in Section 9 before concluding in Section 10.
2 Background
The foundations of the protocol presented in this paper are provided by the principled approach to REPLs of [60]. In this section we describe the main concepts using a basic calculator language as an example object language defined in Rascal [25]. In the next section we extend the approach based on observations made in [11]. The (abstract) syntax of the calculator language is as follows:
```text
data Expr
  = add(Expr lhs, Expr rhs)
  | mul(Expr lhs, Expr rhs)
  | var(str x)
  | lit(int n);

data Cmd
  = expr(Expr e)
  | assign(str x, Expr e);
```
In the approach, a language is defined by its abstract syntax, the syntax of configurations, an initial configuration,
Figure 1. Exploring QL with branching time and fine-grained versioning. Top left: the source input; bottom left: the running application; right: the exploration trace.
```plaintext
form Form {
"What is your age?" age: integer
"What is your height?" height: integer
}
```
Figure 2. Evaluation functions for the calculator language.
```plaintext
Config exec(expr(Expr e), Config c)
= <eval(e, c.store), c.store>;
Config exec(assign(str x, Expr e), Config c)
= <v, c.store + (x : v)> when int v = eval(e, c.store);
int eval(var(str x), Store s) = s[x];
int eval(lit(int n), Store s) = n;
int eval(add(Expr lhs, Expr rhs), Store s)
= eval(lhs, s) + eval(rhs, s);
int eval(mul(Expr lhs, Expr rhs), Store s)
= eval(lhs, s) * eval(rhs, s);
```
Figure 3. An execution graph for the calculator language.
```
expr(add(lit(5), lit(2)))
expr(mul(var("x"), lit(6)))
assign("x", add(lit(5), lit(2)))
```
and a definitional interpreter. Configurations contain all results produced by executing a program and all contextual information needed to execute a program. For example, in the calculator language, configurations contain a result value and a store mapping variables (strings) to (integer) values.
```plaintext
alias Store = map[str var, int val];
alias Config = tuple[int result, Store store];
Config initial = <0, ()>;
```
Evaluation functions are given in Figure 2.
A definitional interpreter is a function that simultaneously defines and applies the operational semantics of a language. In the example, the function `exec` is a candidate with the type `Config (Cmd, Config)`, i.e. it yields a configuration given a command (program) and a configuration. A definitional interpreter can also be seen to assign to every program of the language a function from `Config` to `Config`, referred to by Van Binsbergen et al. [60] as the effect of the program. The effects of programs can be chained by applying the definitional interpreter repeatedly to programs and using the output of one program’s effect as the input of the other’s. Such chaining describes the incremental style of programming of REPLs. The soundness of the approach depends on the assumption that all relevant contextual information is recorded in the configurations. The practical implications of this assumption are discussed in Section 8. By defining the definitional interpreter, the language designer determines how the execution of a program influences the execution of subsequent programs.
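To illustrate the notion of a program's effect and the chaining of effects, the following TypeScript sketch re-expresses the calculator's definitional interpreter. It mirrors the Rascal definitions of Figure 2 but is not part of the paper's artifact; in particular, the handling of unknown variables is simplified to a default value.

```typescript
// Sketch of a definitional interpreter for the calculator and of effects as
// functions from configurations to configurations.
type Store = Map<string, number>;
interface Config { result: number; store: Store; }

type Cmd =
  | { kind: "expr"; e: Expr }
  | { kind: "assign"; x: string; e: Expr };
type Expr =
  | { kind: "lit"; n: number }
  | { kind: "var"; x: string }
  | { kind: "add"; lhs: Expr; rhs: Expr }
  | { kind: "mul"; lhs: Expr; rhs: Expr };

function evalE(e: Expr, s: Store): number {
  switch (e.kind) {
    case "lit": return e.n;
    case "var": return s.get(e.x) ?? 0; // the Rascal version fails here instead
    case "add": return evalE(e.lhs, s) + evalE(e.rhs, s);
    case "mul": return evalE(e.lhs, s) * evalE(e.rhs, s);
  }
}

function exec(c: Cmd, cfg: Config): Config {
  if (c.kind === "expr") return { result: evalE(c.e, cfg.store), store: cfg.store };
  const v = evalE(c.e, cfg.store);
  return { result: v, store: new Map(cfg.store).set(c.x, v) };
}

// The effect of a program p is the function cfg => exec(p, cfg).
const effect = (p: Cmd) => (cfg: Config) => exec(p, cfg);
```

Chaining, as described above, then amounts to applying effects in sequence, e.g. `effect(p2)(effect(p1)(initial))`.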
Van Binsbergen et al. introduce the ‘exploring interpreter’ algorithm, a bookkeeping device that tracks program execution history in a graph structure, referred to as the execution graph, with configurations labeling nodes and programs labeling edges. Starting from the initial configuration, applications of the definitional interpreter (possibly) result in new nodes and edges added to the graph such that for every edge it holds that the source and target configurations of the edge capture the effect of the program labeling the edge.
An example execution graph is shown in Figure 3, showing the effect of three code snippets in the calculator language. (That divergent programs cannot be represented in the execution graph is addressed in Section 3.) The exploring interpreter algorithm supports three actions:
execute for executing programs and extending the execution graph, revert for changing the configuration used as input for the next execute action, and display for producing a structural representation of the current graph. The algorithm can be implemented generically, taking as type-level arguments the type of programs and configurations, and as (value-level) arguments a definitional interpreter and initial configuration of the correct types [11]. In this sense, the approach is (object) language-parametric.
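A minimal, language-parametric sketch of this algorithm in TypeScript follows, written only from the description above; the cited implementation [11] is a separate artifact and is not in this language.

```typescript
// Language-parametric exploring interpreter: parameterized by the types of
// programs (P) and configurations (C), a definitional interpreter and an
// initial configuration. Sketch only, not the implementation from [11].
interface Edge<P, C> { from: C; program: P; to: C; }

class ExploringInterpreter<P, C> {
  private edges: Edge<P, C>[] = [];
  private current: C;

  constructor(private interp: (p: P, c: C) => C, initial: C) {
    this.current = initial;
  }

  // execute: run a program from the current configuration, extend the graph.
  execute(p: P): C {
    const next = this.interp(p, this.current);
    this.edges.push({ from: this.current, program: p, to: next });
    this.current = next;
    return next;
  }

  // revert: change the configuration used as input for the next execute.
  revert(c: C): void { this.current = c; }

  // display: produce a structural representation of the current graph.
  display(): { current: C; edges: Edge<P, C>[] } {
    return { current: this.current, edges: this.edges };
  }
}
```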
Finally, the class of sequential languages is introduced, for which it holds that an operator \( \otimes \) can be identified such that for any two syntactically valid programs \( p_1 \) and \( p_2 \), \( p_1 \otimes p_2 \) is a syntactically valid program whose effect is the composition of the effects of \( p_1 \) and \( p_2 \):
\[
\llbracket p_1 \otimes p_2 \rrbracket = \llbracket p_2 \rrbracket \circ \llbracket p_1 \rrbracket \tag{1}
\]
This definition states that a language is sequential if it has an operator for composing programs equivalent to chaining programs. A language with a definitional interpreter is made sequential by adding a top-level operator with its semantics given by Equation 1. A transitivity property follows stating that for every path in the execution graph of a sequential language, it holds that the source and target configurations capture the effects of the program formed by composing the labels of the edges of the path using \( \otimes \) (in order).
In [11], Frölich and Van Binsbergen describe an implementation of the exploring interpreter algorithm and use it to evaluate the effects of certain implementation choices on the exploratory process. The authors conclude that the algorithm should support both a destructive and non-destructive variant of revert and that a tree view is easier to reason about since cycles are absent and there is a unique path from the root to every node – every node has a unique history.
The authors observed a distinction can be made between configuration components only used as ‘output’ and those influencing the execution of subsequent programs. The output components can be recorded on the labels of edges alongside the program. An example is the result field of the calculator configurations. In the next section we extend the approach to account for these suggestions.
3 Exploring Interpreter Extensions
This section describes extensions to the approach explained in Section 2. In our extended approach, a language is (optionally) defined to have output components separate from the configuration. Given a program and an input configuration, a definitional interpreter produces output and either diverges or yields an updated configuration. The definitional interpreter `exec` of the updated example has the type `tuple[Output, Maybe[Config]] (Cmd, Config)`, with the following definitions:

    alias Config = tuple[Store store];  Config initial = <()>;
    alias Output = list[int];           Output no_output = [];
    tuple[Output, Maybe[Config]] exec(expr(Expr e), Config c)
      = ...; // produces <[], nothing()> if an unknown variable is used

Function exec yields no configuration when eval fails due to a reference to an unknown variable. Inspired by write-only entities in MSOS [38], output is a monoidal structure (here a list of integers) of which values can be concatenated. The transitivity property expressed at the end of Section 2 can be updated by using the monoidal operator to concatenate the output appearing as labels on edges. If a divergent program is chained with another program, the input configuration of the first is also used as the input to the second.
In our extended approach, an alternative version of the exploring algorithm is used and has been implemented as a modification to the generic algorithm of [11]. This algorithm labels the nodes of the execution graph with references rather than configurations and a mapping from references to configurations is maintained separately. The algorithm ensures the graph satisfies tree-properties by generating a fresh reference for every (successful) execute action. Edges are labeled with pairs of program and output. The revert action is destructive and a separate jump action is introduced as a non-destructive variant. Both receive references as argument rather than configurations. The revert action only accepts ancestors of the current node and removes only those nodes and edges that are on the path from the given ancestor \( r \) to the current node, retaining \( r \) and those nodes and edges that occur in any other paths from \( r \). To preserve the benefits of sharing, our implementation recognizes whether a previously visited runtime state is reached by comparing configurations and maintaining sets of references pointing to the same configuration.
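The following TypeScript sketch approximates the tree-based variant just described: reference-labeled nodes, a separate reference-to-configuration map, edges labeled with program and output, a non-destructive jump and a destructive revert. It is an illustration under our own simplifications (for instance, the detection of shared configurations is omitted), not the authors' implementation.

```typescript
// Sketch of the tree-shaped exploring interpreter with references.
type Ref = number;

interface TreeEdge<P, O> { from: Ref; to: Ref; program: P; output: O; }

class ExploringTree<P, C, O> {
  private configs = new Map<Ref, C>();   // reference -> configuration, kept separately
  private edges: TreeEdge<P, O>[] = [];  // edges carry (program, output) pairs
  private nextRef: Ref = 1;
  current: Ref = 0;

  constructor(
    // Returns output plus either a new configuration, or undefined when the
    // program diverges / fails to yield a configuration.
    private interp: (p: P, c: C) => { output: O; config?: C },
    initial: C,
  ) { this.configs.set(0, initial); }

  // execute: run a program from the current node; on success, create a node
  // labeled with a fresh reference so the graph keeps its tree shape.
  execute(p: P): { output: O; ref: Ref } {
    const { output, config } = this.interp(p, this.configs.get(this.current)!);
    if (config === undefined) return { output, ref: this.current }; // no new node
    const ref = this.nextRef++;
    this.configs.set(ref, config);
    this.edges.push({ from: this.current, to: ref, program: p, output });
    this.current = ref;
    return { output, ref };
  }

  // Non-destructive variant of revert: only moves the current node.
  jump(r: Ref): void { this.current = r; }

  // Destructive revert: removes the path from the given ancestor r to the
  // current node, keeping r itself and any node or edge still needed by
  // another branch. Assumes r is an ancestor of the current node.
  revert(r: Ref): void {
    let node = this.current;
    while (node !== r) {
      const parentEdge = this.edges.find(e => e.to === node)!;
      const hasOtherDescendants = this.edges.some(e => e.from === node);
      if (!hasOtherDescendants) {
        this.edges = this.edges.filter(e => e !== parentEdge);
        this.configs.delete(node);
      }
      node = parentEdge.from;
    }
    this.current = r;
  }
}
```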
4 Exploratory Programming Protocol
This section introduces the Exploratory Programming Protocol (EPP), described as a sequence of TypeScript interface definitions, akin to the Language Server Protocol (LSP) [35].
The core of the EPP captures the actions of the exploring interpreter algorithm as RPC-methods and has additional methods to inspect and manipulate the execution tree. The full list of methods in the protocol is given in Table 1. The execute, revert, and jump methods correspond to the actions of the exploring interpreter algorithm. The getExecutionTree, getTrace, and getPath functions are variants of the display action to obtain (parts of) the execution tree in a structured format. The meta method gives access to meta-commands, providing language-specific services implemented in the back-end that do not involve updates to the execution tree. The remaining methods are auxiliary methods to extract information from the execution tree, such as the content of a specific configuration or a list containing all leaves.
4.1 Specification Using JSON RPC 2.0
The protocol is an instance of JSON RPC 2.0.³ The full specification is in the supplementary material of this paper.
---
³ https://www.jsonrpc.org/specification
Table 1. The methods in the exploratory programming protocol.
<table>
<thead>
<tr>
<th>Method name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr><td>execute</td><td>See <code>execute</code> in Section 3 and the protocol specification in §4.</td></tr>
<tr><td>revert</td><td>See <code>revert</code> in Section 3.</td></tr>
<tr><td>jump</td><td>See <code>jump</code> in Section 3.</td></tr>
<tr><td>getCurrentReference</td><td>Gets the reference labeling the current node.</td></tr>
<tr><td>getAllReferences</td><td>Returns all references used as a label.</td></tr>
<tr><td>getRoot</td><td>Returns the reference labeling the root node.</td></tr>
<tr><td>deref</td><td>Gets the configuration assigned to the given reference.</td></tr>
<tr><td>getExecutionTree</td><td>Gets the execution tree in the form of the current node, a list of edges, and a list of nodes.</td></tr>
<tr><td>getPath</td><td>Gets the edges representing the path from the root node to the current node.</td></tr>
<tr><td>getTrace</td><td>Gets the edges representing the path between the nodes labeled by two given references.</td></tr>
<tr><td>getLeaves</td><td>Returns the references labeling the leaves of the execution tree.</td></tr>
<tr><td>meta</td><td>Executes a meta-command without affecting the execution tree.</td></tr>
</tbody>
</table>
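The full result types are part of the protocol specification in the supplementary material; as an illustration only, the result of getExecutionTree might plausibly be shaped as follows (field names are assumptions based on Table 1 and the edge description in this section):

```typescript
interface Edge {
  source: uinteger; // reference before execution
  target: uinteger; // reference after execution
  program: string;
  output: object;
}

interface ExecutionTreeResult {
  current: uinteger;  // reference labeling the current node
  nodes: uinteger[];  // all references used as labels
  edges: Edge[];
}
```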
```typescript
interface ExecuteRequest extends RequestMessage {
  method: "execute";
  params: ExecuteParams;
}

interface ExecuteParams {
  program: string;
}
```
Listing 1. Interface definitions for the `execute` action.
The JSON RPC 2.0 protocol defines a request object, a response object, and an error object, which are all encoded as JSON objects. A request object contains an identifier, a method name and the type capturing the parameter(s) of the method (if any). A response object contains an identifier for the request it responds to and either a result or an error. The result can be any encoded JSON object and the error object contains a unique error code, a short descriptive error message, and optional extra error data as an object.
The exploratory programming protocol is an interface between the front-end or GUI of a programming environment and an exploring interpreter serving as a back-end. The requests and response pairs of the protocol encode the actions of the exploring interpreter algorithm as JSON objects, of which we detail the `execute` specification in Listing 1. The `execute` action is encoded with a request with the method specified as "execute", and a parameter object containing a string representing the program to be executed.
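For concreteness, a request/response exchange for `execute` could look as follows on the wire (the program text and field values are invented for illustration; the envelope follows the JSON RPC 2.0 specification and the interfaces of Listings 1 and 2):

```
--> {"jsonrpc": "2.0", "id": 1, "method": "execute",
     "params": {"program": "10 + 20"}}
<-- {"jsonrpc": "2.0", "id": 1,
     "result": {"source": 0, "target": 1, "output": [30]}}
```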
As a response, the `execute` action can produce an error or a (normal) result, for which the interfaces are defined in Listing 2. The result contains the current reference from both before and after the execution, the output produced by the execution, and an optional object containing the result of post-processing the effects of the execution (discussed below). The references and the output are part of the edge added to the execution tree. The program component completing the edge is part of the request and is omitted from the response.
```typescript
interface ExecuteResponse extends ResponseMessage {
  result?: ExecuteResult;
  error?: ExecuteError;
}

interface ExecuteResult {
  source: uinteger; // reference before execution
  target: uinteger; // reference after execution
  output: object;
  post?: object;
}

interface ExecuteError extends ResponseError {
  code: DefaultErrorCodes | ProgramParseError;
}
```
Listing 2. Interface definitions for responses to `execute`.
Following the terminology of Section 2, the effect of a program is the set of changes it makes to a configuration and the output it produces when successfully executed. The source, target, and output fields of an `ExecuteResult` object contain all the information necessary to compute the effects of the executed program, using a `DerefRequest` to gain access to the relevant configurations. On top of this, the optional post field contains any data that the back-end wishes to send to the front-end in response to an execution request by doing additional post-processing on the execution result. This can be used to compute (a summary of) the effects of a program on behalf of the front-end, as it may be more convenient to compute this information in the back-end. Such post-processing is used, for example, in the back-end for eFLINT to determine any norm violations resulting from executing a program. Finally, an `execute` operation might fail, e.g., because the program cannot be parsed (ProgramParseError) or the request object is invalid (DefaultErrorCodes).
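A front-end that wants to compute effects itself can do so with two additional deref requests; a sketch, assuming a hypothetical promise-based client wrapper (execute, deref) and a user-supplied diff over configurations:

```typescript
type uinteger = number; // as in LSP-style specifications

// Derive an effect summary on the client by dereferencing the
// configurations before and after an execution.
async function executeWithEffects(
  client: {
    execute(program: string): Promise<{ source: uinteger; target: uinteger; output: object }>;
    deref(ref: uinteger): Promise<object>;
  },
  program: string,
  diff: (before: object, after: object) => object
) {
  const result = await client.execute(program);
  const before = await client.deref(result.source);
  const after = await client.deref(result.target);
  return { output: result.output, changes: diff(before, after) };
}
```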
5 Supporting Exploratory Programming
This section collects desirable features of programming environments for exploratory programming from the literature in the software engineering, human-computer interaction, and data science fields. The next section demonstrates how the EPP is used to implement some of these features.
Exploratory programming is a programming style in which users (programmers) experiment to discover, simultaneously, the desired end-result and the program producing that result. Exploratory programming thus requires the ability to develop a program incrementally and to get immediate feedback after every submitted program (fragment). Feedback is essential for users to evaluate the result of the programs they submit and, if a program does not produce the desired result, users should be able to discard it easily [51]. In particular, the user should receive enough information about the effects of the most recent execution to update their mental model of the run-time state, so that they can predict the effects of the next program they intend to submit (e.g., see the Scrubbing Calculator [63]). From this perspective, computational notebooks do not always offer sufficient feedback [10], and as stated by Don Norman, “poor feedback can be worse than no feedback” [41]. User affordances are needed to inform users about program state (the executed program fragments) and run-time state (the context in which the next execution will take place).
**Micro-Versioning.** Users should be able to work with different versions of both code and (intermediate) results [3, 14, 21, 65]. This allows users to better understand the design space and make better coding decisions. However, fine-grained (sub-file level) support for versioning, referred to as micro-versioning by Mikami et al. [36], is not common in present-day programming tools. From the interaction-design perspective, micro-versioning is challenging because users have to cognitively maintain multiple representations of code and the running program [14, 65]. In other words, users have to maintain multiple mental models of program and run-time states, and of which program resulted in which run-time state.
Software engineers use version control systems like Git or Subversion for versioning large-scale software projects. However, version control systems operate at the project and file level instead of the level of program fragments, making them insufficient for micro-versioning as described above [20].
**Experimentation and Modification.** Systems that are difficult to modify are referred to as systems of high viscosity and are not suited for exploratory programming [3, 13]. Exploratory programming requires quick and easy creation and modification of program fragments while editing or after execution, without noticeable overhead [7, 16, 31].
Notebooks typically allow users to modify and re-execute existing code cells, but do not keep track of previous versions of a code cell. This may leave a notebook in an inconsistent state, in the sense that the contents of the code cells no longer respect (data) dependencies or give a different result when executed from scratch [10]. Users can also decide to copy cell contents to a new code cell [29]. This strategy ensures that program state remains consistent with run-time state, but results in an ever-growing program that records earlier unsuccessful experiments. As a result, users tend to start from scratch to avoid the aforementioned situations. Code cloning is a common practice in software development [49] and it is even more common and noticeable in exploratory programming environments [54]. Trial-and-error causes users to copy-paste code snippets with modifications to explore alternatives [3, 16, 65]. Especially non-expert users copy-paste code snippets and start tweaking them to understand their semantics and to adapt them to achieve their goals [31]. Users require a mechanism to display, with a limited number of actions, the history of the commands they have executed in their sessions, and to interact with the alternatives they have created [16, 21, 33, 36, 65]. As discussed later, our experimental front-end supports backtracking with unlimited levels of undo/redo and explicit branching, as suggested by Hauswirth and Azadmanesh [15].
**Managing Alternatives.** The exploratory style of programming may result in a collection of alternative versions of a program, which can grow rapidly with little organization [10]. It may be hard for users to keep track of the alternatives and the intermediate results they achieved throughout their explorations. Users should be able to browse through alternative program states, using an efficient representation of the possibly large number of alternatives. It should be possible to compare alternatives, both with respect to their program state and (intermediate) run-time state(s), to select the best candidate for further exploration. The ability to search through alternatives aids selection even further [10].
The documentation cells of notebooks enable literate programming [27] for explaining design choices and functionality. In an exploratory setting it should also be possible to document the exploration process itself. This gives users insights into the thinking that went into different exploration attempts, thereby supporting users in understanding (intermediate) results and attempting further explorations.
Outcomes should be easy to share with other users when an exploration session concludes. The ability to reuse code is important in software engineering, and reproducibility is an important principle in (data) science. Jupyter notebooks are saved in a textual format that can be subjected to version control, shared easily with other users, and rendered as HTML for online publication. An exploratory programming environment that supports the exploration of alternative programs has additional requirements. In particular, it should be possible to share only part of the exploration, i.e., only those alternatives that produced desirable results. But since the exploration process itself may contain meaningful information, e.g., about documented implementation decisions and false starts, sharing multiple alternatives should be an option. Ideally, the user can choose freely which alternatives to make available for sharing, in a form that preserves
the features mentioned earlier: (micro-)versioning, feedback, browse, compare, search, and document(ation).
6 An Experimental Notebook Front-end
This section describes the features of a GUI designed to experiment with novel features for exploratory programming in a notebook-like environment. The front-end communicates with a back-end via the EPP. In particular, we use the front-end to demonstrate how certain features discussed in the previous section can be realized on top of the EPP (references to features appear in bold). The architecture on which we performed our experiments is described in the next section. A thorough design and evaluation of GUI components is part of future work. The features of the front-end are generic in the sense that they are not designed for a specific object language and behave similarly across languages. The artifact submitted alongside this paper includes the prototypes for the Idris and eFLINT languages using this front-end.
The main screen of the front-end, in this case for Idris, is shown in Figure 4. As the front-end is based on the EPP, it naturally supports backtracking and jumping to previous program states (history). Rather than visualizing the execution tree as in [33, 60], the front-end shows a single ‘execution trace’ corresponding to the path in the tree from the root node to the node representing the current program state (center component). For each edge in the trace, the executed program is shown together with its output\(^4\) (feedback) and a button to revert to the state prior to that execution. The user is able to switch between execution traces by selecting the ‘Switch trace’ button on the right-hand side, associated with ‘head nodes’ and ‘tagged nodes’ (browse). Head nodes correspond to the leaves of the underlying execution tree; tagged nodes are selected by the user, e.g., to record states that have achieved interesting intermediate results (document). The head and tagged nodes are shown on the right-hand side of the screen with their identifier (the reference labeling the node of the execution tree) and the program executed to produce that node. The left-hand side contains code cells, output cells and documentation cells, as is common in computational notebooks.
The front-end supports incremental program execution through the execution of code cells in two ways. Firstly, the notebook (left-hand side) component can be extended with new code cells for execution, and existing code cells can be modified and re-executed. The ‘Actions’ button attached to a code cell reveals, among other options, a ‘Revert’ button and an ‘Execute’ button. To support micro-versioning and to better keep track of execution history, code cells in the notebook are associated with one or more edges of the execution tree, always showing one pair of ‘Previous state’ and ‘Output state’, together with the output labeling the edge (feedback).
Under ‘Actions’, the user can switch between different executions of the same cell. Secondly, the code cells that make up the execution trace (center) can be modified and re-executed. The ‘Modify and re-execute’ button attached to these cells makes it possible to execute the modified program in the state prior to its original execution, using a jump, and to subsequently re-execute all code fragments whose effects were undone by the jump, resulting in a new branch in the execution tree. The new branch is represented by a new head node on the right-hand side of the interface and is shown as the current execution trace. These features make it possible to experiment by creating alternative explorations as (minor) modifications of existing explorations (micro-versioning). Note that a modification to a cell may be such that the subsequent code fragments are no longer type correct, in which case errors are produced as they normally would be.
The execution trace shows a program state that is consistent with the current run-time state. And as discussed, the notebook component is capable of keeping track of multiple versions of code cells. However, executing code cells from within the execution trace may cause inconsistency between the execution trace and the narrative of the notebook on the left-hand side. For this purpose, it is possible to ‘migrate’ the execution trace to the notebook by creating code cells with the contents of the programs in the trace, an operation available under ‘Actions’. In this case, the annotations added to code cells in the execution trace (maintained by the front-end) can be turned into documentation cells. Annotations are also used to add documentation to tagged nodes in order to document the exploration process (document).
The head and tagged nodes appearing on the right-hand side can be selected (checkbox) for comparison. Clicking the ‘Compare nodes’ button opens a pop-up such as the one shown in Figure 5. The view has tabs for comparing configurations (summarizing run-time state), traces, and the annotations attached to traces (compare). This way, traces, individual configurations, and annotations can be placed side-by-side for comparison. Structural diff algorithms could be applied to show the difference to the user, but this feature is not yet part of our experimental front-end.
A search field is also available to filter the contents of a trace/configuration (Figure 5). The generic implementation of this feature performs a textual search on the underlying HTML. More sophisticated language-specific search options could be realized as well. For example, a programmer might like to search for configurations that satisfy some property written as a Boolean expression in the object language. In the next section we discuss how the Idris version of this front-end features a specialized version of search with which a user can search for occurrences and declarations of a variable.
The front-end is able to export and import execution trees, using the ‘getExecutionTree’ method of the EPP. This way
---
\(^4\)No output is produced by the declarations in Figure 4.
exploration sessions can be shared with other users and subjected to version control (reuse and reproducibility). Individual traces can also be exported using ‘getTrace’, making it possible to share only certain desirable traces. Additional export functionality could be developed on top of these (and other) methods of the protocol, e.g., to export exactly those traces in the tree that end in tagged nodes.
7 Reusable Architecture Implementation
This section describes the design and implementation of an architecture that enables research into generic or language-specific (UI) features for exploratory programming. Based on the EPP, the architecture demonstrates that the EPP is language-parametric in that it can be used for object languages for which a definitional interpreter is available (implemented in the back-end host language). The architecture is visualized in Figure 6. Given a choice of host languages for the front- and back-end, some components of the architecture are reusable across prototypes, as indicated by the dashed components. We explain how we used the protocol to implement several prototype programming environments for different object languages, emphasizing the connection between reusable components and language- or UI-specific components. The architecture has been used to implement prototype notebooks and REPLs for Idris [5], MiniJava [1, 8], eFLINT [57] and Funcons-beta [58]. The Idris and Funcons-beta prototypes are especially interesting as they are built on top of existing interpreters developed without anticipating their usage with the EPP. Other prototypes, such as the QL prototype described in Section 1, preceded the work presented in this paper and inspired the formulation of the protocol as well as the design and implementation of the architecture. The implemented prototypes, as instances of the architecture, are discussed in the remainder of this section.
Figure 4. An experimental notebook interface with various generic exploratory programming features used with the Idris language. The dependently typed nature of Idris is shown by performing `sHead` on an empty vector, resulting in a type error.
Figure 5. Screenshot of the Idris prototype, showing a pop-up in which traces are compared.
7.1 Back-end
The back-end consists of a server parameterized by the following (object) language-specific components: a parser, a meta-handler, and a definitional interpreter. The definitional interpreter is used to instantiate a generic exploring interpreter, which maintains the execution tree. The server transforms a message from the protocol into operations on the three components. It then takes the result of these operations and transforms them into a message according to the protocol and sends it to the front-end. For example, execute requests are realized via the parser and the definitional interpreter by first parsing the input string with the given parser and then invoking the exploring interpreter.
Our prototypes are based on a reusable Haskell implementation of the server and exploring interpreter components. The latter is a modification of the implementation of [11] to account for the extensions in Section 3. The implementation of the execute method within the Haskell server is shown in Listing 3 (simplified for clarity). The request object is parsed as a JSON object. If the request is not correctly formatted, the InvalidParameters error is returned. Otherwise, the parser is applied to the program field of the request, returning an error (Left err) or a parsed program (Right prog). When parsing is successful, the exploring interpreter executes the program, resulting in an extended execution tree and optional output. The source and target labels (references) of the new edge are part of the result, together with the output and the result of any post-processing.
The parser is a language-specific component with the signature: \( \text{String} \rightarrow c \rightarrow \text{Either String p} \), where \( c \) represents the configurations of the language and \( p \) the programs. The parser yields either a program or an error, and it has access to the current configuration. Access to the current configuration can be useful for context-sensitive parsing, e.g., Idris allows dynamic extensions of syntax. When the parse is unsuccessful, the parser can provide an error message sent to the front-end as part of the error object.
Via the meta-handler, a back-end can deliver additional features. A meta-handler has the signature: \( \text{Value} \rightarrow \text{Explorer p m c o} \rightarrow \text{m Value} \). The handler receives a parameter of the request (a JSON value), the current exploring interpreter, and returns a JSON value. The meta-handler has access to the exploring interpreter to support reading the execution tree. In our Idris prototype, we use meta-commands to provide semantics-based search through the execution tree. This search finds all leaves in which an identifier occurs before searching for all nodes where the identifier was declared.
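On the protocol side, meta requests mirror the shape of execute requests; a plausible interface is sketched below (the shape of the params field is an assumption, since the payload is language-specific and left to the meta-handler):

```typescript
interface MetaRequest extends RequestMessage {
  method: "meta";
  params: object; // language-specific command, forwarded to the back-end's meta-handler
}
```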
The definitional interpreter implements the operational semantics of the language and has the following type signature: \( p \rightarrow c \rightarrow m \) (Maybe c, o), where \( p \) are the programs, \( c \) configurations, \( o \) is the monoidal output component, and \( m \) is an arbitrary monad in which the interpreter can execute. The signature is general in the sense that many languages can have an interpreter implemented according to the required
signature, or that an existing interpreter can be adapted to adhere to the signature. For example, the Idris prototype uses a definitional interpreter implemented as a wrapper around an existing interpreter for the language [6].
Reflections on Reuse. The prototypes we developed as instances of the architecture use Haskell implementations of parsers, definitional interpreters, and meta-handlers. Both the server and the exploring interpreter components are language-parametric, and therefore only needed to be implemented once. The server and exploring interpreter need to be re-implemented in a different host language to use the protocol and architecture for object languages implemented in that host language. However, this (hypothetical) novel back-end can be combined with existing front-ends, as it relies on the language-agnostic EPP for its communication.
7.2 Front-end
The front-end is divided into two parts: an interface that extends a reusable client and a bridge that connects the front-end to the back-end.
The client provides an API for the interface developer, abstracting over communication details. This is achieved by defining the client as an abstract class consisting of concrete methods for performing EPP requests and abstract methods for handling the EPP responses. The concrete methods are implemented once and for all within the client and are generic. These methods assign a unique identifier to every request and store the request so that it can later be matched with a response. After receiving a response from the client bridge, the client calls the corresponding request handler method. These handler methods are language-specific and must be implemented for every interface. For example, in a prototype for the eFLINT language, an execute action is performed as follows (see Listing 4). A button click triggers the execution of a code cell by calling the handler of the click event doExecute, which calls the method execute of the client by providing the ExecuteParams of the request. When the response arrives, the client calls the onExecute method with the original request and the response as arguments. The onExecute method first determines whether the request was successful. If it was, the method calls the showViolations method to display any violations discovered by the back-end’s post-processing to the user.
The client bridge is an adapter, translating messages from the protocol used between the client and the client bridge into messages of the EPP, and vice versa. This layer of indirection makes it possible to support a wide variety of front-end implementations separate from back-ends.
Reflections on Reuse. For our prototypes we have two implementations of the client component (hence the two arrows between client and client bridge in Figure 6): one implementation uses the WebSocket protocol as the communication format between the client and client bridge and the other uses a native UI, implemented using Python and the Tk interface. In the latter implementation, the client and client bridge are connected directly via function calls. Both front-end implementations are used to develop prototypes on the same Haskell back-end. In general, any implementation of the client (bridge) component can be used in combination with any implementation of the server component.
The interface component can be implemented with both features and widgets that are generic and specific to a certain object language. Features can be developed on top of the generic part of the protocol, e.g. executing code in code cells, displaying execution traces, jumping to previous runtime states, etc. Such features are reusable across languages, reducing the workload for language engineers and providing a common experience for programmers switching between languages. On the other hand, a more tailored experience can be offered to programmers with features which are designed specifically for a particular object language, e.g., using post-processing and meta-handlers. With our architecture we can combine generic and language-specific features and replace generic features with specialized variants when available.
One way specialization is achieved is by making the implementation of a feature parametric such that language-specific behavior can be provided as an argument. For example, a variable watcher [33] – showing the assignments to variables in the current run-time state – can be implemented such that a function is given as an argument that extracts variable assignments from a configuration. A different argument is used for different object languages as each language has its own notion of configuration and approach to keeping track of assignments. Other examples are output cells and visualizations of the execution history when they include information extracted from configurations.
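As a sketch of this parametric style (all names invented, not taken from the paper), a generic variable watcher could take the extraction function as its only language-specific argument:

```typescript
// A generic watcher: everything language-specific lives in `extract`.
function makeVariableWatcher<C>(
  extract: (config: C) => Map<string, string>
) {
  return (config: C): string[] =>
    [...extract(config).entries()].map(([name, value]) => `${name} = ${value}`);
}

// For a hypothetical language whose configurations carry a `store` field:
const watchStore = makeVariableWatcher(
  (c: { store: Record<string, number> }) =>
    new Map(Object.entries(c.store).map(([k, v]) => [k, String(v)]))
);
```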
Another approach to specialization is overriding or extending a generic implementation of a feature. The default, generic implementation of the search functionality of our experimental front-end is realized in a text-based fashion by searching the DOM-rendering of the trace. In the Idris prototype, this generic search is extended with the specialized, semantics-based search over the execution tree implemented through meta-commands, as described above.
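The code of Listing 4 did not survive extraction; the following sketch reconstructs it from the description in this section (the class shape, the CodeCell type, and the violations field of the post object are assumptions):

```typescript
// Assumed minimal type; the real one is part of the front-end implementation.
interface CodeCell { contents: string; }

class EflintInterface extends Client {
  // Click handler: triggers execution of a code cell via the client.
  doExecute(cell: CodeCell): void {
    this.execute({ program: cell.contents });
  }

  // Called by the client when the response to an execute request arrives.
  onExecute(request: ExecuteRequest, response: ExecuteResponse): void {
    if (response.error !== undefined || response.result === undefined) {
      return; // the request failed; error reporting omitted in this sketch
    }
    // Violations are discovered by the back-end's post-processing step.
    const post = response.result.post as { violations?: string[] } | undefined;
    if (post?.violations?.length) {
      this.showViolations(post.violations);
    }
  }

  private showViolations(violations: string[]): void {
    /* render the violations in the notebook UI */
  }
}
```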
Listing 4. TypeScript code that shows part of an interface implementation for the eFLINT language.
8 Discussion
From the previous sections we conclude that the protocol offers benefits to the software language engineering process of building programming environments that support exploratory programming. We can use the protocol and architecture to experiment with the design of exploratory programming features with relative ease by reusing components. In Section 6 we provided evidence to the claim that the EPP supports interesting exploratory programming scenarios by relating features of a prototype to scenarios discussed in the literature. The thorough design and evaluation of GUI elements and features for exploratory programming is part of future work.
Applicability. The main limitation of our approach is that we rely on the availability of definitional interpreters and parsers for the object languages with which we wish to experiment. However, when a definitional interpreter is available, existing (generic) interfaces are immediately applicable to the new object language. Language-specific GUI elements, meta-handler functionality or post-processing steps can then be added on a by-need basis. The Idris and Funcons-beta examples show that existing interpreters can be reused, even when they have been developed without anticipating the EPP. To embed the Idris interpreter of [6] into our architecture, only four lines of Haskell code were needed to map errors to output. Adapting the interpreter of Funcons-beta [59] required around 50 lines of Haskell to extend the language to a sequential variant, following the methodology proposed in [60], and to propagate bindings between funcon terms.
As indicated in [60], the class of languages to which the approach can be applied contains all languages whose semantics can be expressed as a (possibly partial) transition function. This class contains real-world, large-scale, deterministic programming languages, as is demonstrated by the body of literature on big-step, small-step and natural semantics [2, 19, 37, 39, 44], and does not necessarily exclude languages with non-deterministic aspects when these aspects can be captured algebraically [64]. However, implementing such a transition function as an efficient, production-ready interpreter maintaining explicit representations of configurations is another matter. Comparing alternative development strategies for such interpreters and demonstrating the practicality of the EPP in real-world environments is left as future work. The goal of this work has been to create an environment for experimenting with user interfaces and functionality for exploratory programming, without having run-time or space requirements as a limiting factor on the design space being explored. We intend to capitalize on this in future collaborations in the spirit of ‘PL and HCI: better together’ [9].
Object Languages. Our approach is particularly useful in the context of DSLs, since DSL engineers make different trade-offs regarding performance: fast prototyping and design iteration with stakeholders is often more valuable than raw speed. In that sense, this work adds exploratory tooling ‘for free’ to the language workbench’s toolbox. Several of the prototypes we developed are indeed for DSLs (eFLINT and QL). The design of object languages plays an essential role in the support for exploratory programming. DSLs capture abstractions tailored to specific domains, and programming environments can offer such high-level abstractions (and others) as widgets [3, 55], which can yield better and more powerful explorations [52]. The rendering of the QL form (Figure 1) is an example of such a language-specific widget, with modifications being reified as code [24, 66].
The exploratory programming protocol makes no assumptions about whether the object language is statically or dynamically typed. Exploration in statically typed languages is interesting because type-checking is expected to be performed on individual program fragments. This means that typing information needs to propagate between the (dynamic) execution of fragments and that, in some sense, the distinction between static analyses and dynamic evaluation becomes blurred. Figure 4 shows how a type error produced by the interpreter for Idris – a dependently typed language – is presented as output. Similar to offering flexibility in typing, exploratory programming can also significantly benefit from being able to submit partial programs with holes and receiving feedback on these holes [12, 42].
Debugging programs is another important aspect of exploratory programming [7, 28]. Traditional debugging requires users to switch between a source code view and a debugging view, hindering users from having a clear picture of the run-time state and preventing them from seamlessly continuing to experiment [17]. Live programming helps users understand and comprehend their programs by giving immediate feedback about the program state after a change to the source code [45]. This is, for instance, demonstrated in our QL prototype. In another experiment, we treat stepwise debugging as a matter of language design. The methodology of [60] can be extended to incorporate stepwise debugging features, assuming a ‘stepwise interpreter’ is available. This is achieved by adding elementary debugging constructs such as debug(e), for some expression e, step, and continue as phrases to the object language. The intermediate results of steps are recorded in the execution tree, enabling jumping to an earlier point in a debugging session, reminiscent of omniscient debugging [4, 32]. The QL and Funcons-beta prototypes support stepwise debugging in this way.
Some computational notebooks enable polyglot programming in which multiple object languages are used simultaneously [40, 43, 53]. Being able to apply multiple languages within the same exploration session is considered desirable [7]. We are performing experiments to determine how to enable polyglot programming within our approach.
9 Related Work
Exploratory programming is becoming increasingly important, especially as the number of end-users is outgrowing the number of professional programmers [48]. There is a need for better tools and languages aimed at end-users. In this direction, computational notebooks (e.g., Jupyter, ObservableHQ, Apache Zeppelin, and Google Colaboratory) have become an interesting and popular solution used by end-users when they need to work with code, prose, and interactive results. However, as found in the literature, these programming environments have some limitations, especially for common exploratory programming tasks [10, 16, 20, 22, 23, 46, 50].
Everyday exploratory programming tasks require users to deal with different explorations [3, 14, 16]. For instance, Juxtapose [14] is a tool for managing different alternatives across source code and execution environments. It allows users to execute alternatives in parallel and display their results in the same window. This is a crucial task for exploratory programming that is not supported by popular notebooks. Juxtapose also generates a control interface that allows users to manipulate application parameters through sliders. Exploratory programming activities require support for versioning [23]. However, given that exploratory programming often relies on incremental program development, traditional software versioning systems are too complex and challenging to use for end-users. Therefore, micro-versioning [36] is an interesting approach for versioning partial programs and results in exploratory programming environments.
Another critical aspect is the diversity of tools for exploratory programming. As the number of users increases, the number of exploratory programming environments is also increasing [30, 62]. This offers benefits to users; however, language developers need to offer support for different platforms, which is a cumbersome and expensive task. To address this problem, some protocols (e.g., the Language Server Protocol [35] and Debug Adapter Protocol [34]) have been defined to standardize communication between tools and languages so that language and tool developers can reuse a single implementation across platforms. For instance, LSP enables the communication between code editors and languages to offer different IDE services (e.g., auto-completion, go to definition, etc.). However, LSP does not formalize an API to manage the execution of programs. In Section 4, we present a first approach towards defining a language-independent protocol that considers the execution step, primarily to support exploratory programming scenarios.
10 Conclusion
We have presented a generic protocol and a reusable architecture for programming environments supporting exploratory programming, a style of programming characterized by prototyping, versioning, and various forms of experimentation. The protocol is generic in that it can be used for a large class of object languages and can be implemented in various host languages. The architecture enables us to experiment with novel features for exploratory programming in environments such as computational notebooks and REPLs. The prototypes developed in our experiments demonstrate that the protocol can be used to deliver features for many exploratory programming scenarios discussed in the literature. The next step in our research is to design, implement, and evaluate generic and language-specific user-interface components for exploratory programming, by taking advantage of the protocol and architecture presented in this paper.
Acknowledgments
This work has been partially supported by the Kansen Voor West EFRO project (KVW00309) AMdEX Fieldlab and the province of Noord-Holland, and has been executed as part of the Agile Language Engineering collaboration (http://gemoc.org/ale/, online, accessed 16 July 2021).
Embedded Systems Development:
From Functional Models to Implementations
Alberto Sangiovanni-Vincentelli, Haibo Zeng, Marco Di Natale
and Peter Marwedel
Alberto Sangiovanni-Vincentelli
UC Berkeley, Berkeley, USA, e-mail: alberto@eecs.berkeley.edu
Haibo Zeng
McGill University, Montreal, Canada, e-mail: haibo.zeng@mcgill.ca
Marco Di Natale
Scuola Superiore Sant’Anna, Pisa, Italy, e-mail: marco.dinatale@sssup.it
Peter Marwedel
Technische Universität Dortmund, Dortmund, Germany, e-mail: peter.marwedel@tu-dortmund.de
Contents
Embedded Systems Development: From Functional Models to Implementations
Alberto Sangiovanni-Vincentelli, Haibo Zeng, Marco Di Natale and Peter Marwedel

Preface

1 Introduction: Modeling, Analysis and Synthesis of Embedded Software and Systems
Alberto Sangiovanni-Vincentelli, Haibo Zeng, Marco Di Natale and Peter Marwedel
1.1 Recommended Reading
1.2 Model-Based Design and Synthesis
1.3 Model-Driven Design, Integration and Verification of Heterogeneous Models
1.4 Component-Based Design and Real-Time Components
1.5 Timing Analysis and Time-Based Synthesis
Part I Model-Based Design and Synthesis
2 Modeling, Analysis, and Implementation of Streaming Applications for Hardware Targets
Kaushik Ravindran, Arkadeb Ghosal, Rhishikesh Limaye, Douglas Kim, Hugo Andrade, Jeff Correll, Jacob Kornerup, Ian Wong, Gerald Wang, Guang Yang, Amal Ekbal, Mike Trimborn, Ankita Prasad, Trung N Tran
2.1 Introduction
2.2 Related Work
2.3 DSP Design Module: Models and Analysis
2.3.1 Static DataFlow
2.3.2 SDF: Properties and Analysis
2.3.3 Extensions for Cyclo-Static Data Rates and Parameterization
2.4 DSP Design Module: Implementation Flow
2.4.1 Design Environment
2.4.2 Implementation Strategy
2.4.3 Glue Design and IP Integration
2.4.4 I/O Integration
2.5 OFDM Transmitter & Receiver Case Study
2.5.1 Transmitter and Receiver Overview
2.5.2 Hardware Implementation
2.5.3 Design Exploration
2.5.4 Extensions
2.6 Summary
3 Dataflow-based, Cross-platform Design Flow for DSP Applications
Zheng Zhou, Chung-Ching Shen, William Plishker, and Shuvra S. Bhattacharyya
3.1 Introduction
3.2 Background
3.2.1 CFDF Dataflow Model
3.2.2 Lightweight Dataflow
3.2.3 The Targeted DIF Framework
3.3 From Simulation to Implementation
3.3.1 Step 1: System Formulation
3.3.2 Step 2: System Validation and Profiling
3.3.3 Step 3: System Optimization
3.3.4 Step 4: System Verification and Instrumentation
3.3.5 Determining Buffer Sizes
3.3.6 Discussion
3.4 Case Study 1 - CPU/GPU
3.4.1 Simulation
3.4.2 Implementation
3.5 Case Study 2 - Multicore PDSP
3.5.1 Simulation
3.5.2 Implementation
3.6 Summary
Part II Model-Driven Design, Integration and Verification of Heterogeneous Models
4 Model-Driven Design of Software Defined Radio Applications Based on UML
Jair Gonzalez, Renaud Pacalet
4.1 Introduction
4.2 Related Work
4.3 Proposed Design Methodology
4.4 DiplodocusDF
4.4.1 SDR Waveform Notations
4.4.2 Target Architecture Notations and Mapping
4.4.3 Performance Requirements Notations
4.5 Code Generation
4.5.1 Model Extension Constructs
4.5.2 DiplodocusDF Translation Semantics
4.6 Runtime Environment
4.7 DiplodocusDF Example: Welch Periodogram Detector
4.8 Conclusions

5 On Integrating EAST-ADL and UPPAAL for Embedded System Architecture Verification
5.1 Introduction
5.2 Related Work
5.3 EAST-ADL and Timing Extension
5.3.1 EAST-ADL Core and Behavior Model
5.3.2 Function Behavior Semantics
5.3.3 Timing Model
5.4 Timed Automata and UPPAAL
5.5 EAST-ADL and Timed Automata Relationship
5.5.1 Mapping Scheme
5.5.2 Usage and Automation Considerations
5.5.3 System Verification
5.6 Brake-by-Wire Case Study
5.7 Discussion

6 Schedulability Analysis at Early Design Stages with MARTE
6.1 Introduction
6.2 Overview of the Optimum Process
6.3 The Schedulability Model
6.4 Detailed Optimum Methodology
6.4.1 Modeling Language Description
6.4.2 The Optimum Models
6.4.3 Conformance to the Formal Schedulability Model
6.4.4 Software Architecture Exploration Phase
6.5 Application on an Automotive Case Study
6.5.1 Workload Model
6.5.2 Generation of the Architecture Model
6.5.3 Schedulability Analysis Results
6.6 Related Works
6.7 Conclusions and Future Work
Part III Component-Based Design and Real-Time Components
7 Early Time-Budgeting for Component-Based Embedded Control Systems
Manoj G. Dixit, S. Ramesh and Pallab Dasgupta
7.1 Introduction
7.2 Time-Budgeting Methodology
7.2.1 Formalization of Component Time-Budgeting
7.2.2 Component Time-Budget Computation
7.3 RDP Constraint Computation Methods
7.3.1 Emptiness and Universality Check Method
7.3.2 Bounded Response Constraint Extraction Method
7.3.3 Corner Point Constraint Extraction Method
7.4 Case Studies
7.5 Conclusion and Future Work

8 Contract-Based Reasoning for Component Systems with Rich Interactions
Susanne Graf, Roberto Passerone and Sophie Quinton
8.1 Introduction
8.2 Design Methodology
8.2.1 Contract Framework
8.2.2 Reasoning within a Contract Framework
8.3 Circular Reasoning in Practice
8.3.1 The L0 Framework
8.3.2 The L1 Framework
8.3.3 Relaxed Circular Reasoning
8.4 Conclusion and Future Work

9 Extracting End-to-end Timing Models from Component-Based Distributed Embedded Systems
Saad Mubeen, Jukka Mäki-Turja and Mikael Sjödin
9.1 Introduction
9.2 Background and Research Problem
9.2.1 The Rubus Concept
9.2.2 Problem Statement: Linking of Distributed Chains
9.3 End-to-end Timing Model
9.3.1 System Timing Model
9.3.2 System Linking Model
9.4 Extraction of End-to-end Timing Model
9.4.1 Proposed Solution
9.4.2 Extraction of End-to-end Timing Model in Rubus-ICE
9.5 Related Work
9.6 Conclusion
Part IV Timing Analysis and Time-Based Synthesis
10 Distributed Priority Assignment in Real-Time Systems .......................... 153
Moritz Neukirchner, Steffen Stein, Rolf Ernst
10.1 Introduction ................................................. 153
10.2 Related Work ............................................... 154
10.3 System Model & Admission Control concept ................................. 155
10.4 Self-Configuration Strategy ..................................... 156
10.5 The Local Improvement Target .................................... 158
10.6 Distributed Self-Configuration Algorithm ............................... 159
10.7 Evaluation .................................................... 162
10.7.1 Number of Feasible Priority Assignments ...................... 163
10.7.2 Runtime .................................................. 164
10.8 Conclusion .................................................... 165
11 Exploration of Distributed Automotive Systems using Compositional Timing Analysis .......................... 167
Martin Lukasiewycz, Michael Glaß, Jürgen Teich, and Samarjit Chakraborty
11.1 Introduction .................................................. 167
11.2 Design Space Exploration Model ..................................... 168
11.2.1 Model Description ......................................... 168
11.2.2 Binary Encoding ......................................... 171
11.3 Compositional Timing Analysis ....................................... 173
11.3.1 Timing Model .............................................. 173
11.3.2 Dependency-based Fixed-Point Iteration ...................... 175
11.3.3 Fine-grained Fixed-Point Iteration ........................ 177
11.4 Experimental Results ............................................ 178
11.4.1 Automotive Case Study .................................... 178
11.4.2 Design Space Exploration Results ............................ 179
11.4.3 Timing Analysis Results .................................. 180
11.5 Concluding Remarks ............................................. 181
12 Design and Evaluation of Future Ethernet AVB-based ECU Networks ........................................ 183
Michael Glaß, Sebastian Graf, Felix Reimann, and Jürgen Teich
12.1 Future Communication Media for ECU Networks ......................... 183
12.2 Related Work .................................................. 184
12.3 Fundamentals .................................................. 186
12.4 VPC Model ..................................................... 190
12.4.1 AVB Scheduling ........................................... 191
12.4.2 Overall Ethernet AVB Model ................................ 193
12.5 Case Study ..................................................... 194
12.6 Conclusion ..................................................... 197
List of Figures
References ................................................................. 203
Index ................................................................. 219
This book is an edited collection of contributions in selected topics related to embedded systems modeling, analysis and synthesis. Most contributions are extended versions of papers that originally appeared at several workshops organized in the context of the Embedded Systems Week and the Real-Time Systems Symposium in the last months of 2011. The workshops targeted topics and challenges related to the use of models for the design, analysis, and synthesis of embedded systems. Problems and solutions were discussed for different stages in the development process, applying to the system-level view as well as to the design, analysis and synthesis of components and subsystems and the behaviors therein. These workshops were WSS, the Workshop on Software Synthesis; TiMoBD, Time Analysis and Model-Based Design; and SOMRES, the Workshop on Synthesis and Optimization Methods for Real-Time Embedded Systems.
As workshop organizers and editors of this book, we believe that these are very special times for embedded and cyber-physical systems researchers and developers. It is a time of opportunity, given the emergence of feature-rich, complex, distributed systems and the need to tame their complexity in new ways, including the adoption of model-based development, new analysis methods and design synthesis techniques, and true component-based development, in which functional and platform assemblies are correct by construction.
This book collects contributions on different topics, including system and software models, innovative architectures (including OS and resource managers), formal methods, model checking and analysis techniques, software synthesis, system optimization and real-time networks, with the ambitious objective of providing useful insights and innovative ideas on how to solve very complex problems throughout the entire (model-based) development cycle. Contrary to other books on the subject, we attempt to reconcile the two communities of Model-Based and Model-Driven Design, which often operate in independent ways, with only a few fortunate exceptions.
Regardless of the workshop organization, the selected papers have been organized according to their topics and divided into chapters that fit the stages in the development process, rather than an abstract classification based, for example, on languages, algorithmic solutions or methods. The intended audience includes, of course, the general community of embedded systems researchers, but we believe several topics and contributions should also be of interest to developers, tool vendors and development process experts. Several contributions are provided by industry developers and researchers and refer to upcoming commercial products, methods and tools. The applicability of most other results is demonstrated by use cases and/or project experiences.
We would like to thank all authors; the workshop audiences, who provided constructive feedback and interesting discussions that eventually found their way into improved and new content; and the assistant editors at Springer.
Berkeley, Montreal, Pisa, Dortmund, March 2013
Alberto Sangiovanni-Vincentelli,
Haibo Zeng,
Marco Di Natale,
Peter Marwedel
Chapter 8
Contract-Based Reasoning for Component Systems with Rich Interactions
Susanne Graf, Roberto Passerone and Sophie Quinton
Abstract In this paper we propose a rule unifying circular and non-circular assume-guarantee reasoning and show its interest for contract-based design and verification. Our work was motivated by the need to combine, in the top-down methodology of the FP7 SPEEDS project, partial tool chains for two component frameworks derived from the HRC model and using different refinement relations. While the L0 framework is based on a simple trace-based representation of behaviors and uses set operations for defining refinement, the more elaborate L1 framework offers the possibility to build systems of components with complex interactions. Our approach in L1 is based on circular reasoning and results in a method for checking contract dominance which does not require the explicit composition of contracts. In order to formally relate results obtained in L0 and L1, we provide a definition of the minimal concepts required by a consistent contract theory and propose abstract definitions which smoothly encompass hierarchical components. Finally, using our relaxed rule for circular reasoning, we show how to use the L0 and L1 refinement relations together and, as a result, their respective tool chains.
Susanne Graf
VERIMAG/CNRS, 2 avenue de Vignate, 38610 Gières, France.
e-mail: susanne.graf@imag.fr
Roberto Passerone
DISI/University of Trento, via Sommarive 5, 38123 Trento, Italy.
e-mail: roberto.passerone@unitn.it
Sophie Quinton
IDA/TU Braunschweig, Hans-Sommer-Straße 66, 38106 Braunschweig, Germany.
e-mail: quinton@ida.ing.tu-bs.de
### 8.1 Introduction
Contract and interface frameworks are emerging as the formalism of choice for system designs that require large and distributed teams, or where the supply chain is complex [242, 64, 65]. This style of specification is typically employed for top-down design of systems of components, where the system under design is built by a sequence of decomposition and verification steps. In this paper we present and study some distinctive features of contract theories for frameworks in which the interaction between components is “rich”, i.e., more complex than the usual input/output (I/O) communication. One such component framework is BIP [26] which allows multi-party synchronizations scheduled according to priorities. In addition, we show how to combine results obtained using different contract refinement relations.
Our work has its practical motivation in the component framework HRC [29, 32, 64, 65] (standing for Heterogeneous Rich Components) defined in the FP7 IP project SPEEDS [255], which has been reused in the FP7 STREP project COMBEST [60] and the ARTEMIS project CESAR [51]. The HRC model defines component properties in terms of extended transition systems and provides several composition models, ranging from low-level semantic composition to composition frameworks underlying the design tools used by system designers. More precisely, HRC is organized around two abstraction levels called L0 and L1 and describing respectively the core level and the analysis tool level of HRC [210]. That is, L0 determines the expressive power of the entire model and there exist translations from L1 models to L0. On the other hand, L1 extends the core model with concepts such as coordination mechanisms — the rich interactions mentioned in the title. Analysis tools can then take advantage of these additional concepts to make system descriptions more concise and therefore verification more efficient.
Our objective is to allow the combined use of synchronous tools like Simulink [269] for L0 and synchronization-based tools like BIP for L1, which have complementary strengths. For example, Simulink is very convenient for modeling physical dynamic systems or streaming applications. In contrast, BIP, which encompasses rich interactions, is well adapted for describing the dynamic behavior of sets of components depending on available resources for memory, energy, communication bandwidth, etc. In this paper we are interested in the relation between the L0 and L1 contract frameworks, as we want to use verification results established in L1 for further reasoning within L0. The presence of rich interactions in L1 makes contract composition problematic and leads us to focus instead on circular reasoning, which allows a component and its environment to be refined concurrently (each one relying on the abstract description of its context) and entails an interesting rule for proving dominance, i.e., refinement between contracts. In order to relate L0 and L1, we define a generic contract framework that uses abstract composition operators and thus encompasses a variety of interaction models, including those for L0 and L1. Finally, we show how to use a relaxed rule for circular reasoning to combine partial tool chains for both frameworks into a complete tool chain for our methodology.
To the best of our knowledge, this is the first time that a rule combining different refinement relations is proposed and used to unify two contract frameworks.
While circular reasoning has been extensively studied, e.g. in [6, 173], existing work focuses on finding sufficient conditions for the soundness of circular reasoning, while we focus on how to use circular reasoning in a contract-based methodology. Non-circular assume-guarantee reasoning is also a topic of intense research, focused on finding a decomposition of the system that satisfies the strong condition imposed on at least one of its components [59]. Finally, our contract frameworks are related to interface automata [2]. Since de Alfaro and Henzinger’s seminal paper, many contract and interface theories have been developed for numerous frameworks (see e.g. [153, 282, 71, 229, 231, 230] to name just a few). However, these theories focus on the composition of contracts, which we strive to avoid, and furthermore they do not handle rich interactions. Examples include [154, 230], based on modal I/O automata, and [282], defining relational interfaces for capturing functional dependencies between inputs and outputs of an interface. Preliminary versions of our contract framework appeared in [224, 109] but did not address the question of combining results obtained for different refinements.
This paper is structured as follows: Section 8.2 describes our design and verification methodology as well as generic definitions of component and contract frameworks. It then discusses sufficient reasoning rules for establishing dominance without composing contracts. Section 8.3 shows how the proposed approach is applied to the L0 and L1 frameworks. In particular, it shows how their different satisfaction relations may be used together using relaxed circular reasoning, and discusses practical consequences of this result. Section 8.4 concludes the paper. The proofs of all theorems presented in this paper can be found in [102].
### 8.2 Design Methodology
Our methodology is based on an abstract notion of component. We characterize a component \( K \) by its interface, defined as a set \( \mathcal{P} \) of ports which describe what can be observed by its environment. We assume a global set of ports \( \text{Ports} \), of which all sets of ports in the following are subsets. In addition, components are also characterized by their behavior. At this level of abstraction, we are not concerned with how behaviors are represented and develop our methodology independently of the particular formalism employed. Interactions (potentially complex) between components are expressed using the concept of glue operator [254]. A glue defines how the ports of different components are connected and the kind of synchronization and data exchange that may take place. We denote the composition of two components \( K_1 \) and \( K_2 \) through a glue \( gl \) as \( gl\{K_1, K_2\} \). The glue must be defined on the union of the ports \( \mathcal{P}_1 \) and \( \mathcal{P}_2 \) of the components.
In order to separate the implementation phase of a component from its integration into the system under design, we use contracts [32, 30, 224]. A contract for a component \( K \) describes the interface \( \mathcal{P} \) of \( K \), the interaction between \( K \) and its environment \( E \), the expected behavior of \( E \), called the assumption \( A \) of the contract, and the expected behavior of \( K \), called the guarantee \( G \). Assumptions and guarantees
are in turn expressed as components, defining the interface and the behavior that are considered acceptable from the environment and from the component. Thus, formally, a contract $C$ for an interface $P$ is a triple $(A, gl, G)$, where $gl$ is a glue operator on $P \cup P_A$ for some $P_A$ disjoint from $P$; the assumption $A$ is a component with interface $P_A$; and the guarantee $G$ is a component with interface $P$. Note that the interface of the environment is implicitly defined by $gl$. Graphically, we represent contracts as in Figure 8.1.
Fig. 8.1: Graphical representation of a contract $\mathcal{C} = (A, gl, G)$.

Fig. 8.2: The proof steps (conformance, dominance and satisfaction) over a decomposition into components $K_1$, $K_2$, $K_3$ satisfying contracts $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$, with the closed system $gl\{A, gl_I\{K_1, K_2, K_3\}\} \preceq \varphi$.
From a macroscopic point of view, we adopt a top-down design and verification methodology (see Figure 8.2) in which global requirements are pushed progressively from the top-level system to the low-level atomic components. As usual, this is just a convenient representation; in real life, the final picture is always obtained in several iterations alternatively going up and down the hierarchy [213]. While the refinement relation between a specification and an implementation is at the core of component-based design, in contract-based design refinement takes different forms depending on whether it relates a system to a specification, two contracts or an implementation to a contract. In this paper we use a methodology which divides the design and verification process into three steps corresponding to these three forms of refinement.
We assume that the system $K$ under construction has to realize a global requirement $\varphi$ together with an environment on which we may have some knowledge, expressed by a property $A$. Both $\varphi$ and $A$ are expressed w.r.t. the interface $\mathcal{P}$ of $K$. We proceed as follows: (1) define a contract $\mathcal{C} = (A, gl, G)$ for $\mathcal{P}$ such that $gl\{A, G\}$ conforms to $\varphi$; (2) decompose $K$ into subcomponents $K_i$ connected through a glue operator $gl_I$ and provide a contract $\mathcal{C}_i$ for each of them, possibly iterating this step if needed; (3) prove that whenever a set of implementations $K_i$ satisfies their contracts $\mathcal{C}_i$, their composition satisfies the top-level contract $\mathcal{C}$ (dominance), and thus guarantees $\varphi$; (4) provide such implementations.
The correctness proof for a particular system is therefore split into three phases: conformance (denoted $\preceq$) of the system defined by the top-level contract $\mathcal{C}$ to $\varphi$; dominance of $\mathcal{C}$ by the composition of the set of contracts $\{\mathcal{C}_i\}$ through $gl_I$; and satisfaction (denoted $\models$) of each $\mathcal{C}_i$ by the corresponding implementation $K_i$. Thus, conformance relates closed systems, dominance relates contracts, while satisfaction relates components to contracts.
The assumption of $\mathcal{C}_1$ is represented as one component $A_1$ while in the actual system $K_1$ will be used in the context of three components, namely $K_2$, $K_3$ and $A$. Thus, we need to relate the actual glues $gl$ and $gl_I$ to the glue $gl_1$ of $\mathcal{C}_1$. In other words, we need a glue $gl_{E_1}$ to compose $K_2$, $K_3$ and $A$ as well as an operation $\circ$ on glues such that $gl \circ gl_I = gl_1 \circ gl_{E_1}$. In most cases, $\circ$ cannot simply be composition of functions and has to involve some flattening of the system.
### 8.2.1 Contract Framework
To summarize, we consider a component framework that smoothly supports complex composition operators and hierarchical components. The elements of the component framework are as follows:
**Definition 1 (Component framework).** A component framework is defined by a tuple $(\mathcal{K}, GL, \circ, \cong)$ where:
- $\mathcal{K}$ is a set of components. Each component $K \in \mathcal{K}$ has as interface a set of ports, denoted $\mathcal{P}_K$, which is a subset of our global set of ports $\text{Ports}$.
- $GL$ is a set of glues. A glue is a partial function $2^{\mathcal{K}} \rightarrow \mathcal{K}$ transforming a set of components into a new composite component. Each $gl \in GL$ is defined on a set of ports $S_{gl}$, called its support set, and defines a new interface $\mathcal{P}_{gl}$ for the new component, called its exported interface. $K = gl(\{K_1, \ldots, K_n\})$ is defined if $K_1, \ldots, K_n \in \mathcal{K}$ have disjoint interfaces and $S_{gl} = \bigcup_{i=1}^{n} \mathcal{P}_{K_i}$; then $\mathcal{P}_K = \mathcal{P}_{gl}$.
- $\circ$ is a partial operator on $GL$, called flattening, to compose glues. $gl \circ gl'$ is defined if $\mathcal{P}_{gl'} \subseteq S_{gl}$. Its support set is $(S_{gl} \setminus \mathcal{P}_{gl'}) \cup S_{gl'}$ and its interface is $\mathcal{P}_{gl}$.
- $\cong\ \subseteq \mathcal{K} \times \mathcal{K}$ is an equivalence relation between components.

We simplify our notation by writing $gl\{K_1, \ldots, K_n\}$ instead of $gl(\{K_1, \ldots, K_n\})$. The equivalence relation $\cong$ is typically used for relating composite components with their semantics given as an atomic component. More importantly, $\circ$ must be coherent with $\cong$ in the sense that $gl(\{gl'\{\mathcal{K}_1\}\} \cup \mathcal{K}_2) \cong (gl \circ gl')\{\mathcal{K}_1 \cup \mathcal{K}_2\}$ for any sets of components $\mathcal{K}_1, \mathcal{K}_2$ such that all terms are defined.
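As a small worked instance of this bookkeeping (the components and glues are hypothetical, chosen only for illustration): suppose $gl'$ composes $K_1$ and $K_2$, so $S_{gl'} = \mathcal{P}_{K_1} \cup \mathcal{P}_{K_2}$, with exported interface $\mathcal{P}_{gl'}$, and $gl$ composes the result with $K_3$, so $S_{gl} = \mathcal{P}_{gl'} \cup \mathcal{P}_{K_3}$. Then $gl \circ gl'$ is defined since $\mathcal{P}_{gl'} \subseteq S_{gl}$, and

$$S_{gl \circ gl'} = (S_{gl} \setminus \mathcal{P}_{gl'}) \cup S_{gl'} = \mathcal{P}_{K_1} \cup \mathcal{P}_{K_2} \cup \mathcal{P}_{K_3}, \qquad \mathcal{P}_{gl \circ gl'} = \mathcal{P}_{gl},$$

so that $(gl \circ gl')\{K_1, K_2, K_3\} \cong gl\{gl'\{K_1, K_2\}, K_3\}$, exactly the coherence requirement above.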
After formalizing generic properties required from a component framework, we now define the relations used in the methodology for dealing with contracts. Satisfaction is usually considered as a derived relation and chosen as the weakest relation implying conformance and preserved by composition. We loosen the coupling between satisfaction and conformance to obtain later stronger reasoning schemata for dominance. Furthermore, we propose a representation of satisfaction as a set of refinement under context relations denoted \(\sqsubseteq_{A,gl}\) and such that \(K \sqsubseteq_{A,gl} G\) iff \(K \models (A, gl, G)\).
**Definition 2 (Contract framework).** A contract framework is defined by a tuple $(\mathcal{K}, GL, \circ, \cong, \preceq, \models)$ where:
1. $(\mathcal{K}, GL, \circ, \cong)$ is a component framework.
2. $\preceq\ \subseteq \mathcal{K} \times \mathcal{K}$ is a preorder called conformance, relating components having the same interface.
3. $\models$ is a relation called satisfaction between components and contracts such that: the relations $\sqsubseteq_{A,gl}$ defined by $K \sqsubseteq_{A,gl} G$ iff $K \models (A, gl, G)$ are preorders; and, if $K \models (A, gl, G)$ then $gl\{A, K\} \preceq gl\{A, G\}$.
Our definition of satisfaction emphasizes the fact that $\models$ can be seen as a set of refinement relations, where $K \sqsubseteq_{A,gl} G$ means that $K$ refines $G$ in the context of $A$ and $gl$. The condition which relates satisfaction and conformance ensures that the actual system $gl\{A, K\}$ will conform to the global requirement $\varphi$ discussed in the methodology, because $\preceq$ is transitive and $gl\{A, G\} \preceq \varphi$.
**Example 1.** Typical notions of conformance for labeled transition systems are trace inclusion and its structural counterpart simulation. For these, satisfaction is usually defined as the weakest relation implying conformance.
$$K \models (A, gl, G) \iff gl\{K, A\} \preceq gl\{G, A\}$$
Dominance is a key notion for reasoning about contracts rather than using refinement between components. Proving that a contract \(C\) dominates \(C'\) means showing...
that every component satisfying $\mathcal{C}$ also satisfies $\mathcal{C}'$. However, a dominance check involves in general not just a pair of contracts: a typical situation would be the one depicted in Figure 8.2, where a set of contracts $\{\mathcal{C}_i\}_{i=1}^n$ are attached to disjoint interfaces $\{\mathcal{P}_i\}_{i=1}^n$. Besides, a glue $gl_I$ is defined on $P = \bigcup_{i=1}^n \mathcal{P}_i$ and a contract $\mathcal{C}$ is given for $P$. In this context, a set of contracts $\{\mathcal{C}_i\}_{i=1}^n$ dominates a contract $\mathcal{C}$ w.r.t. a glue $gl_I$ if any set of components satisfying contracts $\mathcal{C}_i$, when composed using $gl_I$, makes a component satisfying $\mathcal{C}$.
**Definition 3 (Dominance).** Let $\mathcal{C}$ be a contract on $P$, $\{\mathcal{C}_i\}_{i=1}^n$ a set of contracts on $P_i$ and $gl_I$ a glue such that $S_{gl_I} = \bigcup_{i=1}^n \mathcal{P}_i$ and $P = P_{gl_I}$. Then $\{\mathcal{C}_i\}_{i=1}^n$ dominates $\mathcal{C}$ with respect to $gl_I$ iff for all components $\{K_i\}_{i=1}^n$:
$$ (\forall i : K_i \models \mathcal{C}_i) \implies gl_I\{K_1, \ldots, K_n\} \models \mathcal{C} $$
Note that this formal definition of dominance does not help establishing dominance in practice because looking at all possible components satisfying a contract is not realistic. What we need is a sufficient condition that refers to assumptions and guarantees, rather than components. One such condition is when the composition of the low-level guarantees $G_i$ satisfies the top-level contract $\mathcal{C}$ and furthermore each low-level assumption $A_i$ is discharged by the abstraction of its environment defined by the guarantees of the other components. Formally:
$$ \left\{ \begin{array}{l}
gl_I\{G_1, \ldots, G_n\} \models \mathcal{C} \\
\forall i : gl_{E_i}\{A, G_1, \ldots, G_{i-1}, G_{i+1}, \ldots, G_n\} \models \mathcal{C}_i^{-1}
\end{array} \right. \tag{8.1} $$
where for any contract $\mathcal{C}_i = (A_i, gl_i, G_i)$ we use the notation $\mathcal{C}_i^{-1}$ to denote the contract $(G_i, gl_i, A_i)$.
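For instance, for $n = 2$ components, condition (8.1) unfolds into three satisfaction checks (a direct instantiation, using the notation just introduced):

$$gl_I\{G_1, G_2\} \models \mathcal{C}, \qquad gl_{E_1}\{A, G_2\} \models \mathcal{C}_1^{-1}, \qquad gl_{E_2}\{A, G_1\} \models \mathcal{C}_2^{-1}$$

In words: the composed guarantees implement the top-level contract, and each assumption $A_i$ is discharged by the top-level assumption together with the guarantee of the other component.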
In the next subsection, we provide two rules which indeed make the previous condition sufficient for establishing dominance: one is similar to circular assume-guarantee reasoning and the other one deals with preservation of satisfaction by composition. This result is particularly significant because one can check dominance while avoiding composition of contracts, which is impossible in the general case and leads to state explosion in most concrete contract frameworks.
### 8.2.2 Reasoning within a Contract Framework
We use here the representation of satisfaction as a set of refinement under context relations $\sqsubseteq_{A, gl}$, where $K \sqsubseteq_{A, gl} G$ if and only if $K \models (A, gl, G)$. The usual non-circular assume-guarantee rule reads as follows in our context:

$$K \sqsubseteq_{A,gl} G \ \land\ E \sqsubseteq A \implies K \sqsubseteq_{E,gl} G$$

where $E \sqsubseteq A$ denotes that for any component $G$ and glue $gl$ such that $\sqsubseteq_{G,gl}$ is defined, $E \sqsubseteq_{G,gl} A$. This rule relates the behavior of $K$, when composed with the abstract environment $A$, to the behavior of $K$, when composed with its actual environment $E$. However, it is quite limited as it imposes a very strong condition on $E$. Hence the following rule, which is commonly referred to as circular reasoning:

$$K \sqsubseteq_{A,gl} G \ \land\ E \sqsubseteq_{G,gl} A \implies K \sqsubseteq_{E,gl} G$$

Note that $E$ and $K$ may symmetrically rely on each other. For a given contract framework, this property can be proven by an induction based on the semantics of composition and refinement. Unfortunately, circular reasoning is not sound in general. In particular, it does not hold for parallel composition with synchronizations (as in Petri nets or process algebras) or instantaneous mutual dependencies between inputs and outputs (as in synchronous formalisms). The following example illustrates one possible reason for the non-validity of circular reasoning.

¹ One may also need to ensure that the assumptions of the low-level contracts are indeed satisfied in the actual system. This is achieved by strengthening the definition with: $\forall E$ on $A$: if $E \models (G', gl', A')$ then $E \models (G, gl, A)$.
**Example 2.** Consider a contract framework where components are labeled transition systems and composition is strong synchronization between corresponding labels and interleaving of others, denoted $\parallel$. Define conformance as simulation and satisfaction as the usual relation defined in Example 1. The circular reasoning rule translates into: if $K \parallel A$ is simulated by $G \parallel A$ and $E \parallel G$ is simulated by $A \parallel G$, then $K \parallel E$ is simulated by $G \parallel E$. In the example of Figure 8.3, both $G$ and $A$ forbid a synchronization between $b_K$ and $b_E$; $K$ and $E$ may nevertheless each offer their action, relying on respectively $A$ and $G$ to forbid its actual occurrence. But obviously, the composition $K \parallel E$ now allows a synchronization between $b_K$ and $b_E$.

Fig. 8.3: $K \parallel A \preceq G \parallel A$ and $E \parallel G \preceq A \parallel G$ but $K \parallel E \not\preceq G \parallel E$.
Note that this satisfaction relation can be strengthened to obtain a more restrictive relation for which circular reasoning is sound. This is the approach taken for the L1 contract framework in Section 8.3.2, where we need circular reasoning to avoid composition of contracts.
² Note that non-determinism is another reason here for the non-validity of circular reasoning.
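To make the failure of circular reasoning tangible, here is a small executable sketch (in Python). The four two-state transition systems below are hypothetical stand-ins for those of Figure 8.3 (which is not reproduced here), chosen to exhibit the same effect; the code computes synchronized products and checks simulation by a greatest-fixpoint iteration, confirming that both premises of the circular rule hold while its conclusion fails.

```python
from itertools import product

# The only label is the shared action b (standing for the synchronization of
# b_K and b_E), so composition synchronizes everything; private labels would
# interleave, but none occur in this minimal instance.

K = ("k0", {"k0": {("b", "k1")}, "k1": set()})  # implementation: offers b
E = ("e0", {"e0": {("b", "e1")}, "e1": set()})  # actual environment: offers b
G = ("g0", {"g0": set()})                       # guarantee: forbids b
A = ("a0", {"a0": set()})                       # assumption: forbids b

def compose(left, right):
    """Product with strong synchronization on shared labels."""
    (li, lt), (ri, rt) = left, right
    trans = {}
    for p, q in product(lt, rt):
        trans[(p, q)] = {(a, (p2, q2))
                         for (a, p2) in lt[p] for (c, q2) in rt[q] if a == c}
    return ((li, ri), trans)

def simulated_by(small, big):
    """Greatest fixpoint: every move of `small` must be matched by `big`."""
    (si, st), (bi, bt) = small, big
    rel = set(product(st, bt))
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for (a, p2) in st[p]:
                if not any(c == a and (p2, q2) in rel for (c, q2) in bt[q]):
                    rel.remove((p, q))
                    changed = True
                    break
    return (si, bi) in rel

print(simulated_by(compose(K, A), compose(G, A)))  # True:  K||A simulated by G||A
print(simulated_by(compose(E, G), compose(A, G)))  # True:  E||G simulated by A||G
print(simulated_by(compose(K, E), compose(G, E)))  # False: K||E NOT simulated by G||E
```

The first two checks succeed only because each abstract context blocks the synchronization; once the real components meet, the blocked move becomes possible.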
A second rule, which is used for compositional reasoning in most frameworks, is:

$$I \sqsubseteq S \implies I \parallel E \sqsubseteq S \parallel E$$
It states that if an implementation \( I \) refines its specification \( S \) then it refines it in any environment \( E \). The equivalent of this rule for satisfaction is more complex as refinement here relates closed systems.
**Definition 4.** Satisfaction $\models$ is preserved by composition iff for any component $E$ and glue $gl$ such that $S_{gl} = \mathcal{P}_E \cup \mathcal{P}$ for some $\mathcal{P}$ such that $\mathcal{P} \cap \mathcal{P}_E = \emptyset$, and any $gl_E, E_1, E_2$ such that $E = gl_E\{E_1, E_2\}$, the following holds for any components $I, S$ on $\mathcal{P}$:

$$I \sqsubseteq_{E, gl} S \implies gl_1\{I, E_1\} \sqsubseteq_{E_2, gl_2} gl_1\{S, E_1\}$$

where $gl_1$ and $gl_2$ are such that $gl \circ gl_E = gl_2 \circ gl_1$.
We now have the ingredients to formalize our sufficient condition for dominance. This condition reduces a dominance proof to a set of satisfaction checks, one for the refinement between the guarantees and \( n \) for discharging individual assumptions.
**Theorem 1.** Suppose that circular reasoning is sound and satisfaction is preserved by composition. If $\forall i\ \exists gl_{E_i} : gl \circ gl_I = gl_i \circ gl_{E_i}$, then to prove that $\{\mathcal{C}_i\}_{i=1}^n$ dominates $\mathcal{C}$ w.r.t. $gl_I$, it is sufficient to prove that condition (8.1) holds.
### 8.3 Circular Reasoning in Practice

### 8.3.1 The L0 Framework

In the L0 framework, the behavior of a component is represented as a set of traces over its ports, and glues are themselves components, which include in their set of behaviors all the identity traces. Composition can then be taken as the intersection of the sets of behaviors of the components, together with the glue. To make this work, we must also equalize the ports of all trace sets using inverse projection \( \text{proj}^{-1}_{\mathcal{P}_1, \mathcal{P}} \), which extends behaviors over \( \mathcal{P}_1 \) with the appropriate additional ports of \( \mathcal{P} \). If we denote the interface of the composite as \( \mathcal{P}_{gl} \), and if \( \mathcal{K} = \{K_1, \ldots, K_n\} \) is a set of components such that \( \mathcal{P}_1, \ldots, \mathcal{P}_n \) are pairwise disjoint, then a glue \( gl \) for \( \mathcal{K} \) is a component \( K_{gl} \) defined on the ports \( \mathcal{P} = \mathcal{P}_{gl} \cup (\bigcup_{i=1}^{n} \mathcal{P}_i) \), and:
\[
K = gl\{K_1, \ldots, K_n\}
= \text{proj}_{\mathcal{P}_{gl}, \mathcal{P}} \left( K_{gl} \cap \text{proj}^{-1}_{\mathcal{P}_1, \mathcal{P}} (K_1) \cap \cdots \cap \text{proj}^{-1}_{\mathcal{P}_n, \mathcal{P}} (K_n) \right)
\]
The definition of \( \circ \) is straightforward: since glues are themselves components, their composition follows the same principle as component composition. Finally, the \( \cong \) relation on \( \mathcal{K} \) is taken as equality of sets of traces.
In the L0 model there exists a unique maximal component satisfying a contract \( \mathcal{C} \), namely \( M_{\mathcal{C}} = G \cup \neg A \), where \( \neg \) denotes the operation of complementation on the set of all behaviors over ports \( \mathcal{P}_A \). A contract \( \mathcal{C} = (A, G) \) is in canonical form when \( G = M_{\mathcal{C}} \). Every contract has an equivalent contract in canonical form, which is obtained by replacing \( G \) with \( M_{\mathcal{C}} \). The operation of computing a canonical form is well defined, since the maximal implementation is unique, and it is idempotent. It is easy to show that \( K \models \mathcal{C} \) if and only if \( K \subseteq M_{\mathcal{C}} \).
The L0 contract framework has strong compositional properties, which derive from its simple definition and operators [30]. The theory, however, depends on the effectiveness of certain operators, complementation in particular, which are necessary for the computation of canonical forms. While the complete theory can be formulated without the use of canonical forms, complementation remains fundamental in the definition of contract composition, which is at the basis of system construction. Circular reasoning is not sound for contracts which are not in canonical form (Example 2 is a counter-example in that case). This is a limitation of the L0 framework, since working with canonical forms could prove computationally hard.
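As an illustration of these definitions, the following sketch represents L0-style components as finite sets of traces over a fixed, explicitly enumerated universe. This is a drastic simplification (real L0 behaviors are arbitrary trace sets over ports; the universe and the trace values here are hypothetical example data), but it makes complementation, the maximal implementation $M_{\mathcal{C}} = G \cup \neg A$, and the satisfaction check $K \subseteq M_{\mathcal{C}}$ directly computable.

```python
# Toy finite-trace model of L0 contracts (hypothetical example data).

UNIVERSE = frozenset({"", "a", "ab", "b", "ba"})   # all behaviors considered

def complement(behaviors):
    return UNIVERSE - behaviors

def maximal_implementation(A, G):
    """M_C = G ∪ ¬A: unconstrained wherever the assumption is violated."""
    return G | complement(A)

A = frozenset({"", "a", "ab", "b"})   # assumption: environment never plays "ba"
G = frozenset({"", "a", "ab"})        # guarantee expected under that assumption
K = frozenset({"", "a", "ab", "ba"})  # implementation; "ba" violates A, not C

M = maximal_implementation(A, G)
print(K <= M)                              # True: K |= C  iff  K ⊆ M_C
print(maximal_implementation(A, M) == M)   # True: canonicalization is idempotent
```

Note that $K$ is allowed to behave arbitrarily on the trace "ba" precisely because that trace falls outside the assumption, which is the essence of the canonical form.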
### 8.3.2 The L1 Framework
L1 composition is based on interactions, which involve non-empty sets of ports. An interaction is defined by the components that synchronize when it takes place and the ports through which these components synchronize. Interactions are structured into connectors, which are used as a mechanism for encapsulation: only these connectors appear at the interface of a composite component. This makes it possible to abstract the behavior of a component in a black-box manner, by describing which connector is triggered but not exactly which interaction takes place. Furthermore, L1 is expressive enough to encompass synchronous systems.
**Definition 5.** An atomic component on an interface $\mathcal{P}$ is defined by an LTS $K = (Q, q^0, 2^\mathcal{P}, \rightarrow)$, where $Q$ is a set of states, $q^0$ is an initial state and $\rightarrow \subseteq Q \times 2^\mathcal{P} \times Q$ is a transition relation.
Note that atomic components are labeled by sets of ports rather than ports because we allow several ports of a component to be triggered at the same time.
**Definition 6.** An interaction is a non-empty set of ports. A connector $\gamma$ is defined by a set of ports $S_\gamma$ called the support set of $\gamma$, a port $p_\gamma$ called its exported port and a set $\mathcal{I}(\gamma)$ of interactions in $S_\gamma$.
The notions of support set and exported port are illustrated in Figure 8.4, where connectors relate in a composition a set of inner ports (of the subcomponents) to an outer port (of the composite component). One should keep in mind that a connector $\gamma$, and thus the exported port $p_\gamma$, represents a set of interactions rather than a single interaction.
Typical connectors represent rendezvous (only one interaction, equal to the support set), broadcast (all the interactions containing a specific port called trigger) and also mutual exclusion (some interactions but not their union).
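As a hypothetical illustration (ports chosen only for this example), consider connectors over the support set $S_\gamma = \{p, q, r\}$:

$$\text{rendezvous: } \mathcal{I}(\gamma) = \{\{p, q, r\}\}, \qquad \text{broadcast with trigger } p: \ \mathcal{I}(\gamma) = \{\{p\}, \{p, q\}, \{p, r\}, \{p, q, r\}\}$$

A mutual exclusion connector could instead export, e.g., $\mathcal{I}(\gamma) = \{\{p, q\}, \{p, r\}\}$: some interactions but not their union.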

We now define glues as sets of connectors which may be used together in order to compose components.
**Definition 7.** A glue $gl$ on a support set $S_{gl}$ is a set of connectors with distinct exported ports and with support sets included in $S_{gl}$.
A glue $gl$ defines as exported interface $\mathcal{P}_{gl}$ the set $\{p_\gamma \mid \gamma \in gl\}$. Besides, $\mathcal{I}(gl)$ denotes the set of all interactions of the connectors in $gl$, i.e.: $\mathcal{I}(gl) = \bigcup_{\gamma \in gl} \mathcal{I}(\gamma)$. In Figure 8.4, $gl$ is composed of connectors $\gamma$ and $\gamma'$ and defines a composite component denoted $gl\{K_1, K_2\}$.
**Definition 8.** A component is either an atomic component or it is inductively defined as the composition of a set of components $\{K_i\}_{i=1}^n$ with disjoint interfaces $\{\mathcal{P}_i\}_{i=1}^n$ using a glue $gl$ on $\mathcal{P} = \bigcup_{i=1}^n \mathcal{P}_i$. Such a composition is called a composite component on $\mathcal{P}_{gl}$ and it is denoted $gl\{K_i\}_{i=1}^n$.
So far, we have defined components and glues. Glues can be composed so as to allow flattening of components. Such a composition requires handling hierarchical connectors built by merging connectors defined at different levels of hierarchy. The definition of the operator \( \circ \) used for this purpose is omitted here and can be found in [102]. Connectors whose exported ports and support sets are not related are called disjoint and need not be composed. The operator \( \circ \) is then easily extended to glues: the composition \( gl \circ gl' \) of two glues \( gl \) and \( gl' \) is obtained from \( gl \cup gl' \) by inductively composing all connectors which are not disjoint.
We can now formally define the flattened form of a component. This in turn will allow us to provide an equivalence relation between components based on the semantics of their flattened form. A component is called flat if it is atomic or of the form \( gl\{K_1,\ldots,K_n\} \), where all \( K_i \) are atomic components. A component that is not flat is called hierarchical. A hierarchical component \( K \) is of the form \( gl\{K_1,\ldots,K_n\} \) such that at least one \( K_i \) is composite. Thus, such a \( K \) can be represented as \( gl(\{gl'\{\mathcal{K}_1\}\} \cup \mathcal{K}_2) \), where \( \mathcal{K}_1 \) and \( \mathcal{K}_2 \) are sets of components.
**Definition 9.** The flattened form of a component \( K \) is denoted \( flat(K) \) and defined inductively as follows:
- if \( K \) is a flat component, then \( flat(K) \) is equal to \( K \);
- otherwise, \( K \) is of the form \( gl(\{gl'\{\mathcal{K}_1\}\} \cup \mathcal{K}_2) \), and then \( flat(K) \) is the flattened form of \( (gl \circ gl')\{\mathcal{K}_1 \cup \mathcal{K}_2\} \).
**Definition 10.** The semantics \( \llbracket K \rrbracket \) of a flat component \( K = gl\{K_1,\ldots,K_n\} \) is defined as \( (Q,q^0,\mathcal{I}(gl),\rightarrow) \), where \( Q = \prod_{i=1}^n Q_i \), \( q^0 = (q^0_1,\ldots,q^0_n) \), and \( \rightarrow \) is such that: given two states \( q^1 = (q^1_1,\ldots,q^1_n) \) and \( q^2 = (q^2_1,\ldots,q^2_n) \) in \( Q \) and an interaction \( \alpha \in \mathcal{I}(gl) \), \( q^1 \xrightarrow{\alpha} q^2 \) if and only if for all \( i \): \( q^1_i \xrightarrow{\alpha_i} q^2_i \), where \( \alpha_i = \alpha \cap \mathcal{P}_i \).
We use the convention that \( \forall q : q \xrightarrow{\emptyset} q \), so that components not involved in an interaction do not move. Thus the semantics of a flat component is obtained as the composition of its constituting LTSs, where labels are synchronized according to the interactions of \( \mathcal{I}(gl) \).
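A direct reading of Definition 10 as code: the sketch below (Python; the two atomic components and the single rendezvous connector are hypothetical example data) computes the reachable part of the synchronized product, using the convention that a component whose projection of the interaction is empty stays in place.

```python
from itertools import product as cartesian

def semantics(components, interactions):
    """components: list of (ports, init_state, {state: {(label, succ), ...}})
    where labels are frozensets of ports; interactions: frozensets of ports."""
    def successors(i, state, alpha):
        ports, _, trans = components[i]
        local = alpha & ports                 # alpha_i = alpha ∩ P_i
        if not local:                         # not involved: stays in place
            return {state}
        return {s2 for (lbl, s2) in trans[state] if lbl == local}

    init = tuple(c[1] for c in components)
    states, edges, frontier = {init}, [], [init]
    while frontier:
        q = frontier.pop()
        for alpha in interactions:
            # each component moves on its projection of alpha
            for q2 in cartesian(*(successors(i, q[i], alpha)
                                  for i in range(len(components)))):
                edges.append((q, alpha, q2))
                if q2 not in states:
                    states.add(q2)
                    frontier.append(q2)
    return init, states, edges

# K1 over port p, K2 over port q; a single rendezvous connector {p, q}.
K1 = (frozenset("p"), 0, {0: {(frozenset("p"), 1)}, 1: set()})
K2 = (frozenset("q"), 0, {0: {(frozenset("q"), 1)}, 1: set()})
_, _, edges = semantics([K1, K2], [frozenset("pq")])
print(edges)  # [((0, 0), frozenset({'p', 'q'}), (1, 1))]
```

The single edge shows both components firing together on the rendezvous interaction, as the definition prescribes.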
We then define equivalence \( \cong \) as follows: two components are equivalent if their flattened forms have the same semantics. Note that in practice one would prefer to define the semantics of a hierarchical component as a function of the semantics of its constituting components. In the presence of encapsulation this requires distinguishing between closed and open systems and thus providing two different semantics. Details can be found in [102].
We now have the ingredients for defining the L1 component framework and we focus on its contract framework.
**Definition 11.** \( K_1 \preceq^{L1} K_2 \) if and only if \( \llbracket K_1 \rrbracket \) is simulated by \( \llbracket K_2 \rrbracket \).
Thus L1-conformance is identical to L0-conformance for components without non-observable non-determinism, and otherwise stronger. Note that in verification tools, in order to check trace inclusion efficiently, one will generally check simulation anyway. Satisfaction is defined as follows.
**Definition 12.** A component $K$ satisfies a contract $\mathcal{C} = (A, gl, G)$ for $\mathcal{P}_K$, denoted $K \models^{L1} (A, gl, G)$, if and only if:

$$\left\{ \begin{array}{l}
gl\{K, A_{det}\} \preceq^{L1} gl\{G, A_{det}\} \\
(q_K, q_A)\, \mathcal{R}\, (q_G, q'_A) \ \land\ \exists q'_K : q_K \xrightarrow{\alpha} q'_K \implies \exists q'_G : q_G \xrightarrow{\alpha} q'_G
\end{array} \right.$$

where $A_{det}$ is the determinization of $A$, $\mathcal{R}$ is the relation on states proving that $gl\{K, A_{det}\} \preceq^{L1} gl\{G, A_{det}\}$, and $\alpha \in 2^{\mathcal{P}_K}$ is such that $\exists \alpha' \in \mathcal{I}(gl) : \alpha \subseteq \alpha'$.
Thus $\models^{L1}$ strengthens the satisfaction relation used in the L0 framework by: 1) determinizing $A$; 2) requiring every transition of $K$ to have a counterpart in each related state of $G$ (unless it is structurally forbidden by $gl$), while the target states of the transition need to be related only if the environment allows this transition. As a consequence, $\models^{L1}$ allows circular reasoning.
### 8.3.3 Relaxed Circular Reasoning
We have presented in the previous sections two contract frameworks developed in the SPEEDS project. We now show how we use their respective tool chains together. Unifying the L0 and L1 component frameworks is quite straightforward. Nevertheless, we have introduced two different notions of satisfaction, $\models^{L0}$ and $\models^{L1}$, where the second one is strictly stronger than the first one. To combine results based on L0 and L1, we propose a rule called relaxed circular reasoning, involving two (possibly different) refinement relations:
$$K \sqsubseteq^1_{A, gl} G \land E \sqsubseteq^2_{G, gl} A \implies K \sqsubseteq^1_{E, gl} G$$
This rule generalizes circular and non-circular reasoning by not restricting $\sqsubseteq^2_{G, gl}$ to refinement under context $\sqsubseteq^1_{G, gl}$ or refinement in any context $\sqsubseteq^1$. Depending on which relation is the most restrictive it can be used in two different ways:
1. If the first relation allows circular reasoning and is stronger than the second one (i.e. $K \sqsubseteq^1_{A, gl} G \implies K \sqsubseteq^2_{A, gl} G$) then our new rule relaxes circular reasoning by requiring $E \sqsubseteq^2_{G, gl} A$ rather than $E \sqsubseteq^1_{G, gl} A$.
2. Symmetrically, if the first relation does not allow circular reasoning and refinement in any context $\sqsubseteq^1$ is stronger than the second one, then this rule relaxes non-circular reasoning by requiring $E \sqsubseteq^2_{G, gl} A$ rather than $E \sqsubseteq^1 A$.
Interestingly, relaxed circular reasoning can be used both ways for L0- and L1-satisfaction. First, it leads to a relaxed sufficient condition for dominance in L1.
**Theorem 2.** $K \sqsubseteq^{L1}_{A, gl} G \land E \sqsubseteq^{L0}_{G, gl} A$ implies $K \sqsubseteq^{L1}_{E, gl} G$.
**Theorem 3.** If $\forall i\ \exists gl_{E_i} : gl \circ gl_I = gl_i \circ gl_{E_i}$, then the following is sufficient to prove that $\{\mathcal{C}_i\}_{i=1}^n$ dominates $\mathcal{C}$ w.r.t. $gl_I$:

$$\left\{ \begin{array}{l}
gl_I\{G_1, \ldots, G_n\} \models^{L1} \mathcal{C} \\
\forall i : gl_{E_i}\{A, G_1, \ldots, G_{i-1}, G_{i+1}, \ldots, G_n\} \models^{L0} \mathcal{C}^{-1}_i
\end{array} \right.$$
In that case, checking that contracts \( \{ \mathcal{C}_i \}_{i=1}^n \) L1-dominate a contract \( \mathcal{C} \) requires one L1-satisfaction check and \( n \) L0-satisfaction checks. This is particularly interesting since checking L0-satisfaction may be achieved by using other tools or approaches (that may not need circular reasoning). Moreover, dominance can be established more often as L1-satisfaction is stronger than L0-satisfaction. Second:
**Theorem 4.** \( K \sqsubseteq_{A, gl}^{L0} G \land E \sqsubseteq_{G, gl}^{L1} A \) implies \( K \sqsubseteq_{E, gl}^{L0} G \).
This result made it possible in SPEEDS to incorporate results from tools checking L0-satisfaction with results obtained through L1-dominance (implemented by a set of L1-satisfaction checks), thus building a complete tool chain.
### 8.4 Conclusion and Future Work
The work presented in this paper has been motivated by the necessity of combining contract-based verification tools and corresponding results for two component frameworks L0 and L1 defined in the context of the European SPEEDS project. In particular, we were interested in using dominance results established in L1 — and which cannot be obtained using the L0 refinement relation — for further reasoning in L0. To that purpose, we have presented an abstract notion of contract framework for a given component framework that defines three different notions of refinement, that is, conformance, dominance and satisfaction. We show how to derive these notions from refinement of closed systems and refinement under context and we provide a methodology for compositional and hierarchical verification of global properties.
We have studied circular reasoning as a powerful means for proving dominance. As circular reasoning does not always hold for usual notions of refinement, we provide proof rules for dominance relying on a relaxed notion of circular reasoning based on two notions of refinement. We have then shown that our abstract framework is general enough to represent both L0 and L1 as specific instances and proved that the L0 and L1 refinement relations satisfy the condition for relaxed circular reasoning.
This approach was applied only to simple case studies in the SPEEDS project and should therefore rather be seen as a proof of concept. The practical relevance of such an approach is that it opens up ways of connecting tools that work at different levels of abstraction, and of relating their results to prove stronger properties. In addition, our results relax the requirements on the tools, since circular reasoning would not be needed at the L0 level.
**Acknowledgements** This work was supported in part by the EU projects COMBEST (n. 215543) and ArtistDesign (n. 214373).
References

18. AUTOSAR Consortium: AUTOSAR Technical Overview, Version 2.2.2. http://autosar.org
Balarin, F., Chiodo, M., Giusto, P., Hsieh, H., Jurecska, A., Lavagno, L., Passerone, C., Sangiovanni-Vincentelli, A., Sentovich, E., Suzuki, K., Tabbara, B. (eds.): Hardware-Software Co-Design of Embedded Systems: The POLIS Approach. Kluwer Academic Publishers, Norwell, MA, USA (1997)
Balarin, F., Watanabe, Y., Hsieh, H., Lavagno, L., Passerone, C., Sangiovanni-Vincentelli, A.L.: Metropolis: an integrated electronic system design environment. IEEE Computer 36(4), 45–52 (2003)
… doi: 10.1109/TSE.2004.9
… for real-time systems design. In: Proceedings of the 32nd EUROMICRO Conference on Software Engineering and Advanced Applications, EUROMICRO ’06, pp. 108–117. IEEE (2006). doi: 10.1109/EUROMICRO.2006.14
24. Bartolini, C., Lipari, G., Di Natale, M.: From functional blocks to the synthesis of the architectural model in embedded real-time applications. In: Proceedings of the 11th IEEE Real Time and Embedded Technology and Applications Symposium, RTAS ’05, pp. 458–. IEEE (2005). doi: 10.1109/RTAS.2005.24
25. Bartolini, C., Lipari, G., Natale, M.D.: From Functional Blocks to the Synthesis of the Architectural Model in Embedded Real-time Applications. In: IEEE Real-Time and Embedded Technology and Applications Symposium (2005)
27. Behrmann, G., David, A., Larsen, K.G.: A tutorial on UPPAAL. In: M. Bernardo, F. Corradini (eds.) Formal Methods for the Design of Real-Time Systems (SFM-RT 2004), LNCS, vol. 3185, pp. 200–236. Springer Berlin / Heidelberg (2004)
… Concurrency and Petri Nets, LNCS, vol. 3098, pp. 87–124. Springer Berlin / Heidelberg (2004)
29. Benveniste, A., Caillaud, B., Ferrari, A., Mangeruca, L., Passerone, R., Sofronis, C.: Multiple viewpoint contract-based specification and design. In: F.S. de Boer, M.M. Bonsangue, S. Graf, Willem-Paul de Roever (eds.) Formal Methods for Components and Objects, 6th International Symposium (FMCO 2007), Amsterdam, The Netherlands, October 24–26, 2007, Revised Papers, Lecture Notes in Computer Science, vol. 5382, pp. 200–225. Springer Verlag (2008)
30. Benveniste, A., Caillaud, B., Passerone, R.: A generic model of contracts for embedded systems. Rapport de recherche 6214, Institut National de Recherche en Informatique et en Automatique (2007)
32. Benvenuti, L., Ferrari, A., Mangeruca, L., Mazzi, E., Passerone, R., Sofronis, C.: A contract-based formalism for the specification of heterogeneous systems. In: Proceedings of the Forum on Specification, Verification and Design Languages (FDL 2008), pp. 142–147. Stuttgart, Germany (2008)
76. Douglass, B.: Real Time UML. Pearson Education (2009)
129. The Institute of Electrical and Electronics Engineers, Inc.: IEEE Standard for Local and metropolitan area networks–Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks (2011)
Verification of Business Process Quality Constraints
Based on Visual Process Patterns
Alexander Förster, Gregor Engels, Tim Schattkowsky
Dept. of Computer Science
University of Paderborn
Warburger Strasse 100, 33098 Paderborn, Germany
Email: {alfo,engels,timschat}@upb.de
Ragnhild Van Der Straeten
System and Software Engineering Lab
Vrije Universiteit Brussel
Pleinlaan 2, 1050 Brussel, Belgium
Email: rvdstrae@vub.ac.be
Abstract
Business processes usually have to consider certain constraints like domain-specific and quality requirements. The automated formal verification of these constraints is desirable, but requires the user to provide an unambiguous formal specification. In particular, since the notations for business process modeling are usually visual, flow-oriented languages, the notational gap to the languages usually employed for the formal specification of constraints, e.g., temporal logic, is significant and hard to bridge. Thus, our approach relies on UML Activities as a single language for the specification of both business processes and the corresponding constraints. For the expression of such constraints, we have provided a process pattern definition language based on specialized Activities. In this paper, we describe how model checking can be employed for the formal verification of business processes against such patterns. For this, we present an automated transformation of the business process and the corresponding patterns into a transition system and temporal logic, respectively.
1 Introduction
Effective and reliable business processes are a major building block for the success of modern enterprises. However, such business processes and their corresponding models can become very complex. For the design, understanding, and maintenance of large and complex processes it is necessary that process constraints, i.e., properties and requirements related to the processes, can be verified.
In the context of business processes, such requirements are for example legal, domain specific, or quality requirements. In particular with the rising popularity of modern Total Quality Management (TQM) systems, the question whether such quality requirements are fulfilled by a business process becomes increasingly important.
The quality requirements contained in a TQM system are usually given in natural language and are therefore difficult to check against existing business processes. The same usually holds for other domain-specific requirements in organizations. In order to make requirements for processes manageable and to enable automated verification, they need to be specified in a precise, formal way, such that exact methods can be applied to verify the correct fulfillment of the specified requirements by given business processes. Furthermore, it should also be possible to specify quality requirements such that they are easy to formulate, read, and apply for people like quality managers, domain experts, and process designers.
These two different demands are at first sight contradictory. One way out of this dilemma is to define a language that allows specifying quality requirements in a user-friendly way, yet has a clear formal underpinning. In previous works, we have already proposed an approach for modeling process constraints, in particular related to quality management, that is based on process patterns [7]. These process patterns can be visually modeled using a subset of UML 2.0 Activities [14] with light-weight extensions based on stereotypes, called the Process Pattern Specification Language (PPSL). A complementary pattern-based development process and some further extensions of the PPSL can be found in [8].
UML Activities have become a widespread modeling language for business processes. Such Activities are usually represented as Activity Diagrams. Many process developers are familiar with the syntax of Activity Diagrams and their meaning. The PPSL as an extension to UML Activities allows process developers to also model process constraints based on the language they are familiar with.
In previous works we have described the construction of business processes with respect to process patterns reflecting quality constraints. Based on that, we propose the verification of existing processes to ensure conformance to a given set of process constraints. The constraints are visually modeled as a process pattern using the PPSL. However, the definition of a precise meaning of the PPSL elements is a necessary prerequisite to allow verification of the process constraints. Therefore, we will define the semantics of the PPSL by presenting a translation of PPSL models into temporal logic.
The next section discusses related work before Sect. 3 introduces the PPSL together with a small example. In Sect. 4, we present our approach in three consecutive steps. First, we formalize the behavior of the business process as a labeled transition system (LTS). Second, we provide an explicit translation of the PPSL elements into temporal logic. Finally, we show how the temporal logic formulas can be checked against the LTS representation by a state-of-the-art model checker. We describe a preliminary tool chain implemented as an integrated workbench to facilitate the design and verification process for the business process designer. Thus, our approach supports the business process designer in determining if the behavior of the business process conforms to the requirements specified by a process pattern.
2 Related Work
For the topic of modeling constraints for business processes using a comprehensible visual notation consistent with that of the business process, the related work falls into these categories: workflow and process patterns, checking formal properties of workflows and processes, modeling behavioral predicates, and Activity Diagram semantics.
Van der Aalst et al. [18] have devised a number of workflow patterns concerning different types of control flows in workflow systems. Their aim was mainly to demonstrate the expressiveness and capabilities of existing workflow management systems and workflow specification languages. Unlike our approach, their process patterns cover mainly technical concepts like all kinds of different basic and complex control flows, and they are focused on Petri nets. Approaches like [1] and [16] consider the application of process patterns to software development processes. However, these approaches cover aspects specific to software development and contain no formal underpinning or means for automatic checking.
There are approaches checking formal properties of workflow models, like Van der Aalst and Kindler [13]. These approaches focus on formally verifying general properties like soundness, fairness, and termination. In contrast, our approach allows the verification of user-defined, specific properties like quality management requirements or domain specific requirements.
In [3], Deveraux and Chechik present an approach for building behavioral models of event-driven systems. These models can then be verified over a given software program to conform to certain kinds of temporal or causal properties. However, the approach assumes that the software program is already presented as a Kripke structure and thus remains at an abstract level. For our application area, this is not sufficient: the transformation of the actual application language into such structures is not elaborated, and the definition of the properties to be checked is too general. Thus, our approach employs similar basic ideas, but at a different granularity and up to the level of the real application language, including a generic approach to defining process constraints.
In [12], the authors propose an approach for model checking of business processes using temporal logic. The approach is based on a proprietary process modeling language. The authors provide formalizations for some basic sorts of constraints. However, there is no support for user-declared constraints.
To allow model checking of UML Activity Diagrams, we need to employ a formal semantics for Activities. In [17], the authors provide a translation into Petri nets. However, this translation does not consider some important semantic properties of Activities like traverse-to-completion. Also, the non-local semantics of some model elements like ActivityFinalNodes is not covered in this approach. In [5], the author presents an in-depth coverage of a translation of UML Activities into the input language of NuSMV. This semantics description is based on the UML 1.x finite state machine semantics for Activities. UML 2 Activities have a completely different semantics based on token flow. Unfortunately, the translation is therefore not applicable.
In [6], we have introduced the general idea of using patterns to describe quality requirements for business processes. In [7], we introduced a pattern language and laid the ground for a formulation of an abstract pattern-instance relationship for process patterns. In this paper, we focus on specifying the formal semantics of process patterns for automatic conformance checking. Related work includes general works on the application of process patterns as well as the verification of behavioral properties in such processes.
3 Quality Assurance in Business Processes
In this section we use as an example a business process that is a slightly adapted version of one of the example processes in the UML Specification [14, p. 312], as shown in Fig. 1. To briefly recapitulate the PPSL, we state some domain specific and quality management requirements and model them using the PPSL. In succeeding sections we will show how the corresponding process patterns can be translated into temporal logic and verified against given Activity Diagram based processes. As a first process constraint we can state:
Process constraint #1: Before an order is being closed, records of the received orders have to be made.
The constraint implies that the Action “report order” is executed at some point before the Action “close order” is executed, but it does not require that the Action “report order” is executed directly before “close order”.
It is an important property of typical process requirements that they frequently contain rather loose or incomplete temporal/logical relationships between Actions. In a concrete business process there may be many other Actions executed in between “report order” and “close order” without contradicting the pattern. Since the original semantics of an ActivityEdge as described in the UML Superstructure is that Action “close order” is enabled immediately when Action “report order” terminates [14], we introduced the stereotype ≪after≫ for an ActivityEdge to express that some Action has to be executed after another but not necessarily directly following it. Stereotyping of model elements is the standard extension mechanism of the UML. Using stereotypes, model elements can be given additional or extended semantics. Figure 2a shows process constraint #1 modeled in our PPSL. The curly line in Fig. 2a is a visualization option of the ≪after≫ stereotype. In the remainder, we refer to this sort of stereotyped ActivityEdge as AfterEdge.
Being able to express such loose order relationships in process patterns is also a necessary prerequisite to enable flexible application of the process patterns, since pattern actions and actions of the original business process usually need to be woven together. If the pattern designer wants to specify that there may not be other Actions being executed in between two Actions of a pattern, a regular ActivityEdge without stereotype can be used in the pattern.
Process constraint #1 could be read in two directions. Either “every time an order is closed this has to be preceded by reporting an order” or “every report of an order must be followed by closing the order”. It is important to be able to distinguish these two cases in the process constraint language. This can be done using the stereotype ≪all≫ for Actions. It denotes whether the implication given by the AfterEdge in the constraint refers to all “close order” Actions or all “report order” Actions. In the remainder, we will refer to an Action having an ≪all≫ stereotype as AllAction. The multi-node notation in Figs. 2 and 3 is a visualization option of the AllAction. It is also possible to use AllActions on both sides of the AfterEdge or ActivityEdge, denoting that both implications have to be fulfilled. Consequently, it is a well-formedness rule for our language that at least one of two Actions being connected by an AfterEdge or ActivityEdge is an AllAction.
The next process constraint that we want to consider is:
Process constraint #2: After each production action a quality check has to be performed prior to delivery.
Process constraint #2 is similar to process constraint #1 but, precisely speaking, contains two different constraints put together. The first requirement is that after each production action there has to be a quality check, and the second requirement is that before shipping a product, the quality has to be checked. This is why the actions “produce” and “ship” in the process pattern are AllActions. The use of a regular ActivityEdge between “test quality” and “ship” sets the requirement that shipping has to be directly preceded by the quality test. There may not be other actions executed in between these two actions.
If we now compare the process constraints with the example business process in Fig. 1, we can see that it does not have an action called “produce” like the pattern in Fig.
Process constraint #3: Before an order is being closed, either records of payments made or records of the fact that the order was rejected have to be taken. Each payment received shall be reported.
Process constraint #4: When an order is filled, a product has to be shipped and an invoice has to be sent.
Figure 3a shows process constraint #3. It demands that one of the two Actions “report rejected order” or “report payment” has to be performed before the bill is being closed, while “report payment” has to be executed after a payment was received. Conditional control flows, modeled by Fork-, Join-, Decision-, and MergeNodes, can be used in the PPSL like in regular Activity Diagrams to express such constraints.
Process constraint #4 is shown in Fig. 3b. Parallel control flows in the pattern mean that the actions of these control flows may be executed concurrently. When the pattern is applied, the parallel control flows should generally be sustained in the resulting process. However, parallelism in the process patterns can just be an expression of the fact that the order in which Actions are executed is irrelevant. Accordingly, in a business process any valid interleaving between the concurrent actions of the pattern is a correct application of the pattern, as shown in Fig. 3c. This conforms to the semantics of parallel control flows described in the UML 2.0 Superstructure, where real parallelism is not enforced.
In the remainder we want to give a precise, formal notion to when a business process model conforms to a process pattern and therefore respects the process constraints, such as quality requirements, encoded in the pattern.
4 Formalization
The principal aim is to be able to check whether a concrete business process conforms to a given process pattern. Fig. 4 describes the employed verification process.
To perform the conformance check we first need to specify the exact execution semantics of the given business process. Then, we need to precisely define how patterns modeled in the PPSL constrain this business process to finally be able to check these process patterns.
To provide the execution semantics of the business processes, we use the Dynamic Meta-Modeling (DMM) framework developed at the University of Paderborn [4]. The DMM framework is a semantics description method for visual modeling languages in general which combines a denotational meta modeling framework for expressing static semantics with operational rules capturing the behavior of the elements. For a detailed description of the DMM approach we refer to [4]. DMM has been successfully applied to statecharts [4], to UML 1.x sequence diagrams [11], and to UML 2.0 Activity Diagrams [10]. We briefly explain this approach in the next section and we show how the resulting interpretation of business process models is utilized in our approach.
As shown in the previous section, the process patterns specify logical and temporal constraints over the business process. Therefore, the semantics of the PPSL is provided by temporal logic formulas. We provide an explicit translation from a process pattern into temporal logic formulas. This translation is defined and exemplified in Sect. 4.2.
Please note that the PPSL is designed to allow modeling, formalizing, and verifying constraints for business processes; it is not intended to be a graphical notation for temporal logic in general.
We show how the concrete example process patterns of Sect. 3 are translated into temporal logic and whether the business process of Fig. 1 conforms to these patterns. Finally, in Sect. 4.3, we show how the verification process shown in Fig. 4 can be embedded in a tool chain using state-of-the-art model checkers. This tool supports the business process designer in verifying the application of the process patterns he/she selected.
4.1 Generation of the Labeled Transition System
The semantics of a visual language is defined in the DMM framework by a semantic domain meta model and a set of meta operations. The semantic domain meta model describes the semantical concepts of the language. For example, to be able to express the semantics of Activity Diagrams, the semantic concept ActionExecution is defined as a class in the semantic domain meta model. This concept denotes a currently running execution of an Action. For each semantic concept that relates to behavior, the framework captures this behavior in a set of meta operations. The meta operations are defined by rules represented as UML communication diagrams. These communication diagrams are given a formal interpretation based on graph transformation rules.
Given the set of DMM rules for a particular language and a user-defined model expressed in the same language, a labeled transition system (LTS) that reflects all possible behaviors of the model is generated by a DMM interpreter. In the DMM approach, the GROOVE (GRaphical Object-Oriented VErification) tool set [15] has been chosen as the DMM interpreter to produce the resulting labeled transition system.
Using the GROOVE tool, the set of DMM rules for UML 2.0 Activity Diagrams, and given a user-defined Activity Diagram, which is in our case the business process, we can generate an LTS that specifies the exact execution paths of the Activity Diagram. Figure 5 shows an excerpt of the resulting LTS from the example in Fig. 1. Each state in the LTS represents a state in the execution of the Activity Diagram. The labels in the states represent the fact that the corresponding Action is actually executing.
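For illustration, the structure of such an LTS can be captured by a small data type. The following Python sketch is ours, not part of the DMM tool chain, and its field names are assumptions:

```
from dataclasses import dataclass

@dataclass
class LTS:
    """Labeled transition system generated from an Activity Diagram.
    Each state is labeled with the names of the Actions executing in it."""
    labels: dict[int, frozenset[str]]    # state id -> executing Action names
    successors: dict[int, set[int]]      # state id -> successor state ids
    initial: int = 0

# A tiny fragment in the spirit of Fig. 5: a state executing "report order"
# is eventually followed by a state executing "close order".
lts = LTS(
    labels={0: frozenset(),
            1: frozenset({"report order"}),
            2: frozenset({"close order"})},
    successors={0: {1}, 1: {2}, 2: set()},
)
```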
A name of an Action in the business process refers to a certain Behavior. Since the business process and the process pattern may have been devised by different persons using different Behavior namespaces, a mapping needs to be defined. This mapping is part of the tool chain described in Sect. 4.3. For the formalization, without loss of generality, we assume that the Behavior namespaces are synchronized.
The DMM approach for UML Activity Diagrams incorporates the semantics of the UML 2.0 Superstructure [14] for model elements of the packages StructuredActivities and IntermediateActivities. These semantics implemented in DMM include all important issues described in the UML Specification like traverse-to-completion, the fact that Actions capture all of their input tokens in one atomic step, etc. Concurrency in the Activity Diagram leads to a transition system that contains all possible interleavings between the concurrent Actions.
In the next section we specify how the process patterns can be translated into temporal logic formulas which can be checked against the transition system.
4.2 Formalization of Process Patterns
The formalization of the process patterns is presented in two consecutive steps. First, the notion of Pattern Graph is defined. Secondly, the translation of a process pattern into temporal logic is established.
A process pattern is represented by a Pattern Graph.
**Definition 1.** A Pattern Graph (PG) is a tuple $PG = (N, E)$ where $N$ is the set of nodes and $E \subseteq N \times N$ is the set of edges. The notation $e(n_1, n_2)$ is equivalent to $e \in E \land e = (n_1, n_2)$.
The set $N$ is divided into different disjoint subsets, $N = N_a \cup N_d \cup N_m \cup N_f \cup N_j$ where $N_a$ is the set of ActionNodes, $N_d$ is the set of DecisionNodes, $N_m$ is the set of MergeNodes, $N_f$ is the set of ForkNodes, and $N_j$ is the set of JoinNodes.
The set of ControlNodes, denoted by $N_c$, is defined as $N_c = N_d \cup N_m \cup N_f \cup N_j$.
The set of edges $E$ is divided into two disjoint sets $E = E_d \cup E_a$, where $E_d$ is the set of ActivityEdges and $E_a$ is the set of AfterEdges, i.e., $E_d \cap E_a = \emptyset$.
The set of AllActions is denoted by $N_{all} \subseteq N_a$.
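For concreteness, Definition 1 can be encoded directly; the following Python sketch is illustrative only, since the paper prescribes no particular data structure:

```
from dataclasses import dataclass

@dataclass(frozen=True)
class PatternGraph:
    """Pattern Graph PG = (N, E) of Definition 1, with N and E partitioned."""
    actions: frozenset[str]                     # N_a
    decisions: frozenset[str]                   # N_d
    merges: frozenset[str]                      # N_m
    forks: frozenset[str]                       # N_f
    joins: frozenset[str]                       # N_j
    activity_edges: frozenset[tuple[str, str]]  # E_d: regular ActivityEdges
    after_edges: frozenset[tuple[str, str]]     # E_a: AfterEdges, disjoint from E_d
    all_actions: frozenset[str]                 # N_all, a subset of N_a

    @property
    def control_nodes(self) -> frozenset[str]:  # N_c
        return self.decisions | self.merges | self.forks | self.joins
```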
In the remainder of this section, we explain how process patterns can be expressed by \textit{Linear-time temporal logic} (LTL) formulas. LTL has appropriate expressive power for the formalization of the semantics of the PPSL. We use the temporal connectives \(F\) to denote “some Future state”, \(G\) to denote “all future states (Globally)”, and \(X\) to denote “the next state”. We use LTL with the past operators \(O\) to denote “previously” and \(Y\) to denote “the previous state”. The use of LTL with past operators makes the formulation of some of the formulas significantly shorter and more intuitive. It shall be noted that past operators do not increase the complexity of LTL model checking and can be equivalently converted to future-only LTL \cite{9}.
The translation of the pattern into temporal logic formulas is defined recursively. We first determine the translation of the basic PPSL elements into temporal logic formulas as shown in Tab. 1. Using this recursive translation, the translation of a pattern graph corresponding to a process pattern is defined.
\textbf{Actions}. Let \(a \in N_a\); the Action is translated into a proposition. This proposition corresponds to the name representing the Action in the transition system representing the business process under study. The fixed set of propositions considered for the translation of the patterns is the set of the action names occurring in the generated transition system as described in Sect. 4.1.
\textit{Two Actions are connected through an Edge}. We specify as a well-formedness rule of the pattern graph that there has to be an AllAction on at least one side of an Edge, i.e.,
\[
\forall e = (n_1, n_2) \in E : n_1 \in N_{all} \lor n_2 \in N_{all} \quad (1)
\]
Rows 2 to 4 in Tab. 1 show the three possible ways in which Actions can be connected by an AfterEdge (conforming to the well-formedness rule (1)) and their respective translations into LTL formulas.
The LTL formula \(G(a \rightarrow F b)\) (first conjunct of row 2) expresses that each time \(a\) is executed it is eventually followed by the execution of \(b\). The formula \(G(b \rightarrow O a)\) expresses that if an execution of \(b\) exists, it has to be preceded by an execution of \(a\). The conjunction of both LTL formulas states the meaning of an AllAction \(a\) connected to an AllAction \(b\) through an AfterEdge.
As an example consider process constraint \#1 (cf. Sect. 3). This requirement will be translated to the following LTL formula:
\[
G(\text{close order} \rightarrow O \text{report order}) \quad (2)
\]
The AfterEdge specified in this pattern spans nearly the whole business process. For the business process of Fig. 1 to fulfill this constraint it is important that the alternative and parallel parts of the business process are all merged and joined properly. Thus it is guaranteed that the execution of the business process finally reaches “close order” at some point after “report order”, so formula (2) holds.

Checking process constraint #1 has some interesting implications. Say the business process designer wants to make an alteration to the business process such that if the quality check fails, the process should be terminated. Figure 6 shows the alteration in the process model. The semantics of the FinalNode as described in the UML Specification is that all tokens in the Activity that is executed will be terminated immediately. The transition system resulting from the DMM transformation reflects this behavior. Accordingly, formula (2) will evaluate to false after the alteration, meaning that it is no longer guaranteed that the order will be closed. If somebody had put the alteration shown in Fig. 6 somewhere in the middle of a much bigger business process, such deficiencies would probably be much harder to detect manually.
Process constraint #2 results in two LTL formulas, which both have to be fulfilled.
\[
G(\text{produce} \rightarrow F \text{test quality}) \quad (3)
\]
\[
G(\text{ship} \rightarrow Y \text{test quality}) \quad (4)
\]
Similar to the case where two Actions are connected through an AfterEdge, three cases can be distinguished where two Actions are connected through a regular ActivityEdge. Again, an AllAction can be followed by an AllAction, an AllAction can be followed by an Action, or an Action can be followed by an AllAction through an Edge. How the three different constraints are translated is shown in rows 5 to 7 in Tab. 1. These LTL formulas are equal to the LTL formulas representing the corresponding kinds of Actions connected through an AfterEdge, except that the temporal connectives \(O\) and \(F\) are replaced by \(Y\) and \(X\), respectively.
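To make the edge translation of Tab. 1 concrete, the following sketch builds the LTL formula for a single edge; the helper name and the textual LTL syntax are our assumptions:

```
def edge_to_ltl(a: str, b: str, after_edge: bool, all_actions: set[str]) -> str:
    """Translate one PPSL edge a -> b into LTL following Tab. 1.
    AfterEdges use the loose connectives F/O, ActivityEdges the strict X/Y."""
    fut, past = ("F", "O") if after_edge else ("X", "Y")
    clauses = []
    if a in all_actions:                  # every execution of a must reach b
        clauses.append(f"G({a} -> {fut} {b})")
    if b in all_actions:                  # every execution of b must follow a
        clauses.append(f"G({b} -> {past} {a})")
    if not clauses:
        raise ValueError("well-formedness rule (1): one endpoint must be an AllAction")
    return " & ".join(clauses)

# Process constraint #1 (only "close order" is an AllAction):
# edge_to_ltl("report_order", "close_order", True, {"close_order"})
# yields "G(close_order -> O report_order)", i.e., formula (2).
```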
The question now arises how control nodes in the pattern graph have to be interpreted. Table 2 provides for each control node the general translation to LTL formulas in the case of AfterEdges connected to the control node. In the remainder of this section, we explain the translation of the control nodes in detail.
There are two additional well-formedness rules for the use of control nodes. For the DecisionNode and the ForkNode, either the Action preceding the control node has to be an AllAction or all nodes following the control node have to be AllActions (5). For the MergeNode and the JoinNode, a similar well-formedness rule applies in the opposite direction (6).
\[
e(a, c) \in E \land c \in N_d \cup N_f \land \forall i = 1, \ldots, n : e_i(c, b_i) \Rightarrow a \in N_{all} \lor \forall i = 1, \ldots, n : b_i \in N_{all} \quad (5)
\]
\[
e(c, b) \in E \land c \in N_m \cup N_j \land \forall i = 1, \ldots, n : e_i(a_i, c) \Rightarrow b \in N_{all} \lor \forall i = 1, \ldots, n : a_i \in N_{all} \quad (6)
\]
As already explained in case of two Actions connected by an edge, generally different formulas have to be created depending on whether the node(s) preceding or succeeding the control node are AllActions. Therefore, for each control node we will explain two cases.
**DecisionNodes** Let us at first assume all edges are AfterEdges. If \( a \in N_{all} \) and \( \forall i \in \{1, \ldots, n\} : b_i \notin N_{all} \), the corresponding pattern is translated to the LTL formula \( G(a \rightarrow \bigvee_{i=1,\ldots,n} F b_i) \). This formula expresses whenever an Action \( a \) is executed, eventually at least one of the \( b_i \) (\( i \in \{1, \ldots, n\} \)) will be executed, reflecting the choice semantics of the DecisionNode. If some \( b_i \) nodes are also AllActions, this means that the execution of these \( b_i \) Actions need to be eventually preceded by the execution of an \( a \) Action. This implies that a formula \( G(b_i \rightarrow O a) \) needs to be added for each \( b_i \in N_{all} \). If \( a \) is not an AllAction but only the \( b_i \) nodes are AllActions (remark that in this case, following our well-formedness rule it is mandatory that all \( b_i \) nodes are AllActions) the corresponding pattern is translated to the LTL formula \( \bigwedge_{i=1,\ldots,n} G(b_i \rightarrow O a) \), only.
Regular ActivityEdges can also be used to connect an Action with a DecisionNode and vice versa. There are always two edges of the pattern involved in each subpart of the resulting formula, i.e., the edge from Action $a$ to the DecisionNode and the edge from the DecisionNode to $b_i$. If both edges are regular ActivityEdges, the translations as specified in Tab. 2 have to be changed by replacing the temporal connectives $O$ and $F$ by $Y$ and $X$, respectively. If at least one of the two edges is an AfterEdge, it shall overrule the regular ActivityEdge and the temporal connectives $F$ and $O$ remain. This does not only hold for DecisionNodes but for each ControlNode.
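The DecisionNode case can be sketched analogously (again an illustrative helper of ours, assuming all involved edges are AfterEdges):

```
def decision_to_ltl(a: str, succs: list[str], all_actions: set[str]) -> list[str]:
    """DecisionNode with incoming edge from a and outgoing edges to succs
    (cf. Tab. 2). Returns the clauses whose conjunction is the translation."""
    clauses = []
    if a in all_actions:                          # choice: at least one branch
        branches = " | ".join(f"F {b}" for b in succs)
        clauses.append(f"G({a} -> ({branches}))")
    for b in succs:
        if b in all_actions:                      # each AllAction branch needs a
            clauses.append(f"G({b} -> O {a})")
    return clauses
```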
**MergeNodes** Again, with $m \in N_m$, we make a distinction between the case where $\forall i = 1, \ldots, n : e_i(a_i, m) \land a_i \in N_{all}$ and the case where $e(m, b) \in E \land b \in N_{all}$.
<table>
<thead>
<tr>
<th>Model element</th>
<th>Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Action</td>
<td>ActionName</td>
</tr>
<tr>
<td>AfterEdge between AllActions</td>
<td>$G(a \rightarrow F b) \land G(b \rightarrow O a)$</td>
</tr>
<tr>
<td>AfterEdge between AllAction and Action</td>
<td>$G(a \rightarrow F b)$</td>
</tr>
<tr>
<td>AfterEdge between Action and AllAction</td>
<td>$G(b \rightarrow O a)$</td>
</tr>
<tr>
<td>ActivityEdge between AllActions</td>
<td>$G(a \rightarrow X b) \land G(b \rightarrow Y a)$</td>
</tr>
<tr>
<td>ActivityEdge between AllAction and Action</td>
<td>$G(a \rightarrow X b)$</td>
</tr>
<tr>
<td>ActivityEdge between Action and AllAction</td>
<td>$G(b \rightarrow Y a)$</td>
</tr>
</tbody>
</table>

Table 1. Translation of the PPSL elements into LTL.
The first case expresses that each execution of $a_i$ (for $i = 1, \ldots, n$) is eventually followed by an execution of $b$. The second case expresses that each execution of $b$ is preceded by at least one execution of an Action $a_i$ (for $i = 1, \ldots, n$). This results in the LTL formula $G(b \rightarrow \bigvee_{i=1}^{n} O a_i)$.
As an example consider again process constraint #3. First of all, there is an AfterEdge from the Action “receive payment” to the Action “report payment” resulting in formula 7. The Actions “report rejected order” and “receive payment” are connected to the MergeNode with AfterEdges. The MergeNode is connected to “close bill” via an AfterEdge and the Action “close bill” is an AllAction. Using our translation the following LTL formula is obtained:
$$G(\text{receive\_payment} \rightarrow F\, \text{report\_payment}) \quad (7)$$

$$G(\text{close\_bill} \rightarrow (O\, \text{report\_payment} \lor O\, \text{report\_rejected\_order})) \quad (8)$$
The LTL formula 8 specifies that whenever the Action close_bill is executed, it has to be preceded by the execution of the Action report_payment or by the execution of the Action report_rejected_order.
**ForkNodes** Let $f \in N_f$ and $a \in N_{all}$ and $e(a, f) \in E$ and $\forall i = 1, \ldots, n : e_i(f, b_i)$ and $b_i \in N$. This results in the LTL formula $G(a \rightarrow \bigwedge_{i=1}^{n} F b_i)$. This formula expresses that on each path where $a$ is executed, this execution has to be eventually followed by the execution of all the $b_i$ Actions ($\forall i = 1, \ldots, n$). If at least one $b_i$ is an AllAction, this additionally results in the LTL formula $\bigwedge_{b_i \in N_{all}} G(b_i \rightarrow O a)$.
As an example consider again process constraint #4. First of all, there is an AfterEdge from the Action “fill order” to the ForkNode. The ForkNode has two outgoing AfterEdges that connect the ControlNode to the Actions “ship” and “send invoice”, respectively. Using our translation the following LTL formula is obtained:
$$G(\text{fill\_order} \rightarrow (F\, \text{ship} \land F\, \text{send\_invoice})) \quad (9)$$
The LTL formula (9) specifies that whenever the Action fill_order is executed, it has to be eventually followed by the execution of the ship Action and by the execution of the send_invoice Action.
**JoinNodes** Consider $j \in N_j$ and $b \in N_{all}$ and $e(j, b) \in E$ and $\forall i = 1, \ldots, n : e_i(a_i, j)$ and $a_i \in N$.
<table>
<thead>
<tr>
<th>Model element</th>
<th>General Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>DecisionNode</td>
<td>$(a \in N_{all} \Rightarrow G(a \rightarrow \bigvee_{i=1}^{n} F b_i)) \land \bigwedge_{i=1}^{n} (b_i \in N_{all} \Rightarrow G(b_i \rightarrow O a))$</td>
</tr>
<tr>
<td>MergeNode</td>
<td>$\bigwedge_{i=1}^{n} (a_i \in N_{all} \Rightarrow G(a_i \rightarrow F b)) \land (b \in N_{all} \Rightarrow G(b \rightarrow \bigvee_{i=1}^{n} O a_i))$</td>
</tr>
<tr>
<td>ForkNode</td>
<td>$(a \in N_{all} \Rightarrow G(a \rightarrow \bigwedge_{i=1}^{n} F b_i)) \land \bigwedge_{i=1}^{n} (b_i \in N_{all} \Rightarrow G(b_i \rightarrow O a))$</td>
</tr>
<tr>
<td>JoinNode</td>
<td>$\bigwedge_{a_i \in N_{all}} G(F a_i \rightarrow F b) \land (b \in N_{all} \Rightarrow G(b \rightarrow \bigwedge_{i=1}^{n} O a_i))$</td>
</tr>
</tbody>
</table>

Table 2. Translation of the ControlNodes into LTL.
This results in the LTL formula $G(b \rightarrow \bigwedge_{i=1}^{n} O a_i)$ expressing that if Action $b$ is executed it has to be preceded by the execution of all Actions $a_i$ (for $i = 1, \ldots, n$). Each $a_i \in N_{all}$ additionally results in the LTL formula $G(F a_i \rightarrow F b)$ expressing that if $a_i$ is eventually executed, then Action $b$ also has to be eventually executed.
4.3 Tool Chain
We have set up a tool chain for the verification process (cf. Fig. 4) of process patterns in business processes. For this purpose, we have developed an integrated workbench as an Eclipse plugin. Figure 7 shows a typical situation. On the left hand side, different business processes and patterns can be organized in projects. In the upper part, a business process is being modeled using the built-in Activity Diagram editor. In the middle part, process patterns can be modeled using the PPSL. Triggered by user interaction, the conformance of the business process with selected process patterns can be checked automatically. The result of the model checker is presented in the lower part of the workbench. The layout of the different editors and views of the workbench can be customized by the user, as is typical for the Eclipse workbench.
When the user triggers the verification, the complete tool chain of Fig. 4 is enacted automatically. The transition system generated by GROOVE is automatically translated into the input language of the NuSMV model checker [2].
The selected process pattern is automatically translated into temporal logic formulas as described in the previous section. Finally, the model checker is started with the transition system and the temporal logic formulas as input.
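A driver for this last step could look roughly as follows; the file name, the check of NuSMV's textual output, and the helper name are assumptions of this sketch, not part of our workbench:

```
import subprocess

def check_patterns(smv_model: str, ltl_formulas: list[str]) -> bool:
    """Write the generated transition system plus the pattern formulas to a
    NuSMV input file and run the model checker on it."""
    with open("process.smv", "w") as f:
        f.write(smv_model)
        for phi in ltl_formulas:
            f.write(f"\nLTLSPEC {phi}")      # one specification per pattern clause
    result = subprocess.run(["NuSMV", "process.smv"],
                            capture_output=True, text=True)
    return "is false" not in result.stdout   # NuSMV reports violated specifications
```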
A future version is intended to allow for visual back-annotation of the result of the model checker in the pattern editor. Also, at a later stage, the system is intended to also interactively assist a process developer in correctly implementing process patterns into existing processes that do not yet fulfill the requirements given by a process pattern.
The implementation of the verification process is written in a modular way. Translating business process models and translating the process patterns are independent activities, as is the checking of the patterns. Therefore, individual tools can be exchanged easily.
5 Discussion and Conclusion
In this paper, we have introduced an approach to automatically check process constraints and demonstrated the application for checking quality constraints in business processes. In our approach, such process constraints are formally described through process patterns based on UML Activities. These patterns are the basis for checking business processes for conformance with the respective process constraints. For this, the process patterns are transformed into temporal logic while the business process is transformed into a transition system. Together, this enables the application of model checking for ensuring conformance of the business process to the patterns defining the required process constraints. Thus, this technique allows formal verification of process constraints in business processes.
Furthermore, we have introduced tool support for defining and verifying such constraints by means of an Eclipse plugin. In a current project, this tool will be used to verify large-scale industry processes from the banking sector. Increasing “industrialization” in the finance business leads to the demand for well-defined business processes that interact seamlessly. Therefore, many requirements related to the processes have to be defined and verified. This will also be the basis to further investigate whether additional PPSL model elements and corresponding semantics are necessary to be able to express all typical sorts of constraints that occur in practice.
There are some more issues that need to be investigated. Different patterns can depend on each other or even contradict one another. The knowledge of these interdependencies between patterns can be used in tool support to increase the efficiency of the pattern checking process. Finally, we will also investigate how the occurrence of a process pattern can be located in a business process model.
References
Figure 7. Eclipse plugin for modeling and checking quality constraint patterns
Establishing Theoretical Minimal Sets of Mutants
Paul Ammann∗, Marcio E. Delamaro†, and Jeff Offutt∗
∗Software Engineering, George Mason University, Fairfax, VA, USA
Emails: {pammann,offutt}@gmu.edu
†Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos, SP, Brazil
Email: delamaro@icmc.usp.br
Abstract—Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants that are killed by a test set. But mutation analysis is also known to provide large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper’s contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method with a widely-used benchmark.
Keywords - Mutation testing, minimal mutant sets, dynamic subsumption
I. INTRODUCTION
Mutation analysis [5] is an approach to generating tests that distinguish all of a set of variants, or mutants, from some artifact. Mutation analysis is widely considered to be a powerful approach, so much so that other approaches to test generation are commonly evaluated on the basis of mutation score. One long-standing problem with using mutation score to evaluate other approaches is the presence of “redundant” mutants that do not contribute in any material way to the quality of a test set. For example, some mutants are killed by almost any test. Hence, eliminating such mutants from consideration does not affect which tests are chosen, but does result in a different mutation score. In other words, mutation scores can be inflated by redundant mutants, and this can make the mutation score harder to interpret.
The research area of reduced mutation has focused on achieving high quality test sets with fewer mutants [20], [23], [27], [29], [22], [21], [26], [6]. Selective mutation is a reduced mutation approach that limits the set of mutation operators to a subset of the available operators [20], [23], [27], [29], [22], [21], [26], [6]. Some approaches to reduced mutation limit the number of mutants considered to a random subset of mutants generated [19], [24]. Other approaches analyze relationships between specific mutants and remove redundant mutants [13], [11], [14]. Still others engineer higher-order-mutants (HOMs) that subsume one or more first-order mutants (FOMs)1. While these approaches clearly reduce the number of mutants under consideration, there is still a significant research gap. Specifically, there is no way to measure how close reduction techniques get to the goal of minimizing the number of mutants created while maintaining the quality of the corresponding test set.
This paper addresses exactly that research gap. We develop a theoretical framework for determining minimal sets of mutants. In particular, we show that, given a test set, a particular type of subsumption, called dynamic subsumption, enables efficient computation of minimal sets of mutants. We evaluate our approach against a benchmark set of programs and tests.
It is important to appreciate the role of the test set in our approach. Computing minimal mutant sets for all possible test sets is clearly undecidable; it is the fact that we limit attention to a particular test set that makes our approach computable. One way to think of our approach is that it approximates a limit: If one were able to run every possible test, then determining minimal sets of mutants with dynamic subsumption would, in fact, be both sound and complete. That is, any computed minimal mutant set would be, in fact, a “real” minimal mutant set. A corollary of this observation is that the more comprehensive the test set used in the analysis, the more accurate the resulting computation of minimal mutant sets.
Existing approaches to reduced mutation that use subsumption, such as the HOM approach, rely on detailed white-box analysis of the artifact under consideration. If a HOM is engineered to subsume several other mutants, then a test that kills that HOM will, of course, kill the subsumed mutants. However, equivalent mutants, that is, mutants that compute the same function as the original artifact, complicate the situation. If a HOM happens to be equivalent, or if the test engineer simply fails to find a test that kills the HOM, then the subsumption relationship does not help, since there may be tests that kill one or more of the subsumed mutants.
In contrast to the HOM approach to subsumption, our model takes a black-box perspective. We consider only the behavior of some fixed artifact in the context of a specific set of mutants and a specific set of test cases. In particular, our notion of subsumption is only assumed to hold with respect to the specific set of test cases under consideration, and it is possible
1In the development of Jia et al. [10], one mutant subsumes a second mutant if every test that kills the first mutant is guaranteed also to kill the second. The same notion of subsumption is used to reduce the number of logic mutants generated for DNF predicates [12].
that the subsumption relation would not hold for a different set of tests. Essentially, we replace the risk of equivalent mutants, which affects the HOM approach to subsumption, with the risk of incomplete test sets\textsuperscript{2}.
Our approach to modeling has two advantages. First, it frees us from the details of any particular programming language or artifact and lets us model the problem in a very general way. Second, it allows us to provide a precise definition for what constitutes a minimal set of mutants. While the definition itself is not constructive, the main result of the paper shows that a different notion of subsumption, called dynamic subsumption, completely characterizes mutant set minimality.
We used the Siemens suite [9], [7] to show the impact of our model. The Siemens suite includes a large number of tests. The evaluation shows that the size of the minimal mutant sets is much smaller than what current approaches to reduced mutation achieve. The evaluation further shows that high mutation scores from different approaches to reduced mutation on a given test set are potentially misleading; once redundant mutants are removed, the scores are lower, sometimes much lower. In other words, there is substantial room for improvement in choosing mutants. Correspondingly, users of mutation scores should be cautious; large numbers of redundant mutants may make such scores misleading.
Again, it is important to appreciate the role of the chosen test set in the analysis of minimal mutants: generating a different test set might result in a different set of minimal mutants. That being said, most applications of mutation analysis end up with exactly one test set—namely the first set that kills enough mutants. From a practical perspective, an important question in this context is simply, “How many mutants (and which ones) are really needed to end up with this test set?” It is only in the context of the chosen test set that we determine which mutants are relevant.
The paper is structured as follows. Section II introduces a score function model for describing the relationship between mutants and test cases, and then develops the main theoretical results about minimal mutant sets. Section III applies the Proteum mutation tool to the Siemens suite of programs and computes minimal test and mutant sets for a specific initial set of tests. Section IV discusses related work. Section V puts the results into context and concludes the paper.
II. MODEL
This section presents a formal model for minimizing sets of mutants with respect to a test set. The model does not address any details of the artifact from which mutants are generated. Rather, it captures the “black-box” relationship of precisely which test cases kill which mutants.
A. Definitions
Let $M$ be a finite set of mutants on some artifact $P$. $P$ may be any testable artifact amenable to mutation analysis—a program, a specification, a design, etc. Let $m_i$, possibly subscripted, denote an element of $M$. Denote the cardinality of $M$ as $|M|$.
Let $T$ be a finite test set for $P$. Let $t$, possibly subscripted, denote an element of $T$. Denote the cardinality of $T$ as $|T|$.
The boolean score function $S$ specifies which mutants each test kills. Specifically, $S(i, j), i = 1, \ldots, |T|, j = 1, \ldots, |M|$, is true iff test $t_i$ kills mutant $m_j$. So $S$ can be considered to be a binary matrix with $|T|$ rows and $|M|$ columns.
$T$ is mutation-adequate for $M$ if for each mutant $m_j \in M$, there is some test $t_i \in T$ such that $S(i, j)$ is true. The development in this paper does not require that the test set in the score function be mutation-adequate. From a practical perspective, our algorithms can be applied at any stage of testing. The richer the test set $T$ is, the more mutants a minimal mutant set requires to capture the behavior exhibited by the artifact with respect to that test set.
In terms of the score function, if $T$ is not mutation adequate, then there will be at least one mutant $m$ in $M$ that is live, which means that no test $t_i \in T$ kills $m$. A live mutant $m$ may be equivalent, or $T$ may rather be missing a suitable test that kills $m$. Each live mutant has a column in the score function without any true entries. Instead of insisting on mutation adequacy, we constrain our minimization procedures to maintain the effectiveness of mutation, evaluated by which mutants are killed by a given test set. Formally, a subset of $T$, denoted $T_{\text{maintain}}$, maintains the mutation score with respect to $M$ (and $T$) if for every mutant $m \in M$, if $T$ kills $m$ then $T_{\text{maintain}}$ kills $m$.
The score function captures all of the information about the mutants and tests of interest in this paper. If two tests kill precisely the same set of mutants, we consider the tests to be indistinguished, even though, in terms of the domain of $P$, the tests may have different input values. Similarly, if two mutants are killed by precisely the same set of tests, we consider the mutants to be indistinguished (thus far), even though the mutants may involve different syntactic changes to the underlying artifact. Indeed, indistinguished mutants may well cause different semantic changes to the underlying artifact, but these semantics are simply not captured by the test set $T$, and hence are not reflected in the score function $S$. Put another way, if $T$ were augmented with additional tests, these additional tests might distinguish previously indistinguished mutants.
Below we show a score function for an example with five tests and four mutants: $T = \{t_1, t_2, t_3, t_4, t_5\}$ and $M = \{m_1, m_2, m_3, m_4\}$. $T$ is mutation-adequate, all tests in $T$ are distinguished, and all mutants in $M$ are also distinguished. We use this score function as a running example through the rest of this section.
<table>
<thead>
<tr>
<th></th>
<th>$m_1$</th>
<th>$m_2$</th>
<th>$m_3$</th>
<th>$m_4$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$t_1$</td>
<td>t</td>
<td>t</td>
<td></td>
<td>t</td>
</tr>
<tr>
<td>$t_2$</td>
<td></td>
<td></td>
<td>t</td>
<td></td>
</tr>
<tr>
<td>$t_3$</td>
<td>t</td>
<td></td>
<td>t</td>
<td></td>
</tr>
<tr>
<td>$t_4$</td>
<td>t</td>
<td>t</td>
<td>t</td>
<td>t</td>
</tr>
<tr>
<td>$t_5$</td>
<td>t</td>
<td>t</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
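One convenient encoding of this score function (our own illustration; the paper does not prescribe a representation) is a pair of mappings between tests and the mutants they kill:

```
# kills[t] is the set of mutants killed by test t (the rows of S).
kills = {
    "t1": {"m1", "m2", "m4"},
    "t2": {"m3"},
    "t3": {"m1", "m3"},
    "t4": {"m1", "m2", "m3", "m4"},
    "t5": {"m1", "m2"},
}

# The dual view: killers[m] is the set of tests that kill mutant m.
mutants = set().union(*kills.values())
killers = {m: {t for t, ms in kills.items() if m in ms} for m in mutants}
```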
\textsuperscript{2}Both the problem of determining whether a mutant is equivalent and the problem of finding a test case that kills a mutant are, of course, undecidable.
Observation 1: Score Function Boundedness.
The score function has, at most, $2^{|M|}$ distinguished rows. The reason is that each distinguished test kills some specific subset of $M$ and there are exactly $2^{|M|}$ such subsets.
Observation 1 is important because it makes clear that although the domain of $P$ may be large or unbounded, the number of distinguished rows in the score function is bounded. Put another way, the score function can identify every possible input in the domain of $P$ with one of $2^{|M|}$ equivalence classes, depending on which mutants that input kills.
B. Minimal Sets of Tests
The key theoretical contribution of this paper is describing sensible minimizations to the score function. The motivation for minimizing tests is straightforward: if killing mutants is the goal, why run tests that do not increase the mutation score? Minimal test sets directly help the practicing test engineer.
The motivation for minimizing mutants has less to do with the practicing test engineer than with mutation testing researchers. The motivation for minimizing mutants is identifying the theoretical boundary of just how many mutants are required, and comparing existing mutation analysis methods against this boundary to see whether they can be improved, and, if so, potentially how much. While this theoretical lower bound may never be reached, it gives testing researchers an important tool. By knowing what’s possible, we can objectively evaluate the effectiveness of our current engineering techniques to reduce the number of mutants. That is, this analysis gives us a firm bound against which to measure.
First we address test set minimization, a well-understood process that we include here for completeness.
Definition 1: Minimal test sets. A test set $T$ is minimal iff for any test $t_i \in T$, $T - \{t_i\}$ does not maintain the mutation score with respect to $M$ and $T$.
Note that minimality of a test set depends on exactly which mutants are used.
There may be multiple minimal test sets, possibly of varying cardinalities, for any given test set $T$. Let $\bar{T}_M = \{T_1, T_2, \ldots\}$ denote the set of all possible minimal test sets with respect to mutant set $M$. Any element of $\bar{T}_M$ with the smallest cardinality is not only minimal, but also minimum.
In the example, $\bar{T}_M$ contains three minimal test sets:

$$\bar{T}_M = \{\{t_4\}, \{t_1, t_2\}, \{t_1, t_3\}\}$$
Note that a given test need not be part of any minimal test set. In the example, $t_5$ is not in any minimal test set. Of the three minimal test sets, one, namely $\{t_4\}$, has least cardinality (equal to 1), and hence is minimum.
Although finding a minimum test set is, like many optimization problems, computationally hard\(^3\), generating a minimal test set is straightforward. Algorithm 1 generates a minimal test set with time complexity $|T| \times |M|$. Note that Algorithm 1 selects tests for removal in an arbitrary order. If Algorithm 1 is applied to all possible permutations of tests in $T$, then it will generate all possible minimal test sets.
Algorithm 1: Test set minimization
```
# Input: kills[t] = set of mutants killed by test t (the score function)
#        tests    = the test set T, in an arbitrary removal order
# Output: a minimal test set
def minimize_tests(kills, tests):
    killed = set().union(*(kills[t] for t in tests))    # score to maintain
    min_set = list(tests)
    for t in list(min_set):                             # note: t selected arbitrarily
        rest = [r for r in min_set if r != t]
        still = set().union(*(kills[r] for r in rest)) if rest else set()
        if still == killed:                             # removal maintains the score
            min_set = rest
    return set(min_set)
```
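Applied to the running example (with the kills mapping sketched in Sect. II-A), the removal order determines which minimal set this sketch returns:

```
minimize_tests(kills, ["t5", "t4", "t3", "t2", "t1"])   # -> {"t1", "t2"}
minimize_tests(kills, ["t1", "t2", "t3", "t5", "t4"])   # -> {"t4"}
```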
C. Minimal Sets of Mutants
We now turn to the problem of minimizing $M$, a topic that, to our knowledge, has not been previously addressed in the literature. We propose the following informal rationale for declaring mutants to be “unnecessary”:
Testing $P$ without considering unnecessary mutants should yield the exact same “results” as testing $P$ with the full set of mutants $M$.
Building on this rationale, the only tests that a given set of mutants can “force” to be in a test set are those in some minimal test set. Hence, we define unnecessary mutants in terms of minimal test sets. We require that $M$ generate precisely the same set of minimal test sets both with and without a redundant mutant. Recall that $\bar{T}_M$ denotes the set of minimal test sets of $T$ with respect to some particular set of mutants $M$. The key part of the definition is the equality at the end:
Definition 2: Redundant mutants.
Let $M_j = M - \{m_j\}$ for some mutant $m_j \in M$. We say that $m_j$ is redundant with respect to mutant set $M$ and test set $T$ iff $\bar{T}_M = \bar{T}_{M_j}$.
Again, note that this definition of redundant mutants is in the context of a particular test set $T$. Computing $\bar{T}$ for various mutant sets in the running example, first the full mutant set $M$, and then $M$ with each mutant removed in turn, yields:
$$\bar{T}_M = \{\{t_4\}, \{t_1, t_2\}, \{t_1, t_3\}\}$$

$$\bar{T}_{M_1} = \{\{t_4\}, \{t_1, t_2\}, \{t_1, t_3\}\}$$

$$\bar{T}_{M_2} = \{\{t_4\}, \{t_1, t_2\}, \{t_1, t_3\}\}$$

$$\bar{T}_{M_3} = \{\{t_4\}, \{t_1\}\}$$

$$\bar{T}_{M_4} = \{\{t_4\}, \{t_1, t_2\}, \{t_1, t_3\}, \{t_2, t_5\}, \{t_3, t_5\}\}$$
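These sets can be reproduced mechanically. The following brute-force sketch (ours; exponential, so suitable only for small examples like this one) enumerates all minimal test sets with respect to a mutant set:

```
from itertools import combinations

def all_minimal_test_sets(kills, tests, mutants):
    """Enumerate every subset of tests that maintains the mutation score
    restricted to `mutants` and has no removable test (Definition 1)."""
    def score(ts):
        return (set().union(*(kills[t] for t in ts)) & mutants) if ts else set()
    full = score(tests)
    minimal = []
    for r in range(1, len(tests) + 1):
        for subset in map(set, combinations(tests, r)):
            if score(subset) == full and \
               all(score(subset - {t}) != full for t in subset):
                minimal.append(subset)
    return minimal

# all_minimal_test_sets(kills, ["t1","t2","t3","t4","t5"],
#                       {"m1","m2","m3","m4"})
# -> [{"t4"}, {"t1","t2"}, {"t1","t3"}]
```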
\(^3\)Finding a minimum test set is NP-complete: it is an instance of the Set Covering Problem (SCP) [16], where the universe is the set of mutants $M$, and the family of subsets of $M$ is given by the rows of the score function, $S$.
Note that $\bar{T}_M$, $\bar{T}_{M_1}$, and $\bar{T}_{M_2}$ are identical. This means that both $m_1$ and $m_2$ are redundant with respect to $M$. If a pair of redundant mutants are indistinguishable, it is possible that we might only be able to remove one of them safely. Consider the case where a mutant $m_x$ is not redundant with respect to $M$. If some additional mutant $m_y$ is indistinguishable from $m_x$ and we form $M \cup \{m_y\}$, then only one of $m_x$ or $m_y$ can be removed from $M \cup \{m_y\}$ without altering the associated minimal test sets. Algorithm 2, based on the dynamic subsumption relation described later in the paper, clarifies precisely which mutants can be removed safely. In particular, only one mutant from each set of indistinguishable mutants is (possibly) needed; beyond that, all redundant mutants can be safely removed.
Since $m_1$ and $m_2$ are distinguished and redundant, both can safely be removed from $M$ without altering the minimal test sets, thereby yielding a minimal mutant set of $\{m_3, m_4\}$. In this example, there is only one minimal mutant set.
When a redundant mutant $m$ is removed, it is possible that tests that were distinguished with respect to $M$ are no longer distinguished with respect to $M - \{m\}$. From the practical perspective, this means that the test engineer has a choice about which test to use when constructing a minimal test set. In the example above, for the minimal set of mutants $\{m_3, m_4\}$, tests $t_2$ and $t_3$ are indistinguishable.
**Definition 3: Minimal mutant sets.**
Mutant set $M$ is minimal if it contains no redundant mutants.
We show the score function after minimization for our running example.
<table>
<thead>
<tr>
<th></th>
<th>$m_3$</th>
<th>$m_4$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$t_1$</td>
<td></td>
<td>$t$</td>
</tr>
<tr>
<td>$t_2$</td>
<td>$t$</td>
<td></td>
</tr>
<tr>
<td>$t_3$</td>
<td>$t$</td>
<td></td>
</tr>
<tr>
<td>$t_4$</td>
<td>$t$</td>
<td>$t$</td>
</tr>
<tr>
<td>$t_5$</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Although this example has only one minimal mutant set, there are potentially many minimal mutant sets.
Because there are a large number of minimal test sets for any given set of mutants, the definition of minimal mutant sets, which relies on comparing the associated minimal test sets, does not lend itself directly to an efficient algorithm. Hence, the next challenge is to develop a way to compute efficiently which mutants are redundant.
**D. Efficiently Computing Minimal Sets of Mutants**
We turn to the notion of subsumption. Traditionally, one mutant is defined to subsume another for all possible executions based on internal reasoning about the artifact being mutated or the mutation operator in question. For example, mutants that negate a term in a Disjunctive Normal Form (DNF) predicate subsume mutants that negate the entire DNF formula. A variety of these relationships are shown in the fault hierarchy of Lau and Yu [18]. The proof of subsumption relies on properties of predicates expressed in DNF.
In this paper, we define a different notion of subsumption strictly in terms of black-box behavior of mutants $M$ on a test set $T$ as captured by the score function. Crucially, this new notion of subsumption does not necessarily hold for all possible executions. Rather it is only guaranteed to hold for executions in the set $T$. Specifically, consider two mutants $m_x$ and $m_y$ where every test in $T$ that kills $m_x$ also kills $m_y$.
**Definition 4: Dynamic subsumption.**
If mutant $m_x$ is not live and $S(i, x) \rightarrow S(i, y)$ for $i = 1..|T|$, we say that $m_x$ dynamically subsumes $m_y$ with respect to $T$.
Dynamic subsumption differs from the notion used in white-box mutation analysis in a crucial respect: Not only are tests that kill $x$ also required to kill $y$, but $T$ also has to have at least one test that kills $x$. In other words, dynamic subsumption disallows “vacuous” subsumption, which would be possible if we did not have a test that killed $x$. For example, it is possible, through white-box analysis, to design a HOM $m$ that subsumes several other mutants, but it is (usually) not possible to tell if $m$ is equivalent. Since we work in the black-box context of a specific set of test cases $T$, the score function can distinguish among live mutants.
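The check is a single pass over the score function. The following Java sketch is illustrative only; the matrix encoding of $S$ and all names are our assumptions, matching the sketch of Algorithm 1 given earlier.

```java
// Returns true iff mutant x dynamically subsumes mutant y with respect to
// the tests encoded in S, where S[i][x] is true iff test i kills mutant x.
static boolean dynamicallySubsumes(boolean[][] S, int x, int y) {
    boolean xKilled = false;
    for (boolean[] row : S) {
        if (row[x] && !row[y]) return false; // S(i,x) -> S(i,y) violated
        if (row[x]) xKilled = true;          // x is killed by some test
    }
    return xKilled; // requiring a killing test rules out vacuous subsumption
}
```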
In any set $M$ that contains both $m_x$ and $m_y$, if $m_x$ dynamically subsumes $m_y$, then $m_y$ is redundant, and hence may be safely discarded, a fact we prove in the first part of Theorem 1 below.
Perhaps surprisingly, dynamic subsumption completely captures the notion of redundant mutants. That is, the only way in which a mutant becomes redundant is for it to be dynamically subsumed by some other mutant in $M$, a fact we prove in the second part of Theorem 1 below. The main result of this paper formalizes these two properties:
**Theorem 1: Dynamic subsumption and minimal test sets.**
Mutant set $M$ is minimal with respect to test set $T$ iff there does not exist a distinct pair $m_x, m_y \in M$ such that $m_x$ dynamically subsumes $m_y$.
**Proof:**
**Step 1:** If $M$ is minimal, then there does not exist a distinct pair $m_x, m_y \in M$ such that $m_x$ dynamically subsumes $m_y$.
We proceed by contradiction. Suppose there exist $m_x$ and $m_y$ such that $m_x$ dynamically subsumes $m_y$. Consider the process of producing a minimal test set for either $M$ or $M - \{m_y\}$ by applying Algorithm 1. If Algorithm 1 considers tests in the same order in each case, and if the **if** test in Algorithm 1 always comes to the same conclusion, then Algorithm 1 produces the same minimal test set for both $M$ and $M - \{m_y\}$. Since this would happen for all possible orders of choosing tests, it means that $\bar{T}_M = \bar{T}_{M_y}$. But this would mean that $M$ is not minimal—a contradiction.
Hence, the proof comes down to considering whether, at some stage of Algorithm 1, the **if** test evaluates differently for
some test $t$ with respect to $M$ and $M - \{m_y\}$. We proceed by case analysis:
- Case 1: $t$ can be removed during the minimization with respect to $M$, but not the corresponding minimization with respect to $M - \{m_y\}$. Dynamic subsumption has nothing to do with this case. Rather, if a test is not needed for a particular set of mutants, it is clearly not needed for any subset either. Hence, Case 1 is impossible.
- Case 2: $t$ can be removed during the minimization with respect to $M - \{m_y\}$, but not the corresponding minimization with respect to $M$. In Algorithm 1, the variable minSet must retain some test that kills $m_x$, and that test, by dynamic subsumption, kills $m_y$ as well. Hence, $m_y$ cannot be the reason that $t$ must be kept for set $M$. In other words, the **if** decision must be the same for both $M$ and $M - \{m_y\}$. Hence, Case 2 is impossible.
**Step 2:** If there does not exist a distinct pair $m_x, m_y \in M$ such that $m_x$ dynamically subsumes $m_y$, then $M$ is minimal.
To show this part, for each $m_x$ in $M$, we incrementally construct a test set $T_x$ around $m_x$. We show that this test set is minimal with respect to $M - \{m_x\}$, but does not maintain the mutation score with respect to $M$. Hence $m_x$ is not redundant, and cannot be removed from the mutant set. Since we show this for each mutant in the set, the set $M$ must be minimal.
To construct $T_x$, consider each other mutant $m_y$ in $M$. There must be some test in $T$ that kills $m_y$ but does not kill $m_x$, or else $m_y$ would dynamically subsume $m_x$. Include this test in $T_x$. Note that $T_x$ kills every mutant except for $m_x$. Choose some minimal set $\hat{T}_x \subseteq T_x$ using Algorithm 1. Note that $\hat{T}_x$ is minimal with respect to $M - \{m_x\}$ but does not maintain the mutation score with respect to $M$. Hence no $m_x \in M$ is redundant, and so $M$ is minimal with respect to $T$.
QED
Algorithm 2 uses Theorem 1 to efficiently compute minimal mutant sets. First, live mutants are removed. Next, indistinguishable mutants are removed. Finally, dynamically subsumed mutants are removed.
**Algorithm 2: Mutant set minimization**
```plaintext
// Input: Mutant set M; Score function S
// Output: A minimal mutant set
remove live mutants from S
remove duplicate columns from S
minSet = remaining columns in S
subsumed = dynamically subsumed mutants in minSet
return (minSet - subsumed);
```
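Under the same assumed matrix encoding of the score function, Algorithm 2 can be sketched as follows. This is our illustration, not the paper's implementation: one pass removes live mutants and collapses duplicate columns, after which dynamically subsumed columns are dropped (the predicate is the one shown after Definition 4, re-declared here to keep the sketch self-contained).

```java
import java.util.*;

// Illustrative sketch of Algorithm 2 over a boolean score matrix S,
// where S[t][m] is true iff test t kills mutant m.
public class MutantSetMinimizer {

    public static Set<Integer> minimize(boolean[][] S) {
        int numTests = S.length, numMutants = S[0].length;
        // Remove live mutants (all-false columns) and keep one
        // representative per distinct column.
        Map<String, Integer> representatives = new LinkedHashMap<>();
        for (int m = 0; m < numMutants; m++) {
            boolean[] column = new boolean[numTests];
            boolean live = true;
            for (int t = 0; t < numTests; t++) {
                column[t] = S[t][m];
                if (column[t]) live = false;
            }
            if (!live) representatives.putIfAbsent(Arrays.toString(column), m);
        }
        Set<Integer> minSet = new LinkedHashSet<>(representatives.values());
        // Drop every mutant dynamically subsumed by some other survivor;
        // by transitivity (see Theorem 2), the order does not matter.
        Set<Integer> subsumed = new LinkedHashSet<>();
        for (int y : minSet)
            for (int x : minSet)
                if (x != y && dynamicallySubsumes(S, x, y)) {
                    subsumed.add(y);
                    break;
                }
        minSet.removeAll(subsumed);
        return minSet;
    }

    static boolean dynamicallySubsumes(boolean[][] S, int x, int y) {
        boolean xKilled = false;
        for (boolean[] row : S) {
            if (row[x] && !row[y]) return false;
            if (row[x]) xKilled = true;
        }
        return xKilled;
    }
}
```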
We now apply Algorithm 2 to our running example. There are no live mutants or duplicate columns in the score function, so the variable $\text{minSet}$ in the algorithm starts with all four mutants, $m_1, m_2, m_3,$ and $m_4$. Mutants $m_1$ and $m_2$ are dynamically subsumed by mutant $m_4$. Removing these two mutants from $\text{minSet}$ yields exactly the same minimal set of mutants, namely $\{m_3, m_4\}$, identified in the previous section by considering minimal test sets.
### E. Some Properties of Minimal Mutant Sets
Since a representative from each set of indistinguished mutants is chosen arbitrarily when Algorithm 2 removes duplicate columns from $S$, minimal mutant sets need not contain exactly the same mutants. However, somewhat surprisingly, minimal mutant sets all have the same cardinality.
**Theorem 2: Mutant set cardinality**
Every minimal mutant set has the same cardinality.
**Proof.**
The key observation is that dynamic subsumption is just logical implication, and hence is transitive. This means that if one removes a dynamically subsumed mutant from a set of mutants, that removal does not affect which of the remaining mutants are dynamically subsumed. Hence, dynamically subsumed mutants may be removed in an arbitrary order, which is why the second part of Algorithm 2 is structured the way it is, as opposed to being an explicit loop that iteratively checks for dynamic subsumption. Put another way, a minimal mutant set is simply a mutant set with indistinguished mutants collapsed to single representatives and the remaining dynamically subsumed mutants removed—operations that always produce a result of the same cardinality.
QED
The appeal of Theorem 2 is that it states that, for a given test set $T$, a specific number of mutants (selected from $M$) is both necessary and sufficient to generate all possible minimal test sets (selected from $T$).
**Observation 2. Minimal mutant sets for minimal test sets.**
If $T$ happens to be a minimal test set, then every corresponding minimal set of mutants has exactly $|T|$ elements. The resulting score function is square. Every row has exactly one true value, and every column has exactly one true value.
In particular, if $T$ has exactly one element, so does every minimal $M$. This extreme example illustrates the idea that $M$ simply generates tests with respect to some underlying set of tests $T$. If that test set is already minimal, all $M$ can do is generate exactly that set. If $T$ is not minimal, then $M$ can potentially generate more than one minimal test set.
### III. Assessment
We now use Algorithm 2 to compute minimal sets of mutants with respect to a given test set. This section applies Algorithm 2 to a standard benchmark for testing research, namely the Siemens suite [9], [7], which consists of seven C programs and associated test sets. We have two goals:
1) Examine the relationship between total mutants generated by traditional approaches and minimal mutant sets.
2) Highlight the effect on mutation score of measuring against traditional mutant sets vs. minimal mutant sets.
This section is not a formal experiment. Hence, we do not enumerate research questions, results, threats, etc. Rather, we simply apply our definitions and report facts about test set minimization, mutant set minimization, and reduced mutation.
For each program in the Siemens suite, the Proteum tool [4] was used to generate mutants, and the score function was collected for 512 tests randomly taken from the Siemens suite\(^4\).
A. Minimal Test Sets
Table I presents characteristics about the test sets used in the study. The column labeled Program lists the programs. The column labeled Total Tests shows how many tests are available in the Siemens suite for each program. The column labeled Used Tests shows how many tests were used in this evaluation—512 for each program. The column labeled Distinguished Tests shows how many of the 512 tests are distinguished. Recall that two tests are indistinguished if they kill exactly the same subset of mutants. The table shows that for each of the seven programs, very few tests were indistinguished.
<table>
<thead>
<tr>
<th>Program</th>
<th>Total Tests</th>
<th>Used Tests</th>
<th>Distinguished Tests</th>
<th>Minimal Tests</th>
<th>Union : Intersection</th>
</tr>
</thead>
<tbody>
<tr>
<td>print_tokens</td>
<td>473</td>
<td>512</td>
<td>499</td>
<td>12.4</td>
<td>181 : 3</td>
</tr>
<tr>
<td>print_tokens2</td>
<td>4558</td>
<td>512</td>
<td>479</td>
<td>12.1</td>
<td>160 : 1</td>
</tr>
<tr>
<td>replace</td>
<td>5542</td>
<td>512</td>
<td>510</td>
<td>44.4</td>
<td>218 : 19</td>
</tr>
<tr>
<td>schedule</td>
<td>2650</td>
<td>512</td>
<td>482</td>
<td>14.5</td>
<td>158 : 2</td>
</tr>
<tr>
<td>schedule2</td>
<td>1052</td>
<td>512</td>
<td>479</td>
<td>17.1</td>
<td>131 : 4</td>
</tr>
<tr>
<td>tcas</td>
<td>1608</td>
<td>512</td>
<td>428</td>
<td>41.4</td>
<td>207 : 10</td>
</tr>
<tr>
<td>totinfo</td>
<td>4073</td>
<td>512</td>
<td>452</td>
<td>13.3</td>
<td>134 : 4</td>
</tr>
</tbody>
</table>
The column labeled Minimal Tests shows how many tests are in a minimal test set produced by Algorithm 1 applied to the 512 selected tests. Since there are many possible minimal test sets, this final number is the average of 100 minimal test sets generated by choosing tests to remove at random in the if statement of Algorithm 1. Note that the minimal test sets are relatively small compared to the number of distinguished tests.
The column labeled Union: Intersection gives the number of tests (taken from 512) that appeared in the union and intersection of the 100 randomly selected minimal test sets. It is clear that even though minimal test sets are relatively small, many different tests can be used to construct a minimal test set. Conversely, there are very few tests that appeared in all 100 trials. This suggests that there are few, if any, “necessary” tests in the set of 512.
B. Minimal Mutant Sets
Table II captures relevant facts about the mutants used in the study with respect to the test sets (of size 512) described above. Again, the column labeled Program lists the programs. The column labeled Total Mutants reports the total number of mutants. The column labeled Live Mutants reports live mutants. Specifically, for each entry of the form X:Y, X is the number of mutants live after execution of the complete Siemens test suite, and Y is the number of mutants live after execution of the chosen 512 tests. The column labeled Difference (Ratio) reports the difference between these two values in absolute form and also their ratio. By either measure, relatively few mutants are killed by the full suite but not by the set of 512 tests. In terms of mutation score (not shown in the table), the 512-test sets exceed 99% for all of the programs.
<table>
<thead>
<tr>
<th>Program</th>
<th>Total Mutants</th>
<th>Live Mutants</th>
<th>Difference (Ratio)</th>
<th>Distinguished : Minimal</th>
</tr>
</thead>
<tbody>
<tr>
<td>print_tokens</td>
<td>4306</td>
<td>597 : 625</td>
<td>28 (0.96)</td>
<td>437 : 28</td>
</tr>
<tr>
<td>print_tokens2</td>
<td>4746</td>
<td>692 : 704</td>
<td>12 (0.98)</td>
<td>439 : 30</td>
</tr>
<tr>
<td>replace</td>
<td>11101</td>
<td>2195 : 2318</td>
<td>77 (0.95)</td>
<td>2309 : 58</td>
</tr>
<tr>
<td>schedule</td>
<td>2109</td>
<td>267 : 271</td>
<td>4 (0.99)</td>
<td>520 : 42</td>
</tr>
<tr>
<td>schedule2</td>
<td>2627</td>
<td>488 : 495</td>
<td>7 (0.99)</td>
<td>461 : 46</td>
</tr>
<tr>
<td>tcas</td>
<td>2384</td>
<td>418 : 427</td>
<td>9 (0.98)</td>
<td>596 : 61</td>
</tr>
<tr>
<td>totinfo</td>
<td>6698</td>
<td>877 : 877</td>
<td>0 (1.00)</td>
<td>835 : 19</td>
</tr>
</tbody>
</table>
The first entry in the column labeled Distinguished : Minimal reports the number of distinguished mutants. Recall that two mutants are indistinguished if they are killed by exactly the same subset of tests. The number of mutants that are distinguished is much smaller than the total number of mutants. This suggests that many mutants are not only redundant, they also exhibit identical behavior with respect to the test set. Further, the fraction of mutants that are distinguished (17%) is much smaller than the fraction of tests that are distinguished (93%). In terms of distinguished entries, the score function exhibits different behavior when viewed from the row perspective than when viewed from the column perspective.
The second entry in the column labeled Distinguished : Minimal reports the number of minimal mutants in a minimal mutant set\(^5\).
Not only is the number of minimal mutants much smaller than the total number of mutants (on average, only 1.2% of mutants are in a minimal set), it is also much smaller than the total number of distinguished mutants (on average, only 6.6% of distinguished mutants are in a minimal set). In other words, the dynamic subsumption relation eliminates a large fraction of the distinguished mutants.
For example, in the case of totinfo (last row in the table), Proteum generated 6698 mutants, \((6698 - 877) = 5821\) of which were killed by both the full Siemens test suite and
\(^5\)As Theorem 2 showed, there may be many minimal mutant sets for a given set \(T\), but all are of the same size. Hence, there is no reason to run multiple trials and average the results, as was the case for minimal test sets.
also the set of 512 tests. Of these 6698 mutants, 835 were distinguished. Of the 5821 killed mutants, only 19 mutants, or 0.3%, are needed in a minimal mutant set. Of the 834 distinguished killed mutants\(^6\), only 19 mutants, or 2.3%, are needed in a minimal mutant set. By any measure, the number of generated mutants far exceeds the number necessary.
The two tables presented so far give the dimensions of the score function for each program. For example, `print_tokens` has a score function with 512 rows, of which 499 are distinguished, and 4336 columns, of which 437 are distinguished.
C. Reduced and Selective Mutation
We turn next to analyzing reduced mutation, the idea that using fewer mutants can be nearly as effective as using the complete set of mutants. We consider five reduced mutation approaches, one random and four selective. The notion of selective mutation was first suggested by Mathur [20], developed by Offutt et al. [23], and studied extensively thereafter for both FORTRAN [22], [21] and C [1].
We use the Proteum mutation tool suite. We use generic labels for the approaches, and provide the Proteum names in parentheses.
1) STMT: Statement Deletion (Proteum SSDL)
2) ROR: Relational Operator Replacement (Proteum ORRN)
3) CON: Replace Scalars with Constants (Proteum CCSR)
4) 5RND: 5% random selection of all mutants
5) SELECT: An approximation of selective mutation (Proteum: OAAN+OLLN+ORRN+OLNG)
STMT has been studied as a stand-alone, cost-effective approach to mutation [6], [3], [26]. While ROR and CON have not been studied specifically as proposals for stand-alone operators, they are plausible candidates. A random percentage of all mutants has been widely used to reduce the number of mutants that need to be considered [19], [24]. We chose a 5% random selection because the number of mutants selected approximates the number created by the SELECT strategy.
The SELECT strategy approximates the original selective mutation definition from the Mothra system [22]. The Mothra approach to selective mutation had five operators:
1) ABS: Absolute Value
2) AOR: Arithmetic Operator Replacement (Proteum: OAAN)
3) LCR: Logical Connector Replacement (AND and OR) (Proteum: OLLN)
4) ROR: Relational Operator Replacement (Proteum: ORRN, but this does not include using the constants true and false)
5) UOI: Unary Operator Insertion (Proteum, logic only: OLNG)
Of these five operators, Proteum has a corresponding match for two and a partial match for two more. These matches are indicated in parentheses in the list above.
\(^6\)Of the 835 distinguished mutants, 834 are killed by the test set, and one is live.
Table III shows the results of analyzing these five approaches to reduced mutation in the context of the chosen 512 test cases. The rows in the table are again the programs from the Siemens suite. Each column of data represents one of the five approaches to reduced mutation. Table entries are designed to show the difference between traditional mutation scores and a mutation score measured against the minimal mutant set.
Each entry in the table is of the form X:Y. X is the mutation score, as a percentage, obtained by a test set adequate to the corresponding reduction strategy, against all mutants that are killed by the chosen 512 test cases. Y is the mutation score, again as a percentage, obtained by the same test set against a minimal set of mutants, again in the context of the 512 test cases.
The noteworthy aspect of this table is that although the traditional mutation scores generally seem excellent, the mutation scores against the minimal mutant set are not nearly as good, ranging from a low of 27% to a high of 82%. One lesson from this evaluation, consistent with other recent studies [11], is that a mutation score measured over a large number of redundant mutants is inflated—possibly to the point of being meaningless.
Figure 1 shows the data from the STMT column of Table III in chart form. For each program, the left bar shows the mutation score with respect to all mutants, and the right bar shows the mutation score with respect to a minimal set of mutants. The basic observation from the chart is that the redundancy in the full set of mutants makes it difficult to interpret mutation scores computed using the full set of mutants.
To take a specific case, consider `tcas`. The STMT approach appears to achieve a respectable score of 88% mutation coverage. However, in terms of a minimal set of mutants, statement deletion mutation only kills about one in four.
Next, we present some data about tests in the minimal test sets. Table IV continues the analysis of reduced mutation. This time the table shows how many of the mutants generated by each technique are killed by the 512 test cases, along with the size of a minimal test set that kills them. Each value should be compared to the reference value in the column labeled Minimal, which (again) shows the number of mutants in the minimal set, along with the corresponding test set size. The average number of tests required for the minimal mutant set is often larger than the number of tests required by a reduced approach. The reason is that the test sets for the reduced approaches are missing key tests. Specifically, they are missing tests that kill mutants in a minimal mutant set. Put another way, the reduced mutation approaches omit key mutants: mutants that could lead to very good tests.
Table IV
<table>
<thead>
<tr>
<th>Program</th>
<th>STMT</th>
<th>ROR</th>
<th>CON</th>
<th>5RND</th>
<th>SELECT</th>
<th>Minimal</th>
</tr>
</thead>
<tbody>
<tr>
<td>print_tokens</td>
<td>196:11</td>
<td>98: 9</td>
<td>308:10</td>
<td>190:11</td>
<td>138:10</td>
<td>28:12.4</td>
</tr>
<tr>
<td>print_tokens2</td>
<td>203: 5</td>
<td>192: 8</td>
<td>445: 8</td>
<td>196: 9</td>
<td>244: 9</td>
<td>30:12.1</td>
</tr>
<tr>
<td>replace</td>
<td>219:23</td>
<td>264:27</td>
<td>1053:44</td>
<td>443:39</td>
<td>499:35</td>
<td>58:44.4</td>
</tr>
<tr>
<td>tcas</td>
<td>42:12</td>
<td>45:14</td>
<td>66:14</td>
<td>99:24</td>
<td>113:18</td>
<td>61:41.4</td>
</tr>
</tbody>
</table>
For example, consider tcas again. The STMT approach generated 42 mutants that were killed by the 512 test cases, which is in the neighborhood of the 61 mutants in the minimal mutant set. Unfortunately, the choice of these 42 mutants is far from optimal. A test set that kills these 42 mutants has only 12 tests, compared to the average of 41.4 tests needed to kill the minimal set of mutants. In other words, STMT is generating about 1/3 the number of required tests, a fact that was reflected in Table III in the poor STMT mutation score of 27% against the minimal mutant set.
What is striking about Table IV is that in many cases, significantly more mutants are generated than in the minimal mutant set, but, in terms of achieving the best coverage, they are not the optimal mutants, and significantly fewer tests than needed for full coverage are generated. This table highlights a research gap: it is clear that a small number of mutants can force the generation of a very high quality test set, but it is not known how to choose these mutants. The best techniques in practice today, selective mutation and SDL-mutation, are a very long way from generating mutant sets that both include the desirable mutants and exclude unnecessary (and, of course, equivalent) mutants. A complete solution is, of course, theoretically impossible. But even modest partial solutions have room to improve matters significantly. A key point is that minimal mutant sets are not a replacement for strategies such as reduced mutation—it is still necessary to execute each mutant to create the set of minimal mutants. Rather, minimal mutant sets give a bound against which to evaluate techniques such as reduced mutation.
IV. RELATED WORK
The subsumption relation has been studied in a variety of contexts for many years. Chusho observed that measuring branch coverage over all branches in a program led to an overestimation of quality, and defined the notion of essential branches as a way of removing redundant branches from coverage measures [2]. In this paper, dynamically subsumed mutants play exactly the same role as non-essential branches do in the Chusho analysis. The difference is that this paper is “black-box,” whereas the Chusho paper considers the actual structure of the code. Hence, the Chusho results hold for all test sets; our results are specific to a particular test set T.
Harman and Jia defined the notion of subsuming Higher Order Mutants (HOMs) [10]. The idea was that a single HOM could stand in for several mutants. Langdon et al. applied subsuming HOMs to relational operators [17]. Lau and Yu identified subsumption relations between faults in Disjunctive Normal Form (DNF) predicates and presented this subsumption relation in a fault hierarchy [18]. Kaminski et al. [12] extended this work by defining special HOMs, which, though relatively few in number, still subsumed all of the Lau and Yu hierarchy. In terms of the relationship to this work, subsuming HOMs are defined by internal analysis of the artifact under consideration; in contrast, we observe dynamic subsumption with respect to a specific test set.
Kaminski et al. [15] observed that four of the seven mutants generated by Mothra's Relational Operator Replacement (ROR) were always subsumed by other mutants. The subtlety here is that which ROR mutants are subsumed depends on which operator appears in the original code.
Just et al. raised exactly the point that raw mutation scores led to overly optimistic evaluations of quality and defined subsuming mutants in the context of the Conditional Operator Replacement (COR) operator [11]. Again, in terms of the relationship to this work, eliminating mutants in these papers is done at the operator level before test cases are generated. Our approach to subsumption is based on the artifact's behavior after a specific test set is chosen.
Given that test set minimization is NP-complete, various researchers have developed test set minimization heuristics. Harrold et al. gave an authoritative treatment [8]. Studies have investigated whether minimizing test sets with respect to various coverage criteria has an effect on the fault detection of the remaining tests. A positive result [28] reported on a case study in which minimizing test sets with respect to dataflow "all-uses" coverage did not significantly reduce fault detection ability. A subsequent study [25] on the Siemens suite came to a contradictory conclusion: minimizing test sets with respect to edge (or branch) coverage severely compromised fault detection. The relevance of test set minimization to mutant minimization is that minimal mutant sets are defined in terms of minimal test sets; hence fault-detection bias introduced by minimal test sets potentially affects minimal mutant sets as well. Further research is needed to evaluate this issue.
V. DISCUSSION AND CONCLUSION
This paper has presented a way to identify precisely how many mutants are needed in the context of a given test set. The size of this set is much smaller than that delivered by current best-practice approaches to mutation. We conclude that there is considerable scope for new approaches to mutation analysis that consider only relatively few mutants while at the same time thoroughly testing the underlying artifact.
Mutation score is widely used in the literature to evaluate the quality of an approach to generating test cases. As noted in Section IV, this approach has caused some disquiet in the research community due to the presence of redundant mutants. The results of this paper suggest a different methodology for evaluating testing approaches. Rather than evaluating a given approach against all mutants generated by some set of operators, we propose that, in addition, the approach should be evaluated against a minimal set of mutants. Any approach as strong as the chosen mutation operators will achieve 100% in either case. Weaker approaches can still be compared against criteria such as random selection, but using a minimal set of mutants for comparison removes the problem of redundant mutants from the evaluation.
The minimization approach developed in this paper focused on mutation analysis specifically to address the problem of redundant mutants. However, since the approach uses only the black-box score function, the model can also be applied to test requirements from any other coverage criterion, e.g., statement coverage, branch coverage, dataflow coverage, and so on.
The eventual goal of this line of research is to make mutation testing cost-effective enough to use in practice. The dynamic subsumption approach to minimizing the number of mutants demonstrates that it is, indeed, possible to reduce the number of mutants needed to a very small number. We hope the theoretical structure presented in this paper will lead to practical applications that dramatically reduce the number of mutants generated by actual mutation systems.
ACKNOWLEDGMENT
Prof. Marcio Delamaro's research is supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo), process number 2012/16950-5.
REFERENCES
[1] Ellen Francine Barbosa, Jose Carlos Maldonado, and Auri Marcelo Rizzo Vincenzi. Toward the determination of sufficient mutant operators for C. Software Testing, Verification, and Reliability, 11(2):113–136, 2001.
[2] Takeshi Chusho. Test data selection and quality estimation based on the concept of essential branches for path testing. IEEE Transactions on Software Engineering, 13(5):509–517, May 1987.
[3] Marcio E. Delamaro, Lin Deng, Vinicius H. S. Durelli, Nan Li, and Jeff Offutt. Experimental evaluation of SDL and one-op mutation for C. In 7th IEEE International Conference on Software Testing, Verification and Validation (ICST 2014), Cleveland, OH, March 2014.
[4] Marcio E. Delamaro and Jose C. Maldonado. Proteum - a tool for the assessment of test adequacy for C programs. In Proceedings of the Conference on Performability in Computing Systems (PCS'96), pages 79–95, New Brunswick, NJ, July 1996.
[5] Richard A. DeMillo, Richard J. Lipton, and Frederick G. Sayward. Hints on test data selection: Help for the practicing programmer. IEEE Computer, 11(4):34–41, April 1978.
[6] Lin Deng, Jeff Offutt, and Nan Li. Empirical evaluation of the statement deletion mutation operator. In 6th IEEE International Conference on Software Testing, Verification and Validation (ICST 2013), Luxembourg, March 2013.
[7] Hyunsook Do, Sebastian Elbaum, and Gregg Rothermel. Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact. Empirical Software Engineering, 10(4):405–435, October 2005.
[8] Mary Jean Harrold, Rajiv Gupta, and Mary Lou Soffa. A methodology for controlling the size of a test suite. ACM Transactions on Software Engineering and Methodology, 2(3):270–285, July 1993.
[9] Monica Hutchins, Herb Foster, Tarak Goradia, and Thomas Ostrand. Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria. In Proceedings of the Sixteenth International Conference on Software Engineering, pages 191–200, Sorrento, Italy, May 1994.
[10] Yue Jia and Mark Harman. Constructing subtle faults using higher order mutation testing. In 2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation, pages 249–258, Beijing, September 2008.
[11] Rene Just, Gregory M. Kapfhammer, and Franz Schweiggert. Do redundant mutants affect the effectiveness and efficiency of mutation analysis? In Eighth Workshop on Mutation Analysis (IEEE Mutation 2012), Montreal, Canada, April 2012.
Programmer-Controlled Application-Level Multicast
Prasun Dewan
Department of Computer Science
University of North Carolina
Chapel Hill, NC USA
dewan@cs.unc.edu
Abstract—Group communication abstractions provide application-level multicasting to communicate information among distributed processes. A variety of such abstractions have been provided to implement synchronous collaborative applications but they do not allow control over the multicast of information to the selected group of processes. We have developed a new abstraction that overcomes this limitation. It defines a two-level grouping of distributed processes, with one level defining the users interacting with a specific collaborative application, and the other defining the set of collaborative applications a set of users is sharing simultaneously to perform some collaborative task. It allows information to be sent directly to the receiving processes or through a centralized relayer. In either case, programmer-choosable and replaceable send and receive filters provide consistency guarantees. The abstraction provides message passing rather than remote procedure calls, and supports asynchronous sending and receiving of messages. It is designed to support both centralized and replicated architectures. The abstraction has been implemented on top of the Java Remote Method Invocation layer and has been used to implement a broad range of collaboration functions.
Keywords—Group communication; collaboration toolkits; multicast; collaboration awareness; consistency; sessions
I. INTRODUCTION
Distributed applications are tedious and difficult to implement as programmers must learn and use either (a) low-level connection details such as establishment, reading and writing of stream abstractions, or (b) complex concepts such as proxies, remote interfaces, remote exceptions, and thread semantics of remote method invocation. In either case, they must be aware of the end points of the parties with which they communicate, and send information to each of them individually. Domain-specific abstractions can ameliorate this situation. In this paper, we focus on the domain of distributed synchronous collaboration.
A variety of abstractions have been developed in the past for this domain. As with other kinds of abstractions, they must balance automation with flexibility – in general, the more tasks abstractions perform for programmers, the less flexibility they offer. Abstractions in this domain have focused on both goals, with abstractions supporting collaboration-awareness and parameterization [1] designed for automation, and those offering group communication abstractions designed for flexibility. We focus on group communication abstractions.
Our specific reason for addressing group communication is to build an experimental research and teaching test-bed for understanding and improving on the state of the art in collaboration concepts. There has been some early pioneering work in such abstractions, but it has not evolved significantly for about two decades and, more important from our point of view, was not designed to meet our goal. Previous abstractions support direct communication among processes without enabling any consistency guarantees such as causality, jitter management, or replica consistency. We have developed a new group abstraction to address these limitations. This paper describes its design, illustrates its uses, and discusses its implementation and our experience with the implementation.
Section 2 discusses the related work on which our abstraction is based. Section 3 presents the design or API of the abstraction, and shows how it can be used in a wide variety of contexts. Section 4 overviews its implementation and use. Section 5 presents conclusions and directions for future research.
II. RELATED WORK
A distributed collaborative application must implement a whole range of functions [2] including: session management, coupling, awareness, access control, and concurrency control. Implementation of each of these functions involves communication among distributed processes, which, as mentioned in the introduction, is non-trivial. Therefore, three forms of abstractions have been offered to ameliorate this problem, which fall at different places in the automation-flexibility spectrum.
Collaboration transparency: These abstractions automatically convert collaboration-transparent single-user programs to collaborative versions [1].
Parameterized: These define a parameterized design space for one or more collaboration functions, and allow application programs to control sharing policies by specifying values for these parameters [1].
Group communication: These allow processes to (a) join and leave collaborative sessions, and (b) “multicast” messages to groups of processes in the session without worrying about or even knowing about the existence of individual members in these groups. This is application-level rather than network-level multicast, as in the underlying network, a separate message is sent to each destination.
Recall that our goal was to create a test-bed that can be used to implement novel collaboration functions and provide students with an implementation-oriented understanding of
existing and novel collaboration functions. Thus, of these three kinds of abstractions, the last seems to be the most appropriate; so let us focus on these in some depth.
The first such abstraction was implemented in the mid-eighties as part of the influential Xerox Colab collaboration environment [15]. For each user in a collaborative session, it created a separate replica of a program. The program was implemented as an extension of an interpretive object-oriented programming language that allowed certain methods to be declared as broadcast methods. Invoking a broadcast method on a replica had the side effect of invoking the same method on all other replicas of the application. The actual task of sending messages to remote processes was handled by this group abstraction, making the application program more or less collaboration-unaware.
Two successor systems, GroupKit [3] and Suite [4], show that it is useful to make the abstraction more flexible. These were developed contemporaneously and independently in the early to mid-nineties, before the advent of Java, and were based on TCL and C, respectively.
GroupKit, like Colab, was designed for the replicated architecture, and supported group invocation of procedures. However, it offered more flexibility in three important ways. First, it allowed a replica to know when a replica of some other user joined or left the collaborative session, supplying the identity of the user, which could be used to provide customized session awareness to users. Second, it allowed a group call to be made not only on all replicas, but also all replicas except the one making the call. Third, the decision about the set of replicas on which a call was invoked was made at runtime by the caller rather than at program writing time by the callee.
Suite was designed for an architecture in which the model or semantics code was centralized and the user-interface code, called dialogue manager, was system-provided and replicated. This architecture allowed Suite to offer collaboration transparency, parameterized collaboration functions, and group communication in a single system. Here we focus on group communication.
The model could make a call in all remote dialogue managers connected to it. It could also make the call in a programmer-chosen group of dialogue managers. When a dialogue manager joined a session, the model was informed of its identity, which could be used to define arbitrary groups of dialogue managers. Finally, a call made by the model in a callback invoked by a dialogue manager could be invoked on two additional predefined groups: all dialogue managers except the one that made the callback, and the dialogue manager that made the callback. The groups defined by Suite are difficult to compare to those defined in GroupKit and Colab as they are designed for a centralized architecture. As we shall see later, it is possible to create a single system supporting both replicated and centralized architectures in which the groups defined by all three systems are included.
III. REQUIREMENTS
Together, the three multicasting primitives surveyed above define a design space of group abstractions whose four main dimensions are the architecture, the groups of processes in which a call is made, whether the group is decided at program-writing time or at runtime, and the awareness a process has about other processes in the session. Based on these choices, GroupKit is more flexible than Colab as it supports caller control over multicasting, an additional multicast group, and awareness of users in the session. In comparison to Suite, it does not impose a centralized architecture. On the other hand, it imposes a replicated architecture as it assumes each communicating process implements the same set of methods so that a method can be called in all of these sites. Thus, none of these systems support both the centralized and replicated architectures. Both kinds of architecture are useful, as explained in [5]. There has been work in supporting multiple architectures [5] in one system and even adapting the architecture automatically [6], but this work has been done in the context of collaboration-transparency, which, as mentioned above, gives programmers no flexibility.
Another important flexibility limitation of existing group communication systems is that they allow no control over the multicast of a message to a group. In particular, they do not allow programmers to (a) determine if a message is sent to a remote process directly or through one or more intermediary processes, and (b) re-order or change messages at the sending and/or receiving sites to provide consistency guarantees.
The first property may not seem like a practical limitation. In synchronous collaboration, one can expect direct communication to offer better (remote) response times, as the number of processes through which a message passes is minimized. For this reason, to the best of our knowledge, all three systems offer direct communication. However, there are at least three reasons for routing messages through intermediaries.
Response times: Recently, Junuzovic and Dewan [7] have shown that in certain situations, multicasting a message through intermediaries can, in fact, improve remote response times, especially in today’s world of wireless communication and mobile computing. To illustrate, imagine a user on a mobile computer on a congested wireless connection making a presentation to a large number of users. In this scenario, response times will be smaller if the mobile computer sends a single message to a more powerful computer on a faster network and the latter relays it to all of the users viewing the presentation.
Firewalls: Often user processes are behind firewalls, which prevent them from communicating with each other directly.
Lock-less Consistency: Certain lock-less consistency algorithms, such as atomic broadcast and operation transformation algorithms (in particular, the Jupiter operation-transformation algorithm [19] used in Google Docs), require messages to be relayed through a server.
This does not mean that communication should always be relayed. When none of these conditions apply, communication should indeed be direct to support faster response times. Thus, both forms of communication should be supported. Junuzovic and Dewan [6] have shown it is possible for the system to automatically choose the routing of messages based on
response times. However, programmer-control is still necessary to determine if application-specific consistency requirements require relayed communication.
Application-specific consistency requirements also motivate programmer-control over reordering and modification of messages. In direct communication, messages may need to be reordered to ensure causality [8]. In both direct and relayed communication, messages may need to be changed to support operation transformation [9].
Though the importance of these consistency requirements has been known since the first paper on operation transformation in the mid-eighties [10], designers of group toolkits have ignored them. We conjecture that the reason is that these requirements have been motivated mainly for collaborative text editors. To the best of our knowledge, no general-purpose toolkit has targeted such editors, and no other application has implemented (lock-less) consistency abstractions. Collaborative editors have been implemented mostly by extending existing editors such as Emacs [11] or Word [9] in an editor-specific way without trying to use general purpose language-based abstractions. One exception is the Google Docs editing tool, whose origin is the Writely editor, which was apparently built ground-up to support collaboration. However, to the best of our knowledge, this was a standalone project and thus did not use or create general-purpose collaboration abstractions.
Given this history and analysis, is it worth addressing (lock-less) consistency in general-purpose abstractions? The answer, we believe, is yes, for four reasons. First, an abstraction cannot be called general purpose if it precludes even a single important class of applications. Second, text editors are provided as part of a whole suite of collaborative applications, and it is important for these applications to reuse as much code as possible. For example, it is important for Google Talk and Google Docs to share code for multicasting messages so that changes to optimize this code are made only once. Third, from a teaching or research perspective, it is not so important to have the practical goal of extending existing single-user text-editors - creating such editors ground-up using a collaboration toolkit is a viable alternative. Finally, a text editor is not the only popular application requiring consistency. Arguably, IM, which is part of most collaborative sessions, could also benefit from causality and/or operation transformation, because misinterpretation of concurrent messages as serial can cause problems even in two-person IM. Enabling support for consistency in a general-purpose abstraction will allow a greater variety of applications to offer it. Consistency management is still an active area of research [9], especially as operation transformation algorithms do not come with proofs in which the community believes. Thus, it is important to allow these algorithms to be transparently substituted with possibly programmer-defined ones.
Support for programmer-defined consistency algorithms also implies that there should be a way to test these algorithms, which in turn implies a way to transparently delay delivery to the different sites with which a process communicates. Current group abstractions do not provide such control.
As mentioned above, all group communication abstractions must allow processes to join and leave sessions so that multicast groups can be defined based on session membership. However, current systems consider all sessions to be equally related, that is, they define a flat hierarchy of sessions. As a result, they do not directly capture modern collaborative environments in which multiple applications such as IM, text-editing, and whiteboard tools are used together by a group of users in a single logical collaborative session. Such environments require a more complex, multi-level session membership and notification semantics.
These, then, are the reasons motivating our project. Our goal is to offer the automation of previous multicast primitives while increasing their flexibility. Table 1 evaluates the current systems against the automation and flexibility requirements identified above, and shows that none of the existing abstractions meet all of these requirements. It is our goal to develop an abstraction that meets all of them.
<table>
<thead>
<tr>
<th colspan="4">Table 1 System vs. Multicast Requirements</th>
</tr>
<tr>
<th>Requirement</th>
<th>Colab</th>
<th>GroupKit</th>
<th>Suite</th>
</tr>
</thead>
<tbody>
<tr>
<td>Direct and Relayed Communication</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Caller Control of Message Destination</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Centralized Architecture</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Replicated Architecture</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Transparent Message Delay, Reordering and Change</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Multi-Application Sessions</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>
IV. DESIGN AND RUNNING EXAMPLE
We have designed and implemented a Java-based system, called GroupMessages, to meet this goal. In this section, we describe its design and applications using a running example, which we develop incrementally, starting with a single-user program.
A. Model-Based Single-User Program
The single-user program provides a console-based user-interface to echo input lines. It also provides commands to view the history of entered input lines and to quit the interactive session, as shown in Figure 1.
This version does not use any of the multicasting primitives. However, for it to be extended to support multi-user interaction, it has to be decomposed in a fashion that allows its
behavior to be extended. Ideally, to offer extensibility, a program should be implemented using appropriate design patterns. One design pattern that applies to all interactive programs is model-view-controller [12], which separates semantics, input, and output of interactive applications into model, view, and controller objects, respectively. Sometimes input and output are so coupled that it is sufficient to implement a coarser-grained version of this pattern in which the view and controller are combined into a single object, which we call an interactor.
```
Please enter an input line or quit or history
The woods are lovely, dark and deep
The woods are lovely, dark and deep(Echo)
Please enter an input line or quit or history
```
**Figure 1 Single-User Echoer**
This is the pattern used in this application (Figure 2). A History model object maintains a list of input lines, and allows other objects to add elements in the list and read the entire list. In response to the add operation, it notifies its observers of this event by calling the elementAdded() operation in them, which in this example, consists of an EchoerInteractor object. This object reads input lines, asks the model to add them, and reacts to a notification from the model by echoing each added line on the screen. As we see below, this architecture will allow us to reuse the model and interactor types without adding any collaboration awareness to them, thereby providing vindication for keeping the semantics and user-interface in separate modules. The user-interface of the application is kept simple so that we can focus on the collaborative aspects of the extended application.
**Figure 2 Single-User Architecture** (diagram: the EchoerInteractor registers with the History model via observerAdd and invokes add on it; the model notifies the interactor of additions via elementAdded)
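The paper does not show the demo code, so the following Java sketch is our reconstruction of the described design. The names History, EchoerInteractor, add, observerAdd, and elementAdded come from the text and Figure 2; everything else (the observer interface, the console loop) is an assumption.

```java
import java.util.*;

interface HistoryObserver {
    void elementAdded(String element);
}

// The model: holds input lines and notifies observers of additions.
class History {
    private final List<String> elements = new ArrayList<>();
    private final List<HistoryObserver> observers = new ArrayList<>();

    public void observerAdd(HistoryObserver observer) { observers.add(observer); }

    public void add(String element) {
        elements.add(element);
        for (HistoryObserver o : observers)
            o.elementAdded(element);
    }

    public List<String> read() { return Collections.unmodifiableList(elements); }
}

// The interactor: reads input lines, updates the model, and echoes
// additions reported back by the model.
class EchoerInteractor implements HistoryObserver {
    private final History history;

    EchoerInteractor(History history) {
        this.history = history;
        history.observerAdd(this);
    }

    public void elementAdded(String element) {
        System.out.println(element + "(Echo)");
    }

    public void run() {
        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.println("Please enter an input line or quit or history");
            String line = in.nextLine();
            if (line.equals("quit")) return;
            if (line.equals("history")) history.read().forEach(System.out::println);
            else history.add(line);
        }
    }

    public static void main(String[] args) {
        new EchoerInteractor(new History()).run();
    }
}
```

Because the interactor reacts only to model notifications, the same model and interactor types can later be driven by remote additions, which is exactly the extensibility argued for above.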
### B. Basic Collaborative User-Interface
The collaborative extension of this application is essentially a console-based group IM application, as shown in Figure 3.
```
Please enter an input line or quit or history
The woods are lovely dark and deep
The woods are lovely dark and deep(Echo)
Please enter an input line or quit or history
But I have promises to keep(Bob)
And miles to go before I sleep(Cathy)
```
**Figure 3 Echoer to Group IM**
Here we see three users, Alice, Bob, and Cathy, using the application. The IM application is a strict extension of the single-user echoer that allows users to see input lines not only entered by them but also their collaborators, and adds to the history both local and remote input. Each user is aware of the identity of the user who entered an input line. A consequence of the awareness is that a user who inputs a line sees a different view of it than the others. This feature has been added to illustrate some of the complications that arise when using multicasting primitives in replicated and centralized architectures.
Thus, we see that this version of our example offers three of the collaboration functions mentioned earlier, session management, coupling and awareness. These functions, of course, are implemented using GroupMessages. Let us first consider session management.
### C. Two-Level Session Management
Like GroupKit and Suite, GroupMessages allows processes to explicitly create, join, and leave sessions, and be notified when these operations are invoked by processes of other users. As in previous systems, a central process is used to support sessions, which we call the **session server**. Application code interacts with the session server and local multicasting code through a local object called the **communicator**. Figure 4 shows the basic architecture visible to the programmers. They know that a session server exists as they must provide its location. However, all functions of our abstraction are provided through the communicator. The communicator itself is partitioned into several internal components, which the message producers and consumers in the application can ignore. However, as we see later, two of these components are visible to consistency management modules in the application.
**Figure 4 Architecture Visible to the Programmer**
Our abstraction accommodates multi-application sessions. A session is not simply a set of users. Instead, it consists of a set of application sessions and users, and each application session consists of a set of users (Figure 5).
**Figure 5 Multi-Level Session Structure** (diagram: an overall Session containing an IM application session with Alice, Bob, and Cathy, and an Editor application session with Bob and Cathy)
Thus, like other systems, our abstraction allows users to be (dynamically) added to sessions. In addition, it allows applications to be added to sessions, and users to be added to applications. Adding an application to a session creates an **application session**, which corresponds to a session in other systems. Adding users to an application session allows them to use the application to collaborate with other users in the application session. Adding users to a session allows them to be informed of other users and applications in the (overall) session without participating in any joint activity. They can react to this information by joining one or more application sessions in the (overall) session. In Figure 5, Alice, Bob and Cathy are all in the IM application session, while only Bob and Cathy are in the Editor application session. All three users are in the overall session, and thus have the option of joining any application session in it. Currently, it is not possible for users to join an application session without also joining the overall session.
A single static call is provided for creating communicators and creating and joining sessions and applications:
```java
public static Communicator getCommunicator(
        String aServerHost, String aSessionName,
        String aClientName, String anApplicationName,
        String aRoutingKind);
```
Here `aServerHost` is the name of the host on which the central session server resides, `aSessionName` is the name of the (overall) session, `aClientName` is the name of the client making this call, `anApplicationName` is the name of an application (session), and `aRoutingKind` denotes whether multicasts through the communicator will be routed through a relayer at the session server. If `anApplicationName` is null, then the client will be added to the overall session. Otherwise, it will be added to both the application session and the overall session. Of course, if it is already part of a session or application, then it is not added again. If the named session or application does not exist, then it will be created. By combining the creation of sessions and applications with addition of members to them, we allow all replicas in a replicated application to execute the same code and be started in any order. Otherwise, one of them must have special code to create a session and this code must be started before others. The client name is an identifier that distinguishes the caller from other members of a session and application. It can be any string chosen by the programmer. As we see below, our abstraction supports communication with specific clients. This name is used in such communication.
For each application and session combination, a separate communicator is created. Our design expects each user process to be associated with either the overall session or a single application in that session. Thus, we expect each process to create a single communicator, though the design and implementation support multiple communicators if a single process wishes to play the role of multiple logical applications.
Like GroupKit, our system allows processes to receive notifications about successful session creation and joins, including the ones they initiated, by implementing listener methods with the following signature:
```java
public void clientJoined (String aClientName,
String anApplicationName, String aSessionName,
boolean isNewSession, boolean isNewApplication,
Collection<String> previousClients);
```
It is the dual of the `getCommunicator()` call described above, providing the listener with information about a successful join. The two Boolean flags indicate whether the application and session are newly created. `previousClients` is the collection of all previous clients in the session. When a client joins the session, this method is invoked once for each existing client and application combination. It is also executed once for each subsequent join. To allow clients to register listeners before they join sessions, a communicator does not automatically join the specified session when it is created. It does so when the client makes a special non-blocking `join()` call. This call uses the parameters provided at communicator-creation time to send an appropriate message to the session server. A communicator also provides a call to leave application/overall sessions, and a notification method to receive information about leaves. Let us use the running example to illustrate the nature of these session functions.
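To make the register-before-join ordering concrete, the sketch below wires a listener to a freshly created communicator before invoking `join()`. The `addSessionListener()` method and the single-method SessionListener interface are our assumptions, as the paper does not show the registration call itself; only `getCommunicator()` and the non-blocking `join()` are named in the text.

```java
// A minimal sketch, assuming a hypothetical addSessionListener()
// registration method and SessionListener interface.
Communicator communicator = Communicator.getCommunicator(
    SERVER_HOST, SESSION_NAME, ALICE, null, Communicator.RELAYED);
communicator.addSessionListener(new SessionListener() {
    public void clientJoined(String aClientName, String anApplicationName,
            String aSessionName, boolean isNewSession,
            boolean isNewApplication, Collection<String> previousClients) {
        System.out.println(aClientName + " joined " + anApplicationName);
    }
});
communicator.join(); // returns immediately; clientJoined() fires on success
```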
Client Alice creates the following communicator to initiate an application-less joining of session `SESSION_NAME`:
```java
Communicator communicator = getCommunicator(SERVER_HOST,
SESSION_NAME, ALICE, null,
Communicator.RELAYED);
```
It then registers a session listener that implements the following method for join notifications:
```java
public void clientJoined (String aClientName,
    String anApplicationName, String aSessionName,
    boolean isNewSession, boolean isNewApplication,
    Collection<String> previousClients) {
  // fork a process to join the IM application session when it appears
  if (anApplicationName != null &&
      IM.equals(anApplicationName))
    joinSession(anApplicationName, aSessionName);
}
```
Here, `joinSession()` is an internal application method that forks a new process to join the IM session. Finally, the client invokes the `join()` call. Assuming Alice is the first member of the session, Alice’s `clientJoined()` method will now be invoked, informing it of the successful execution of the non-blocking `join()` call. The method does nothing in this case, as the application name is null.
Later, client Bob creates a communicator for the IM application:
```java
getCommunicator(SERVER_HOST, SESSION_NAME,
BOB, IM, Communicator.RELAYED);
```
Next it registers a session listener that defines the following join notification method:
```java
public void clientJoined (String aClientName,
String anApplicationName, String aSessionName,
boolean aNewSession, boolean aNewApplication,
Collection<String> previousClients) {
displayMessage(aClientName, anApplicationName);
}
```
When the join call is successfully executed at the session server, Alice’s `clientJoined()` method is invoked for Bob; and Bob’s `clientJoined()` method is invoked twice, first for the existing member, Alice, and then for the new member, Bob. Bob’s method simply prints a message, while Alice’s method forks the process that joins the application session.
If the actions were reversed and Bob joined the IM session before Alice joined the application-less session, the behavior would be more or less the same. Alice’s `clientJoined()` method would still be called for the existing application session. The only difference is that the displayMessage() method in Bob would be called first for Bob and then for Alice. Thus, the semantics are resilient to race conditions arising from uncoordinated join() calls being made by different clients.
Session management functions provide the basis for defining the groups used in multicasting calls. Let us consider these calls next.
### D. Message-Based Multicasting
Concurrent systems are often classified as message-based or procedure-based depending on whether they communicate information by sending messages or by invoking procedures. As Lauer and Needham [13] point out, these systems are equivalent in expressive power, though one kind might be easier to program with in certain situations.
All three group communication abstractions surveyed here are procedure-based in that they multicast (remote) procedure calls. In contrast, GroupMessages, as the name indicates, is message-based, because high-level, efficient and consistent multicasting would have required us to implement our own remote procedure call for Java. Directly using the standard library for Java, RMI, creates several problems:
**Transparent syntax:** In RMI, a remote method declaration and call has the same syntax as a local method declaration and call, respectively, though the caller has to address new kinds of exceptions. The callee is completely unaware of whether it was invoked remotely or locally, and thus does not know the identity of the caller. To implement awareness, access control, and concurrency control, it is useful to have this information. For instance, in any IM application, a user is aware of the identity of the person who sent a message. To support such awareness, the callers must explicitly send their identities using procedure parameters, even though the underlying system has this information.
**Synchronous call:** Consistent with its goal of compatibility with local calls, RMI supports synchronous calls, which block the caller until the call completes. Experience has shown that these semantics visibly slow down response times when the input rate is fast, in particular when a telepointer is moved [14]. The reason is that a sending site must wait for input to reach the remote site and an acknowledgement to return before it can send another input. The problem is aggravated by increasing the number of message hops, in particular by sending the message through a relay. To achieve concurrency, it is possible to create multiple threads that make synchronous calls, a standard technique in procedure-based systems [13]. However, this adds to the programmer effort. Moreover, creating multiple threads can tax or exhaust system resources; in the telepointer case, in particular, it is unreasonable to create a thread per move. Thus, the number of outstanding calls would be limited by the size of the thread pool used to send the data. Finally, concurrent invocations of a remote method by different threads can lead to consistency problems. To illustrate, assume that in our running example, a user enters two input strings. If these are sent by two different threads, then because of scheduling uncertainties, the second string may reach the destination before the first one. Perhaps for this reason, some implementations do not allow a method to be invoked concurrently by threads at the same site, which leads to the high-latency problem mentioned above.
**Concurrent remote calls by different sites:** In RMI, remote invocations of the same method by different sites execute in different server threads. These calls may need to be serialized for consistency reasons. This means that the programmer must be careful to use Java's (high-level) synchronization mechanisms to provide such serialization.
**Deadlocks:** Synchronous remote invocation and synchronized concurrent remote calls block threads at the invoking and invoked sites, respectively, which in turn can lead to deadlocks. For example, if a synchronized history object in a slave site makes a remote call to a synchronized serialized history object in a master site, and the latter invokes a method back in the slave history to provide a serialized update to the history, then we have a deadlock. This means that programmers must take special steps to avoid such deadlocks.
**Single-site proxy:** RMI proxies are created at the callee sites and distributed from there to calling sites. They are bound to the creating site. To support multicast RPC, we would have had to change RMI to create proxies at the caller site that forward calls to multiple programmer-controlled server sites.
**Semantics of group function calls:** In RMI, a remote method can return a value. Supporting remote function calls requires us to determine what value should be returned by a multicast function call.
These problems are not unique to RMI and also arise if we were to, for instance, use the RPC layer of .NET. These are typical of RPC support for compiled object-oriented languages.
None of the previous multicasting systems change the syntax of call invocation to provide caller awareness. GroupKit does not face the other issues, as it is built on top of Tcl, an interpreted scripting system in which remote void asynchronous procedure calls are made by sending textual representations of the calls, which are simply forwarded asynchronously by GroupKit libraries. Colab is also built on top of an interpretive language (Object-Oriented Lisp), and requires (pre)compiler support for labeling methods. The paper on Colab [15] does not address the issues above; in particular, it does not indicate whether the remote calls are synchronous, or what happens to the results of broadcast functions. Presumably, as the functions are guaranteed to execute locally, the local results are returned. The issue of multicast proxies can be handled by appropriate (pre)compiler support. Suite provides multicasting of only predefined void procedures provided by a dialogue manager, which are handled by calling asynchronous remote procedures provided by the Suite RPC layer [16].
GroupMessages uses non-blocking message-based multicasting to address these issues. All outgoing and incoming messages go through the system, which can then control synchrony and threading issues. A client is guaranteed that all outgoing messages to an application session are serialized, as well as all of its incoming notifications. Moreover, a relayer guarantees that for a particular application session, messages leave in the same order in which they arrive.
As RMI is built on top of sockets, a message-based abstraction, it arguably provides a higher-level abstraction than sockets. However, GroupMessages does not have most of the disadvantages of sockets, as programmers do not have to create, bind, and connect sockets or implement threads to read and write from them. It does, however, have the fundamental disadvantage of message-based communication: a program that wishes to make a logical procedure call on a remote site must convert or marshal the call's parameters into a message at the caller site, and unmarshal the message back into parameters at the callee site [17].
Like recent Google APIs, GroupMessages does not offer an explicit call to receive a message. It uses the observer pattern to deliver not only session notifications but also received data to interested parties. As a result, programming synchronous communication is more difficult and must be done using non-message-passing abstractions such as semaphores and monitors. Our decision is a consequence of the fact that GroupMessages is designed for synchronous collaborative applications in which response times are degraded by blocking.
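For programmers who do need synchronous receipt, a standard workaround is to layer a blocking receive on top of the listener callbacks. The sketch below does this with a queue; the class is our own addition, and only the objectReceived() callback signature comes from the paper.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Our own sketch, not part of GroupMessages: a monitor-style adapter
// that turns observer-style delivery into a blocking receive() call.
public class BlockingReceiver {
    private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();

    // invoked by the communicator's notification thread
    public void objectReceived(Object msg, String clientName) {
        inbox.offer(msg); // never block the notification thread
    }

    // invoked by application code that wants synchronous semantics
    public Object receive() throws InterruptedException {
        return inbox.take(); // blocks only the calling application thread
    }
}
```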
### E. Multicast Groups
The most fundamental multicasting call provided by GroupMessages is toOthers(), which allows a client that has joined an application session to send an arbitrary data object to all other members of the session. To illustrate, let us continue with our example by outlining how our echoing code was converted into a replicated implementation of the IM user-interface. Figure 6 shows the architecture of this application.
**Figure 6 Replicated IM Architecture**
The echo model and interactor objects of Figure 2 are replaced by extensions for the replicated IM. When the user inputs a line, the IMInteractor calls replicatedAdd() in the model object. This method calls the observableAdd() method of its superclass, marshals the name and parameters of the add operation into a ListEdit message object, and uses toOthers() to multicast this message to other replicas in the application session:
```java
public void replicatedAdd(ElementType anInput) {
int anIndex = size();
super.observableAdd(anIndex, anInput);
ListEdit listEdit = new AListEdit(
OperationName.ADD, anIndex, anInput);
communicator.toOthers(listEdit);
}
```
This message is delivered to a remote replica by calling the objectReceived() listener method:
```java
public void objectReceived(Object msg, String clientName) {
if (msg instanceof ListEdit)
processListEdit((ListEdit<String>) msg, clientName);
}
```
As we see in the code above, this method receives the name of the calling client even though it was not explicitly provided by the caller. This method calls processListEdit(), which unmarshals the message object into parameters of the add operation, and uses these parameters together with the caller name to update the local history and display the input string along with the inputter’s name. The received message does not trigger a call to replicatedAdd(), to prevent an infinite cycle of adds.
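The paper does not show processListEdit(); the sketch below is one plausible implementation, with the ListEdit accessor names (getOperationName(), getIndex(), getElement()) assumed for illustration.

```java
// Hypothetical sketch of the unmarshaling step inside ReplicatedHistory;
// the ListEdit accessor names are our assumptions.
void processListEdit(ListEdit<String> aListEdit, String aClientName) {
    if (aListEdit.getOperationName() == OperationName.ADD) {
        // update the local history without re-multicasting the edit
        super.observableAdd(aListEdit.getIndex(), aListEdit.getElement());
        // show the line together with the remote inputter's name
        System.out.println(aListEdit.getElement() + "(" + aClientName + ")");
    }
}
```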
What we have described above is a standard implementation of the replicated architecture [18] except that messages can be routed through a central server if the relay routing option is used at communicator creation time.
It is possible to use GroupMessages to also implement the centralized architecture [18]. In this architecture, a central master computer stores a master copy of all shared objects, which is typically cached at users’ sites to support efficient reading of these objects. Writes to these objects are first made in the central copy and then copied into the cache. Figures 7 and 8 show the GroupMessages implementation of this architecture for the IM application.
**Figure 7 Master IM in Centralized Architecture**
When a slave interactor receives input, it does not send it directly to the local history. Instead, it uses a unicast call, toClient(), to send the new value to the central client. This call takes an additional parameter indicating the client name:
```java
void addToHistory(String newValue) {
communicator.toClient(
MasterIMModelLauncher.CLIENT_NAME, newValue);
}
```
The master adds the value to the history, and uses toOthers() to send a marshalled message to all slaves:
```java
public void centralizedAdd(
ElementType anInput, String aSourceName) {
int anIndex = size();
super.add(anIndex, anInput);
UserEdit<UserType> userEdit = new AUserEdit(
OperationName.ADD, anIndex, anInput, aSourceName);
communicator.toOthers(userEdit);
}
```
As in the replicated architecture, the marshaled message contains the index and value of the add operation. In addition, it contains the name of the inputter, which the slave extracts to determine the output. The reason for sending this name in the centralized architecture is that the message arrives at the slave from the master model, so the message-sender parameter provided automatically by GroupMessages does not identify the (slave) inputter. In this implementation, each site determines the user-interface, which is why the inputter name is needed at each site.
As all sites display the same user-interface, the master could alternatively compute this output, in which case it would not have to send the inputter name. However, it would have to send different outputs to the inputter and other users. To support such communication, GroupMessages offers two additional multicast calls, toCaller() and toNonCallers(). Code invoked in response to a message sent to client C1 by client C2 can invoke these two calls to send messages to C2 and to all clients other than C1 and C2, respectively. These two calls are inspired by analogous calls provided by the centralized Suite system. Also motivated by Suite, GroupMessages provides the toClients() call, which takes as arguments an object and a list of client names, and multicasts the object to all clients in the list. Finally, it provides the toAll() call to broadcast a message to all clients, including the one that invoked the call. With these calls, GroupMessages can simulate all multicast groups defined by previous systems.
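To sketch this alternative concretely, the hypothetical master method below computes both views itself; toCaller() and toNonCallers() are the calls named above, while the method body and the output strings (mirroring the console views of Figure 3) are our assumptions.

```java
// Sketch of the master computing the per-user output itself; invoked
// from the master's message handler, so toCaller()/toNonCallers() can
// address the inputter and the remaining clients, respectively.
public void centralizedAddWithViews(String anInput, String anInputter) {
    super.add(size(), anInput);
    // the inputter sees its own line echoed, as in Figure 3
    communicator.toCaller(anInput + "(Echo)");
    // every other client sees the line labeled with the inputter's name
    communicator.toNonCallers(anInput + "(" + anInputter + ")");
}
```

With this division of labor, the inputter name no longer needs to travel in the marshaled message.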
In the centralized architecture above, all interactors of a slave model are distribution-aware, as they communicate with the master model. This problem can be solved by having them invoke a special proxy add call on the slave model, which then forwards the call to the remote master model. Distribution-awareness is thereby restricted to the models.
So far, we have shown how GroupMessages can be used to implement coupling and awareness in centralized and replicated systems. It can also be used to implement control functions. Before allowing a change, a client can check with an access-control or concurrency-control vetoer. Authorization and lock information can be shared in a centralized (or replicated) architecture using GroupMessages.
To illustrate, let us extend the IM user interface to provide an access control user interface in which the addInputter and addAdministrator commands are used to allow a specific user to provide input and to give another user administration rights, respectively (Figure 9).
Figure 10 shows how this functionality can be added to the replicated IM history. A special AccessController object processes the addInputter and addAdministrator commands and replicates these operations on all replicas. As the same change is to be made in all replicas, the toAll() call is used.
```java
public void replicatedAddInputter(String aNewInputter) {
    String aUserName = communicator.getName();
    // only administrators may authorize a new inputter
    if (!canAdminister(aUserName)) {
        showNoAdminMessageDialog(aUserName);
        return;
    }
    communicator.toAll(new AnInputAuthorization(aNewInputter));
}
```
An extension of ReplicatedHistory, ControlledHistory, checks with the AccessController object before adding an item.
In this extension, the same session is being used to communicate two kinds of information, the user input and the authorization information, which are processed by different objects, the IMCoupler and the AccessReceiver (Figures 6 and 10, respectively). As GroupMessages is unaware of these two subchannels, it passes an incoming message to all listeners (of a specific application session). Thus, each listener must determine, using characteristics of the received object, whether it should process the object, a disadvantage of message-based communication. In our example, this task is relatively simple, involving only the use of the Java instanceof operator, as the two receivers process different types of objects. Thus, the access receiver ignores ListEdit objects, as shown below:
```java
public void objectReceived(
        Object aMessage, String aSourceName) {
    if (aMessage instanceof AnInputAuthorization)
        processInputAuthorization((AnInputAuthorization) aMessage);
    else if (aMessage instanceof AnAdministratorAuthorization)
        processAdminAuthorization((AnAdministratorAuthorization) aMessage);
}
```
Concurrency control can be similarly implemented by checking and replicating lock information.
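As a sketch of the lock case, the hypothetical code below mirrors replicatedAddInputter() above; AnExclusiveLockRequest, lockTable, and showLockedMessageDialog() are illustrative names of our own. Note that with relayed routing, the relayer's per-application ordering guarantee (mentioned earlier) means every replica observes competing lock requests in the same order, so each replica can deterministically grant the lock to the first requester.

```java
// Hypothetical sketch mirroring the access-control code above; the
// message type and lock-table calls are our own illustrative names.
public void replicatedAcquireLock(String anObjectId) {
    if (lockTable.isLocked(anObjectId)) {
        showLockedMessageDialog(anObjectId);
        return;
    }
    // replicate the request on all replicas, including the local one;
    // relayed routing delivers competing requests in a uniform order
    communicator.toAll(
        new AnExclusiveLockRequest(anObjectId, communicator.getName()));
}
```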
**Figure 9 Access Control User Interface**

**Figure 10 Access Control Architecture**
### F. Send and Receive Filters
As mentioned earlier, one of our goals was to allow delay, modification, and re-ordering of messages to support programmer-controlled consistency. At first glance, the primitives described so far seem sufficient, as we support message-based communication. Instead of forwarding messages to the communicator, message producers can submit them to local consistency modules, which can modify them by, for instance, time-stamping them, and then delay them if necessary. Similarly, consistency modules can receive messages and, after reordering, modifying, and/or delaying them, submit them to the actual message consumers.
However, there are several problems with this approach. First, it requires the message producers and consumers to be consistency-aware, as they must send (receive) messages directly to (from) the communicator or through the consistency modules. Second, the consistency modules must implement some of the functions of GroupMessages, such as registering different kinds of listeners and forwarding messages to them. Third, they must delay messages at the sending/receiving sites, which is a non-trivial task. Finally, in relayed communication, centralized consistency algorithms such as Jupiter [19] require morphing/reordering of messages at the relaying site. The API described so far does not provide interception of these messages. To address these problems, GroupMessages provides several additional concepts.
Assuming that consistency-module implementers want to control only the amount of delay and not how delays are implemented, GroupMessages provides operations that allow programmers to set the minimum and maximum delays to both other clients and the relayer; given a message directed at a site, it delays the message by a random value between the two limits for that site. It allows programmer-defined modules to intercept sent messages after they have been submitted to the communicator but before they have been delayed or sent. Similarly, it allows these modules to intercept received messages after they have been delayed but before they are distributed to listeners. Finally, it allows programmers to intercept sent and received messages at the relayer. An intercepting module is free to not (immediately) forward a message to the next stage in the communication pipeline and/or to modify the message. Such a module is called a filter. The next stage in the pipeline is called a message processor and is passed to the filter as a parameter of a filter setter method.
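A sketch of the wiring implied by this design is shown below; setMinimumDelayToPeer() appears later in the paper, while the other setter names are our guesses at the delay and filter setters described above. The filter classes are the causality filters introduced in the next example.

```java
// Hypothetical wiring sketch; only setMinimumDelayToPeer() is named in
// the paper, the remaining setter names are our assumptions.
communicator.setMinimumDelayToPeer(CATHY, 2000);  // lower bound (ms assumed)
communicator.setMaximumDelayToPeer(CATHY, 3000);  // upper bound (ms assumed)
// each installed filter is handed the next stage (the message processor)
communicator.setSentMessageFilter(new CausalSentMessageFilter());
communicator.setReceivedMessageFilter(new CausalityReceiveFilter());
```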
Figure 11 shows the use of send and receive filters to implement causality in the IM application. After a ReplicatedHistory submits a ListEdit to the communicator, the latter (through subcomponents) wraps the edit in a SentMessage and passes this message to the filterMessage() method of the programmer-defined CausalSentMessageFilter. A SentMessage encapsulates not only messages generated by the client through explicit multicast calls but also system-generated messages resulting from client join and leave requests. The filter is given all messages so that it can, for instance, delay all of them using programmer-defined algorithms. This filter checks if the message is a user message, and if so, extracts the wrapped message, time stamps it, replaces the wrapped message with the timestamped edit, and forwards the modified SentMessage to the next sending stage of the communicator:
```java
public void filterMessage(SentMessage aSentMessage) {
    if (aSentMessage.isUserMessage()) {
        // extract the wrapped edit, timestamp it, and re-wrap it;
        // getUserMessage() is the accessor implied by the prose above
        aSentMessage.setUserMessage(
            causalityManager.timeStamp(aSentMessage.getUserMessage()));
        sentMessageProcessor.processMessage(aSentMessage);
    }
}
```
The dual of this event flow occurs at the receiving replica. The timestamped edits are passed to CausalityReceiveFilter, which removes the timestamps (after possibly buffering the messages) and forwards the list edits to the received-message processor, which forwards them to the programmer-defined receive listener, the IMCoupler we saw earlier. This part of the processing is shown in the trace displayed in the IM console window of Figure 12.
In this trace, Alice, Bob, and Cathy communicate messages directly to each other, and Alice’s messages to Cathy are delayed:
```java
communicator.setMinimumDelayToPeer(CathyP2P.USER_NAME, DELAY_TO_CATHY);
```

**Figure 11 Client-Side Send and Receive Filters**
Alice enters the string “The woods,” in response to which Bob enters “are lovely.” Because of the delay, these messages arrive in reverse order at Cathy's site and are processed in this order by the receive filter. The filter learns from the timestamp of Bob's message that there is an earlier message, so it buffers the message; when Alice's message arrives, it delivers Alice's and Bob's messages, in that order, to the receive-message processor, which, in turn, gives them to the IM coupler in that order.
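The buffer-and-release behavior just described can be sketched as follows. We simplify the causality manager to a single totally ordered sequence number per message (adequate for relayed communication, where the relayer serializes messages); real causal delivery tracks per-sender timestamps, and all type and accessor names here are our assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the receive-side buffering just described; not
// the paper's actual implementation.
public class CausalityReceiveFilter {
    private int nextExpected = 0; // assumes timestamps start at 0
    private final Map<Integer, ReceivedMessage> buffered = new HashMap<>();
    private ReceivedMessageProcessor processor;

    // the communicator's filter setter supplies the next pipeline stage
    public void setMessageProcessor(ReceivedMessageProcessor aProcessor) {
        processor = aProcessor;
    }

    public void filterMessage(ReceivedMessage aMessage) {
        buffered.put(timestampOf(aMessage), aMessage);
        // release the longest in-order prefix: Bob's edit is buffered
        // here until Alice's delayed edit arrives
        while (buffered.containsKey(nextExpected)) {
            processor.processMessage(buffered.remove(nextExpected));
            nextExpected++;
        }
    }

    private int timestampOf(ReceivedMessage aMessage) {
        // assumed accessor for the timestamp added by the send filter
        return ((TimestampedEdit) aMessage.getUserMessage()).getTimestamp();
    }
}
```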
## V. Implementation and Experience
Message filters expose part of the send and receive pipelines. We briefly outline the other aspects by tracing the flow of a multicast call from a sender to a receiver. The sent message, along with the kind of multicast group to which it is addressed (such as others, all, and caller), is wrapped in a SentMessage data object. This object is then given to the sent-message filter, which, as mentioned above, gives it to the message processor. The filtered message is then put in a sent-message bounded buffer, unblocking the caller. The consumer of this buffer is a system-created message-sender thread. At this point, the message takes one of two routes depending on whether it is to be relayed through the central server.
In the case of a relayed message, the message-sender thread computes the delay to the server, sleeps for the required time, and makes an RMI call to the relay to multicast the message. The relayer passes the message to a central multicaster, which, for each destination, wraps the user message along with the name of the source into a ReceivedMessage object, and makes an RMI call at the destination to hand it the message. A separate multicasting thread is created in the relay for each application session, as it is assumed that messages of different applications do not interfere, and thus do not have to be serialized.
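A sketch of the message-sender thread's loop in the relayed case might look like this; the RelayStub interface and the delay handling are our own simplifications, and remote-exception handling is elided.

```java
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of the relayed send path described above; the
// RelayStub interface is our own simplification of the relayer's stub.
interface RelayStub { void multicast(SentMessage m); }

class MessageSender implements Runnable {
    private final BlockingQueue<SentMessage> sentBuffer; // the bounded buffer
    private final RelayStub relay;                       // stub for the relayer
    private final long delayToServerMillis;              // programmer-set delay

    MessageSender(BlockingQueue<SentMessage> aBuffer, RelayStub aRelay,
            long aDelayMillis) {
        sentBuffer = aBuffer;
        relay = aRelay;
        delayToServerMillis = aDelayMillis;
    }

    public void run() {
        try {
            while (true) {
                SentMessage m = sentBuffer.take(); // producer was unblocked earlier
                Thread.sleep(delayToServerMillis); // sleep for the computed delay
                relay.multicast(m);                // synchronous call to the relayer
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();    // allow clean shutdown
        }
    }
}
```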
In the case of a direct message, the steps are similar except for the following differences. A local rather than central multicaster is used to deliver a ReceivedMessage to each destination. Moreover, for each destination, a separate thread is created to send messages to the destination, delaying them if necessary. Thus, messages to different destinations can be sent concurrently. We do not create multiple threads for sending messages to the same destination, to prevent messages from being delivered out of order. As the underlying IPC layer (RMI) supports synchronous sends, this means that the acknowledgement for a message to a destination must arrive before the next message can be sent. Thus, synchronous IPC conflicts with highly synchronous collaboration (such as sharing telepointer moves), even with (asynchronous) threads.
In both the relayed and direct cases, an RMI call is made at a receiving site to deliver the ReceivedMessage object. These calls put the message in a bounded buffer and unblock the RMI thread. A message-receiver thread is the consumer of this buffer. If a message arrives from the relayer, this thread calculates the delay to the server and sleeps for that amount of time. It then delivers the message to the receive-message filter, which gives it to the receive-message processor. The final step is to extract the user data and sender name from the message and pass them as parameters to each receiver listener.
We have used this implementation for creating a variety of student assignments. These include a centralized and replicated implementation of shared Java widgets, and an integrated IM-editor tool that allows users to jointly edit a text area, exchange messages about the editing, and use a telepointer to point at the messages and text area. The assignments involved causality and operation transformation modules for direct and relayed communication, respectively, and jitter filters for reducing jitter in telepointers. None of the previous group communication tools are flexible enough to implement these assignments. Lower-level general purpose distributed computing platforms such as RMI of course offer this flexibility, but programmers would then be responsible for the non-trivial tasks handled by our implementation.
## VI. Conclusions and Future Work
This paper motivates a new set of requirements for multicast including support for caller control of message destinations, centralized and replicated architectures, message delay, ordering and change, direct and relayed communication, and multi-application sessions. It identifies features of remote procedure call that conflict with these requirements such as synchronous calls, transparency, concurrent remote invocations, single-site proxies, and remote function calls. It describes a design and implementation of message multicast that meets the requirements.
The design has several new features including two-level sessions, joining a session as a relay client or direct communicator, automatic awareness of the message sender, send and receive filters, and high-level primitives for adding delay and jitter in both direct and relayed communication.
The paper shows how these features can be used to implement (a) centralized and replicated architectures, and (b) coupling, awareness, control and consistency. In all of the examples, the original single-user code was used unmodified, and additional collaboration functions (such as access control and consistency) were added without changing the basic code for coupling users. Thus, while our primitives require collaboration awareness in the application code, different kinds of awareness such as coupling, control, and consistency awareness can be isolated in different modules.
While our driving problem was education and research, there is no reason why our design would not also be useful for building industrial-strength applications, which, arguably, do not offer more sophisticated synchronous collaboration functions than our running example. Of course, more work is needed to validate our hypothesis; our code is available in a GitHub repository for this validation. Additional research is also needed to integrate our research with the lower-level abstractions supporting RPC and the higher-level abstractions supporting collaboration transparency. This paper provides a basis for investigating such support.
## ACKNOWLEDGMENT
This research was supported in part by the NSF awards IIS 0810861 and IIS 1250702.
## REFERENCES
---
# Question Answering over Knowledge Graphs via Structural Query Patterns
Weiguo Zheng\(^1\), Mei Zhang\(^2\)
\(^1\)Fudan University, China; \(^2\)Wuhan University of Science and Technology, China
zhengweiguo@fudan.edu.cn, zhangmeiontoweb@gmail.com
## Abstract
Natural language question answering over knowledge graphs is an important and interesting task, as it enables common users to gain accurate answers in an easy and intuitive manner. However, it remains a challenge to bridge the gap between unstructured questions and structured knowledge graphs. To address the problem, a natural approach is to build a structured query to represent the input question. Executing the structured query over the knowledge graph can produce answers to the question. Distinct from the existing methods that are based on semantic parsing or templates, we propose an effective approach powered by a novel notion, the structural query pattern, in this paper. Given an input question, we first generate its query sketch, which is compatible with the underlying structure of the knowledge graph. Then, we complete the query graph by labeling the nodes and edges under the guidance of the structural query pattern. Finally, answers can be retrieved by executing the constructed query graph over the knowledge graph. Evaluations on three question-answering benchmarks show that our proposed approach outperforms state-of-the-art methods significantly.
## 1 Introduction
Querying knowledge graphs like DBpedia, Freebase, and Yago through natural language questions has received increasing attention in recent years. In order to bridge the gap between unstructured questions and the structured knowledge graph \(G\), a widely used approach is to build a structured query graph \(q\) to represent the input question such that \(q\) can be executed on \(G\) to retrieve answers to the question (Berant et al. 2013; Zheng et al. 2015; Hu et al. 2018). To this end, there are two streams of research, i.e., semantic parsing based methods and template based methods, both of which suffer from several problems, as discussed next.
**Semantic parsing based methods.** The aim of semantic parsing is to translate natural language utterances into machine-executable logical forms or programs (Gardner et al. 2018). For example, the phrase “director of Philadelphia” may be parsed as \(\lambda x.\text{Director}(x) \land \text{DirectedBy}(\text{Philadelphia}(\text{film}), x)\), where Director, DirectedBy, and Philadelphia(film) are grounded predicates and entities in the specific knowledge graph. Traditional semantic parsers (Zettlemoyer and Collins 2005; Wong and Mooney 2007; Kwiatkowski et al. 2010) require many annotated training examples in the form of syntactic structures or logical forms, which are especially expensive to collect for large-scale knowledge graphs. Another problem is the mismatch between the generated logic forms and the structures (including entities and predicates) that are specified in knowledge graphs (Kwiatkowski et al. 2013; Berant and Liang 2014; Reddy, Lapata, and Steedman 2014). In order to solve the problems above, several efforts have been devoted to lifting these limitations (Yih et al. 2015; Bao et al. 2016). They leverage the knowledge graph at an early stage by applying deep convolutional neural network models to match questions and predicate sequences. It is required to identify the topic entity \(e\) and a core inferential chain that is a directed path from \(e\) to the answer. The final executable query is then iteratively constructed based on the detected chain. However, it is hard to pick out the correct inferential chains (35% of the errors in STAGG (Yih et al. 2015) are caused by incorrect inferential chains). Moreover, it is unreasonable to constrain the chain to be a directed path, since in many cases it may be a general path regardless of direction. For instance, Figure 1 presents the query graphs for two questions, “\(q_1\): Who is starring in Spanish movies produced by Benicio del Toro?” and “\(q_2\): Which artists were born on the same date as Rachel Stevens?”, targeting DBpedia, neither of which contains directed paths. In addition, the search space is uncertain, and it is difficult to determine when the search should terminate.
Instead of training semantic parsers, several methods built upon existing dependency parsers have been proposed (Zou et al. 2014; Ruseti et al. 2015). They try to generate the query graphs according to the dependency parsing results and pre-defined rules. Clearly, it is extremely difficult to enumerate all the rules and to eliminate conflicts among them.
**Template-based methods.** A number of studies focus on using templates to construct query graphs (Unger et al. 2012; Cui, Xiao, and Wang 2016; Abujabal et al. 2017; Zheng et al. 2018), where a template consists of two parts: the natural language pattern and the SPARQL query pattern. The two kinds of patterns are linked through the mappings between their slots. In the offline phase, the templates are manually or automatically constructed. In the online phase, the system tries to retrieve the template that maps to the input question. Then the template is instantiated by filling the slots with entities identified from the question. The generated query graph is likely to be correct if the truly matched template is picked out. Nevertheless, the coverage of the templates may be limited due to the variability of natural language and the large number of triples in a knowledge graph, which leads to the problem that many questions cannot be answered correctly. Furthermore, automatically constructing and managing large-scale, high-quality templates for complex questions remain open problems.
**Our Approach and Contributions.** As discussed above, the semantic parsing based algorithms show good scalability and can answer more questions, while the template-based methods exhibit an advantage in precision. Hence, it is desirable to design an effective approach that integrates both strengths. To this end, there are at least two challenges to be addressed.
**Challenge 1.** Devising an appropriate representation that can capture the query intention and is easy to ground to the underlying knowledge graph. The representation is required to intuitively match or reconstruct the query intention of the input question. Meanwhile, it should be natural to ground it to the knowledge graph, which helps improve the precision of the system.
**Challenge 2.** The completeness of representations should be as high as possible. Although the template-based methods perform well in terms of precision, they suffer from the problem of template deficiency in real scenarios. Guaranteeing the completeness of representations is crucial to enhancing the processing capacity. Moreover, in order to reduce the cost of building such a question answering system, the representations should be easy to construct.
Rather than using semantic parsing or templates, we propose a novel framework based on structural query patterns to build a query graph for the input question in this paper. It comprises three stages, i.e., structural query pattern (abbreviated as $SQP$) generation, $SQP$-guided query construction, and constraint augmentation. In principle, instead of parsing the question $q$ into a logic form that is equipped with specific semantic arguments (including entities and predicates), we just need to identify the shape or sketch of $q$’s query graph in the first stage. This has two benefits: (1) the number of structural patterns for most questions is limited, so they can be enumerated in advance. For instance, there are 4 structural patterns for the questions in LC-QuAD (Trivedi et al. 2017), a benchmark for complex question answering over DBpedia. (2) It is easy to produce a structural pattern with high precision compared to generating complicated logic forms. In the second stage, we build the query graph by extending one entity that is identified according to the question $q$. The construction proceeds under the guidance of the structural pattern. Hence, the search space can be reduced, rather than examining all the predicates adjacent to an entity. Furthermore, it is straightforward to determine whether the extension procedure can terminate. Finally, the constraints specified in the question $q$ are detected to produce the complete structured query for $q$. Note that the procedure involves multiple steps, i.e., $SQP$ generation, entity linking, and relation selection. In summary, we make the following contributions in this paper:
- We propose a novel framework based on structural query patterns to answer questions over knowledge graphs;
- We present an approach that generates a query graph for the input question by applying structural query patterns;
- Experimental results on two benchmarks show that our approach outperforms state-of-the-art algorithms.
## 2 Preliminaries
We aim to build a query graph $g$ for the input question, where $g$ is a graph representation of the question that can be executed on the knowledge graph $G$. Formally, it is defined in Definition 2.1.
**Definition 2.1.** (Query graph, denoted by $g$). A query graph for the input question satisfies the following conditions: (1) each node in $g$ corresponds to an entity $e \in G$, a value $v \in G$, or a variable; (2) each edge in $g$ corresponds to a predicate or property in $G$; (3) $g$ is subgraph isomorphic to $G$ by letting the variable in $g$ match any node in $G$.
The existing semantic parsing based methods map a natural language question $q$ to logical forms that contain phrases from $q$ or entities/predicates in the knowledge graph. The template-based algorithms bridge the gap between unstructured $q$ and structured $G$ by using templates. Different from them, we propose a novel approach by applying structural query patterns.
**Definition 2.2.** (Structural query pattern, abbreviated as $SQP$). The structural query pattern of a question $q$ is the structural representation of $q$'s query graph, i.e., the structure that remains after removing all node and edge labels from the query graph $g$ of $q$.
Note that the type-node and its adjacent edges are removed as well. A structural query pattern can also be called a query sketch or the shape of the query graph.
Since most questions involve just a few entities in the knowledge graph $G$, the number of structural query patterns is very limited in real scenarios. Therefore, we can enumerate all these patterns in advance. We observe that each structural query pattern of all the questions in LC-QuAD (Trivedi et al. 2017), QALD-7 (Usbeck et al. 2017), QALD-8 (Usbeck et al. 2018b), and QALD-9 (Usbeck et al. 2018a) contains at most 4 nodes. Note that LC-QuAD consists of 5,000 questions, 78% of which are complex questions with multiple relations and entities. The structural query patterns consisting of at most 4 nodes are listed in Figure 2. There are 12 structural query patterns in total, where the first pattern $p_0$ contains only one node, as the type nodes are removed from the query graph as well. For instance, the structural query pattern for the question “give me all the automobiles” is $p_0$. The questions in LC-QuAD involve only structural query patterns $p_1$-$p_4$. The questions in QALD-7 and QALD-8 involve structural query patterns $p_0$-$p_6$ and $p_0$-$p_5$, respectively. The questions in QALD-9 involve structural query patterns $p_0$-$p_4$ and $p_7$. In comparison, capturing these questions precisely would require thousands of templates. Clearly, the structural query patterns exhibit a strong ability to capture the structural representations of natural language questions.
## 3 Structural Query Pattern Recognition
Given a question $q$ in the online phase, we need to produce the structural query pattern ($SQP$) for $q$. Since the structural query patterns have been enumerated in advance, we can treat $SQP$ recognition as a classification task, where each class corresponds to a structural query pattern.
**Data preparation.** First of all, we prepare the training data carefully to enhance overall performance. Notice that our defined structural query patterns do not contain any specific labels including phrases, entities, predicates, and properties. However, a natural language question consists of a sequence of words carrying semantic meanings. To make it fit the classification model well, we use the syntax tree of each question rather than the question itself. By utilizing the existing syntactic parsing tools, e.g., Stanford parser (Chen and Manning 2014), we can get the syntactic structure of each question. To avoid the effect of specific words, we remove them from the syntactic parsing results and only retain the syntactic tags and their relations.
According to the structure of the SPARQL query, each question $q$ is assigned a category label that corresponds to one of the 12 structural query patterns listed in Figure 2. Several question answering datasets are available, such as LC-QuAD and QALD, which provide both questions and the corresponding SPARQL queries. Thus it is easy to collect pairs of syntactic structure and the corresponding category label as the training data.
**Model training.** In this phase, we train the classification model to predict the category label for an input question. In this paper, we choose two models, the Text CNN model (Kim 2014) and the RNN Attention model (Yang et al. 2016), and train them on the data collected in the previous subsection. The Text CNN model performs well in short-text classification. Its basic principle is learning task-specific vectors through fine-tuning to offer further gains in performance. The RNN Attention model uses attention to capture the dependencies of long text and can retain key information from the text. However, these two models can only output a single label.
The output layers of the two models above just return the label with the largest confidence score. In order to increase the ability to deliver the correct label, we modify the two models so that they can assign each label $l$ a confidence score that represents the probability of being the correct label. In the online phase, we use the top-$k$ labels with the highest score.
**Model Ensemble.** Benefiting from its capability to capture useful information in long text, the RNN Attention model performs better on complex, long questions that contain more than one entity. In contrast, we find that the Text CNN model is better than the RNN Attention model at dealing with short and simple questions.
**Example 1.** Let us consider the question “Name the city whose province is Metropolitan City of Venice and has leader as Luigi Brugnaro?”. It is a complex, long question, as multiple relations and entities are involved. The prediction result of the Text CNN model is pattern $p_2$ in Figure 2. However, the correct pattern is $p_4$, as delivered by the RNN Attention model.
Since these two models exhibit different advantages when predicting the category label, they can be assembled to make a prediction. We use a simple neural network as the training model to ensemble these two models. A softmax output layer is included to compute the top-$k$ structural query patterns with the largest scores. The ranked lists of $SQP$s returned by the RNN Attention model and the Text CNN model for each question are integrated as the training data.

## 4 Query Graph Generation
Generally, a query graph contains one entity at least, which can help reduce the search space. Hence, we need to identify the entity \( e \) from the knowledge graph, where \( e \) corresponds to a phrase (named entity) in the question. It is actually the task of entity linking. Then the query graph is constructed by extending \( e \) under the guidance of \( SQP \).
### 4.1 Entity Linking
We perform entity linking, which finds the entity in the underlying knowledge graph corresponding to a phrase in the question. Conducting entity linking involves two steps, i.e., identifying named entities in the question and then finding their matching entities in the target knowledge graph \( G \).
In the first step, we use the named entity recognition (abbreviated as NER) model (Lample et al. 2016) to recognize entity keywords in the given question, where the model is based on bidirectional LSTMs and conditional random fields. The identified entity keywords are called entity phrases. In the second step, we link the entity phrase to an entity in \( G \) by computing the similarity between the entity phrase and candidate entities. Note that it is not necessary to identify all the entities in the question, as we just need one entity to locate the candidate subgraphs in the knowledge graph.
We observe that there are two problems to be addressed, i.e., phrase truncation and multiple mapping entities. Algorithm 1 outlines the procedure.
1. **Phrase truncation.** A phrase may be truncated, which leads to a false entity phrase and a wrong or missing mapping entity. For instance, in the question “Rashid Behbudov State Song Theatre and Baku Puppet Theatre can be found in which country?”, the entity phrase “Rashid Behbudov State Song Theatre” may be truncated to “Song Theatre”, which cannot be linked to the correct entity. To handle this, we also consider the longer phrases in the question that contain the identified phrase (line 4 of Algorithm 1).
2. **Multiple mapping entities.** Finding the candidate mappings for each entity phrase by using DBpedia Lookup may return multiple entities. Generally, there is only one entity in \( G \) that matches an entity phrase in the question. Hence, it is desirable to select the correct one. To this end, we compute a matching score for each candidate entity, where the matching score between each entity phrase \( phr \) and candidate entity \( e \) is computed as shown in Equation (1).
\[
ms(phr,e) = \alpha_1 \cdot \text{imp}(e) + \alpha_2 \cdot \text{sim}(phr,e) + \alpha_3 \cdot \text{rel}(q, evd(e))
\]
As defined above, the matching score consists of three components, i.e., the importance of the entity \( e \) (denoted as \( \text{imp}(e) \)), the similarity between \( phr \) and \( e \) (denoted as \( \text{sim}(phr,e) \)), and the relevance between \( q \) and the evidence text of \( e \) (denoted as \( \text{rel}(q, evd(e)) \)). They are formally defined as shown in Equations (2) - (4). The parameters \( \alpha_1 \), \( \alpha_2 \), and \( \alpha_3 \) are weights of the three components, respectively.
\[
\text{imp}(e) = \frac{1}{\text{rank}(e)}
\]
\[
\text{sim}(phr,e) = \frac{1}{\text{lev}(phr,e) + 1}
\]
where \( \text{lev}(phr,e) \) can be computed with the widely used Levenshtein distance (Levenshtein 1966) for measuring the difference between two strings.
The third component measures the relevance between \( q \) and the evidence text \( evd(e) \) of \( e \); e.g., the corresponding Wikipedia page \( doc \) of entity \( e \) can be taken as its evidence text. We compute the similarity between the question \( q \) and each sentence \( s_i \) in \( doc \) as shown in Equation (4), where \( vec(q) \) and \( vec(s_i) \) denote the vector representations of \( q \) and \( s_i \), respectively.
\[
\text{rel}(q, evd(e)) = \max_{s_i \in doc} \frac{\text{vec}(q) \cdot \text{vec}(s_i)}{||\text{vec}(q)|| \cdot ||\text{vec}(s_i)||}
\]
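For concreteness, a minimal code sketch of Equations (1)-(3) follows; the weight values are arbitrary placeholders rather than tuned values, and the lookup rank and evidence-relevance inputs are assumed to come from DBpedia Lookup and Equation (4), respectively.

```java
// A minimal sketch of Equations (1)-(3); the alpha weights are
// illustrative placeholders, not values tuned in the paper.
public final class MatchingScore {
    static final double ALPHA1 = 0.3, ALPHA2 = 0.4, ALPHA3 = 0.3;

    static double score(String phrase, String entityLabel,
            int lookupRank, double evidenceRelevance) {
        double imp = 1.0 / lookupRank;                              // Eq. (2)
        double sim = 1.0 / (levenshtein(phrase, entityLabel) + 1);  // Eq. (3)
        return ALPHA1 * imp + ALPHA2 * sim
                + ALPHA3 * evidenceRelevance;                       // Eq. (1)
    }

    // standard dynamic-programming Levenshtein distance
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }
}
```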
```
Algorithm 1  Entity Linking
Input:  input question q and knowledge graph G
Output: entity e0 matching a phrase phr0 in q
 1: EP ← identify all the entity phrases in q using the NER tool
 2: score ← 0
 3: for each phrase phr in EP do
 4:     PX ← the phrases in q containing phr whose length is not larger than θ
 5:     PX ← PX ∪ {phr}
 6:     EC ← ∅
 7:     for each phrase phr' in PX do
 8:         if phr' can match at least one entity in G then
 9:             EC ← EC ∪ the candidate entities matching phr'
10:     for each candidate entity e in EC do
11:         compute the matching score ms(phr, e) between phr and e
12:     e' ← the entity with the largest matching score in EC
13:     if score < ms(phr, e') then
14:         score ← ms(phr, e')
15:         e0 ← e'; phr0 ← phr
16: return e0 and phr0
```
Finally, the entity with the largest matching score is returned.
### 4.2 SQP-guided Query Graph Construction
With the predicted structural query pattern \( p \) and one identified entity \( e \), we are ready to construct the query graph. The basic idea is to instantiate the pattern \( p \) through a data-driven search under the guidance of \( p \). Specifically, the search starts from the entity node \( e \) and retrieves a subgraph that contains \( e \) and is structurally isomorphic to the structural query pattern when all node/edge labels are ignored. In order to construct the query graph, two tasks should be completed, i.e., locating the position of \( e \) in \( p \) and extending the query graph.
**Task 1:** Locate the position of entity node \( e \) in the pattern \( p \).
Although both \( p \) and \( e \) can be obtained as discussed above, the position of the node \( e \) in \( p \) is unknown. To locate the position of entity \( e \) in the pattern \( p \), we introduce an important observation based on the "non-redundancy assumption": if the question \( q \) has only one return variable, then all the words in \( q \) are helpful to depict the query intent.
**Lemma 4.1.** The entity \( e \in G \) identified for the question \( q \) is not an intermediate node in the structural query pattern \( p \).
**Proof.** The underlying rationale is that the question would contain useless words if \( e \) were an intermediate node in \( p \). We prove it by contradiction. Assume that \( e \) is an intermediate node. Then there is at least one triple \((e, r, x)\), where \( x \) is an entity or a literal string and \( r \) is the incident relation. Since \( e \) and \( x \) are both constant nodes, this triple contributes nothing to restraining the variables in the other triples. It follows that the node \( x \) and relation \( r \) are useless for specifying the answers, which contradicts the non-redundancy assumption above. \( \square \)
Lemma 4.1 works under the premise that the query graph of a given question does not contain any cycles. Note that all the query graphs in the two benchmarks used in this paper are trees. Furthermore, the answers can still be retrieved even if the corresponding query graph is not a tree, since a tree has fewer constraints than a graph; we can then refine the answers according to the information in the question that is not covered by the tree pattern.
**Example 2.** Assume that \( p \) is the third pattern in Figure 2, i.e., \( p = p_2 \), and that \( e \) is the intermediate node. Then one of the two leaf nodes represents the return variable, and the other is an entity \( e' \) or a literal string \( l \). This implies a triple \((e, r_1, e')\), \((e', r_1, e)\), or \((e, r_1, l)\), where \( r_1 \) is the relation incident to \( e' \) or \( l \). As both \( e \) and \( e' \) (resp. \( l \)) are specified entities (resp. a specified literal string), this triple contributes nothing to restraining the variable in the other triple \((?, r_2, e)\) or \((e, r_2, ?)\), where \( r_2 \) is the relation incident to the variable node \( ? \). Hence, \( e \) cannot be an intermediate node in \( p \). The analysis holds for the other patterns as well.
As supporting evidence from real data analysis, we find that none of the identified entities are intermediate nodes in the benchmarks LC-QuAD and QALD.
**Task 2:** Query graph extension. With the entity and its position in \( p \), we build the query graph in this task. The main principle is to extend the query graph (initially just the entity node \( e \)) gradually by adding relations, entities, or variables, with the help of the structure of \( p \). The expansion procedure is depicted in Algorithm 2. Note that we select the pattern with the largest confidence score for simplicity.
Since a structural query pattern may contain multiple non-intermediate nodes, it is not trivial to determine the correct one. We propose an extension procedure that follows a data-driven manner. Initially, we make a copy \( Q \) of the structural query pattern \( p \). If the non-intermediate nodes in \( p \) have both incoming and outgoing edges, e.g., SQPs 1, 2, 6, 7, 9, and 10 in Figure 2, both the incoming and outgoing relations of entity \( e \) are collected. Otherwise, we only need to consider the incoming or outgoing relations (lines 5-8 in Algorithm 2). Then we compute the relevance between each candidate relation \( r \) and \( q \), and the relation with the largest relevance is selected. As a relation \( r \) may be composed of multiple words, e.g., dateOfBirth, it is split into a sequence of words. The relevance \( rel(q, r) \) between \( q \) and each candidate relation \( r \) is calculated by Equation (5), where \( \lambda \) is a weight ranging from 0 to 1, and \( q_i \) and \( r_j \) represent the \( i \)th and \( j \)th words in \( q \) and \( r \), respectively.
\[
rel(q, r) = \sum_{i=1}^{|q|} \sum_{j=1}^{|r|} \left[ \lambda \cdot \cos(q_i, r_j) + (1 - \lambda) \cdot \frac{1}{\text{lev}(q_i, r_j) + 1} \right] \tag{5}
\]
As shown in Equation (5), we use two metrics to measure the relevance between two words \( w_1 \) and \( w_2 \). The first is the cosine score \( \cos(w_1, w_2) \) between the vectors of \( w_1 \) and \( w_2 \), obtained by training word2vec over the GloVe data; two words are semantically closer to each other if their cosine score is larger. The other metric is the Levenshtein distance \( \text{lev}(w_1, w_2) \), which calculates the edit cost between two words. After obtaining the relation \( r \) that is most relevant to \( q \) (line 9 of Algorithm 2), the specific position of \( e \) can be determined according to the direction of \( r \), and we include \( e \) and \( r \) in \( Q \). Taking the nodes and entities adjacent to the currently labeled subgraph \( Q_L \) (i.e., the part of \( Q \) already labeled with entities, variables, and relations) as starting nodes, the extension procedure proceeds iteratively until all the nodes and edges have been labeled. Finally, the query graph \( Q \) is returned.
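A corresponding Python sketch of Equation (5) is shown below, reusing the `levenshtein` helper defined earlier. The camel-case splitting regex and the embedding lookup interface `emb` are hypothetical conveniences, not the paper's code.

```python
import re
import numpy as np

def relation_relevance(q_words, relation, emb, lam=0.5):
    """Equation (5): relevance between a question and a candidate relation.

    `q_words` is the tokenized question, `emb` maps a word to its vector
    (e.g., pretrained GloVe/word2vec embeddings), and `lam` is lambda.
    The camel-cased relation name (e.g., dateOfBirth) is split into words.
    """
    r_words = [w.lower() for w in re.findall(r'[A-Za-z][a-z]*', relation)]
    score = 0.0
    for qi in q_words:
        for rj in r_words:
            v, w = emb[qi], emb[rj]
            cos = float(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))
            score += lam * cos + (1 - lam) / (levenshtein(qi, rj) + 1)
    return score
```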
### 4.3 Constraint Augmentation
Executing the query graph constructed above returns a list of answers that contains the correct ones. However, it may also produce undesired entities or values, as a question may impose constraints on the query graph. For instance, the question "What is the highest mountain in Italy?" specifies the ordinal constraint "highest" on mountains.
We divide the constraints into 4 categories as follows:
- **answer-type constraint**, e.g., “which actor”;
- **ordinal constraint**, e.g., “highest”;
- **aggregation constraint**, e.g., “how many”;
- **comparative constraint**, e.g., “larger than”.
Similar to the approaches [Yih et al. 2015](#), [Bao et al. 2016](#), we employ simple rules to detect these constraints and augment them to the query graph.
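A minimal sketch of such a rule-based detector is shown below. The trigger patterns are illustrative stand-ins, not the actual rules used in the cited approaches or in qaSQP.

```python
import re

# Hedged sketch: the trigger lists below are illustrative, not the paper's rules.
CONSTRAINT_RULES = {
    "answer_type": re.compile(r'\b(which|what)\s+(actor|person|city|country|mountain|river)\b'),
    "ordinal":     re.compile(r'\b(\w+est|first|last)\b'),              # e.g. "highest"
    "aggregation": re.compile(r'\bhow many\b|\bcount\b'),               # e.g. "how many"
    "comparative": re.compile(r'\b(larger|smaller|more|less) than\b'),  # e.g. "larger than"
}

def detect_constraints(question: str):
    q = question.lower()
    return {name: m.group(0)
            for name, rx in CONSTRAINT_RULES.items()
            if (m := rx.search(q))}

# detect_constraints("What is the highest mountain in Italy?")
# -> {'ordinal': 'highest'}
```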
### 5 Experiments
In this section, we evaluate the proposed method systematically and compare it with the existing algorithms.
#### 5.1 Datasets and Experimental Settings
We use DBpedia ([Auer et al. 2007](#)) as the target knowledge graph. DBpedia is an open-domain knowledge graph that consists of 6 million entities and 1.3 billion triples as reported in the statistics of DBpedia 2016-10.
Two question answering benchmarks, LC-QuAD ([Trivedi et al. 2017](#)) and QALD ([Usbeck et al. 2017](#)), both defined over DBpedia, are used to evaluate our proposed approach.
- **LC-QuAD** is a gold-standard question answering dataset that contains 5000 pairs of natural language questions and SPARQL queries, 728 of which are simple questions with a single relation and a single entity.
- **QALD**-8 ([Usbeck et al. 2018b](#)) and QALD-9 ([Usbeck et al. 2018a](#)). QALD is a long-running question-answering evaluation campaign. It provides a set of natural language questions, the corresponding SPARQL queries and answers. QALD-8 contains 219 training questions and 42 test questions. QALD-9 contains 408 training questions and 150 test questions.
We randomly select 500 questions from LC-QuAD as the test data. Our models are trained for 100 epochs, with early stopping enabled based on validation accuracy, and we use an 80/20 split of the remaining questions for training and validation.
In the RNN-Attention model, we set the dimensionality of the character embedding to 128 and the dropout keep probability to 0.5. The numbers of hidden units and attention units are both 128, and the number of hidden layers is 1. In the Text-CNN model, we set the dimensionality of the character embedding to 128, the number of filters per filter size to 128, the dropout keep probability to 0.5, and the L2 regularization lambda to 0.001.
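For reference, the settings above can be collected into configuration dictionaries (a convenience sketch; the field names are ours, not from the original code):

```python
RNN_ATTENTION_CONFIG = {
    "char_embedding_dim": 128,
    "hidden_units": 128,
    "attention_units": 128,
    "num_hidden_layers": 1,
    "dropout_keep_prob": 0.5,
}
TEXT_CNN_CONFIG = {
    "char_embedding_dim": 128,
    "num_filters_per_size": 128,
    "dropout_keep_prob": 0.5,
    "l2_reg_lambda": 0.001,
}
```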
Following the conventions of gAnswer2 ([Hu et al. 2018](#)), macro precision, recall, and F1-measure are used to evaluate performance. We compare our method, denoted qaSQP, with Frankenstein ([Singh et al. 2018](#)), QAKIS ([Cabrio et al. 2013](#)), QASystem ([Usbeck et al. 2018a](#)), TeBaQA ([Usbeck et al. 2018a](#)), WDAqua ([Usbeck et al. 2017](#)), gAnswer2 ([Hu et al. 2018](#)), and qaSearch, where qaSearch constructs the query graph following a data-driven search rather than using the \( SQP \) as guidance.
### 5.2 Experimental Results
#### Comparing with the previous methods
Table 1 presents the performance in terms of generated query graphs on the three datasets, where qaSQP-CE represents the proposed method that is fed one correctly identified entity initially. As can be seen from the table, our proposed method outperforms the existing approaches by a large margin, with a 17.4% absolute gain on LC-QuAD. The performance improves further if our system is given one correctly identified entity in DBpedia. The performance on QALD-8 and QALD-9 is worse than that on LC-QuAD for two main reasons: (1) QALD-8 and QALD-9 provide less training data; (2) they are more challenging, as several questions are outside the scope of the system. Further analysis is provided in the next subsection.
Table 2 and Table 3 report the question answering results on QALD-8 and QALD-9, respectively. It is clear that the proposed method qaSQP outperforms the state-of-the-art competitors greatly. Basically, it benefits from the novel
### Table 1: Performance of query graph generation on LC-QuAD, QALD-8, and QALD-9
<table>
<thead>
<tr>
<th>Method</th>
<th>LC-QuAD Precision</th>
<th>LC-QuAD Recall</th>
<th>LC-QuAD F1-Measure</th>
<th>QALD-8 Precision</th>
<th>QALD-8 Recall</th>
<th>QALD-8 F1-Measure</th>
<th>QALD-9 Precision</th>
<th>QALD-9 Recall</th>
<th>QALD-9 F1-Measure</th>
</tr>
</thead>
<tbody>
<tr>
<td>Frankenstein</td>
<td>0.480</td>
<td>0.490</td>
<td>0.485</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.198</td>
<td>0.191</td>
<td>0.193</td>
</tr>
<tr>
<td>qaSearch</td>
<td>0.357</td>
<td>0.336</td>
<td>0.344</td>
<td>0.243</td>
<td>0.243</td>
<td>0.243</td>
<td>0.401</td>
<td>0.413</td>
<td>0.405</td>
</tr>
<tr>
<td>qaSQP</td>
<td>0.748</td>
<td>0.704</td>
<td>0.718</td>
<td>0.439</td>
<td>0.439</td>
<td>0.439</td>
<td>0.625</td>
<td>0.568</td>
<td>0.568</td>
</tr>
<tr>
<td>qaSQP-CE</td>
<td>0.835</td>
<td>0.813</td>
<td>0.827</td>
<td>0.558</td>
<td>0.663</td>
<td>0.620</td>
<td>0.522</td>
<td>0.625</td>
<td>0.568</td>
</tr>
</tbody>
</table>
The results of other methods are obtained from the result reports ([Usbeck et al. 2018b](#)) and ([Usbeck et al. 2018a](#)).
framework of answering questions. Specifically, the query graph is easy to retrieve by reducing the search space under the guidance of the recognized query sketch and one identified entity from the question. In contrast, the competitors are unaware of the query sketch, which increases the difficulty of constructing the correct query graphs and retrieving the answers. For instance, the method qaSearch performs much worse than qaSQP, which confirms the superiority of the SQP-based framework.
**Evaluation of prediction models.** Since a key component of the system is introducing the structural query patterns, the models that predict the SQP are very important. So we study the performance of these prediction models. As presented in Table 4, the ensemble model outperforms the two individual models RNN-Attention and Text-CNN in terms of precision, recall, and F1 score on both QALD-8 and QALD-9. It means that the proposed ensemble model is effective.
**Effect of SQP recognition and entity linking.** The modules of SQP recognition and entity linking are very critical in the proposed system. However, they are not guaranteed to produce the correct SQP patterns or mapping entities. In order to study their effect and the boundary of the question answering ability, we conduct the experiments by providing the correct structural query patterns or mapping entities. Let qaSQP-CP denote the method that is fed by the correct SQP. Let qaSQP-CE denote the method that is fed by one correctly identified entity initially.
### Table 2: Question answering results on QALD-8
<table>
<thead>
<tr>
<th>Method</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Measure</th>
</tr>
</thead>
<tbody>
<tr>
<td>QAKIS</td>
<td>0.061</td>
<td>0.053</td>
<td>0.056</td>
</tr>
<tr>
<td>WDAqua-core0</td>
<td>0.391</td>
<td>0.407</td>
<td>0.387</td>
</tr>
<tr>
<td>gAnswer2</td>
<td>0.386</td>
<td>0.390</td>
<td>0.388</td>
</tr>
<tr>
<td>qaSearch</td>
<td>0.244</td>
<td>0.244</td>
<td>0.244</td>
</tr>
<tr>
<td>qaSQP</td>
<td><strong>0.459</strong></td>
<td><strong>0.463</strong></td>
<td><strong>0.461</strong></td>
</tr>
</tbody>
</table>
### Table 3: Question answering results on QALD-9
<table>
<thead>
<tr>
<th>Method</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Measure</th>
</tr>
</thead>
<tbody>
<tr>
<td>Elon</td>
<td>0.049</td>
<td>0.053</td>
<td>0.050</td>
</tr>
<tr>
<td>QASystem</td>
<td>0.097</td>
<td>0.116</td>
<td>0.098</td>
</tr>
<tr>
<td>TeBaQA</td>
<td>0.129</td>
<td>0.134</td>
<td>0.130</td>
</tr>
<tr>
<td>WDAqua-core1</td>
<td>0.261</td>
<td>0.267</td>
<td>0.250</td>
</tr>
<tr>
<td>gAnswer2</td>
<td>0.293</td>
<td>0.327</td>
<td>0.298</td>
</tr>
<tr>
<td>qaSearch</td>
<td>0.236</td>
<td>0.241</td>
<td>0.237</td>
</tr>
<tr>
<td>qaSQP</td>
<td><strong>0.458</strong></td>
<td><strong>0.471</strong></td>
<td><strong>0.463</strong></td>
</tr>
</tbody>
</table>
### Table 4: Results of prediction models
<table>
<thead>
<tr>
<th>Method (on dataset)</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Measure</th>
</tr>
</thead>
<tbody>
<tr>
<td>RNN-Attention (QALD-8)</td>
<td>0.82</td>
<td>0.83</td>
<td>0.82</td>
</tr>
<tr>
<td>Text-CNN (QALD-8)</td>
<td>0.79</td>
<td>0.78</td>
<td>0.78</td>
</tr>
<tr>
<td>Ensemble model (QALD-8)</td>
<td><strong>0.82</strong></td>
<td><strong>0.86</strong></td>
<td><strong>0.84</strong></td>
</tr>
<tr>
<td>RNN-Attention (QALD-9)</td>
<td>0.83</td>
<td>0.78</td>
<td>0.80</td>
</tr>
<tr>
<td>Text-CNN (QALD-9)</td>
<td>0.78</td>
<td>0.72</td>
<td>0.75</td>
</tr>
<tr>
<td>Ensemble model (QALD-9)</td>
<td><strong>0.85</strong></td>
<td><strong>0.79</strong></td>
<td><strong>0.82</strong></td>
</tr>
</tbody>
</table>
As shown in Table 5, all the methods equipped with correct SQPs or entities outperform the original method qaSQP. Note that the results on LC-QuAD are reported with respect to the performance on constructed structural query patterns. We observe that the improvement gained by qaSQP-CP is subtle. Moreover, we can find that qaSQP-CE performs much better than qaSQP-CP on both QALD-8 and QALD-9. It indicates that the system qaSQP can almost find the correct structural query patterns. Meanwhile, there is still much room to improve the initial entity linking.
**Effect of the number of returned patterns** $k$. We also study the effect of the number of returned patterns, denoted by $k$, of the prediction model. Figures 3(a) and 3(b) depict the results on QALD-8 and QALD-9, respectively. The parameter $k$ is varied from 1 to 3. As shown in the two figures, the precision, recall, and F1 score tend to be stable when $k$ is 2 and 3. Hence, $k$ is set to 2 by default in our experiments.
**Error Analysis.** Although our approach substantially outperforms existing methods, there is still much room for improving the performance on QALD-8 and QALD-9. For instance, besides the errors caused by entity linking (29%), the precision of predicted structural query patterns is 82% for the test questions in QALD-8. Moreover, many questions in QALD-8 are very challenging: we find that 12 of the 42 test questions leave out some important information or require external knowledge to find the correct answers, which increases the difficulty of answering for a system (41% of the errors). For example, the question "How big is the earth's diameter?" cannot be answered directly since there is only a property "meanRadius" in DBpedia. To answer this question, the external knowledge that the diameter is two times the radius is required. The correct SPARQL query should be "SELECT DISTINCT (xsd:double(?radius)*2 AS ?diameter) WHERE { res:Earth dbo:meanRadius ?radius . }". The remaining 17% of the errors are caused by incorrect label assignments during query graph extension.
### 6 Conclusion and Future Work
In this paper, we focus on constructing query graphs for answering natural language questions over a knowledge graph. Unlike previous methods, we propose a novel framework based on structural query patterns. Specifically, we define structural query patterns that capture just the structural representations of input questions. Under the guidance of structural query patterns, the query graphs can be formulated. Our experiments show that the proposed approach outperforms the competitors significantly in terms of building query graphs and generating answers. In the future, we will explore how to mitigate the effect of entity-linking errors on the whole system. Applying structured learning techniques to SQP generation will also be investigated.
NEURAL NETWORKS FOR MODELING SOURCE CODE EDITS
Anonymous authors
Paper under double-blind review
ABSTRACT
Programming languages are emerging as a challenging and interesting domain for machine learning. A core task, which has received significant attention in recent years, is building generative models of source code. However, to our knowledge, previous generative models have always been framed in terms of generating static snapshots of code. In this work, we instead treat source code as a dynamic object and tackle the problem of modeling the edits that software developers make to source code files. This requires extracting intent from previous edits and leveraging it to generate subsequent edits. We develop several neural networks and use synthetic data to test their ability to learn challenging edit patterns that require strong generalization. We then collect and train our models on a large-scale dataset consisting of millions of fine-grained edits from thousands of Python developers. From the modeling perspective, our main conclusion is that a new composition of attentional and pointer network components provides the best overall performance and scalability. From the application perspective, our results provide preliminary evidence of the feasibility of developing tools that learn to predict future edits.
1 INTRODUCTION
Source code repositories are in a state of continuous change, as new features are implemented, bugs are fixed, and refactorings are made. At any given time, a developer will approach a code base and make changes with one or more intents in mind. The main question in this work is how to observe a past sequence of edits and then predict what future edits will be made. This is an important problem because a core challenge in building better developer tools is understanding the intent behind a developer’s actions. It is also an interesting research challenge, because edit patterns cannot be understood only in terms of the content of the edits (what was inserted or deleted) or the result of the edit (the state of the code after applying the edit). An edit needs to be understood in terms of the relationship of the change to the state where it was made, and accurately modeling a sequence of edits requires learning a representation of the past edits that allows the model to generalize the pattern and predict future edits.
As an example, consider Figure 1. We show two possible edit sequences, denoted as History A and History B. Both sequences have the same state of code after two edits (State 2), but History A is in the process of adding an extra argument to the foo function, and History B is in the process of removing the second argument from the foo function. Based on observing the initial state (State 0) and the sequence of edits (Edits 1 & 2), we would like our models to be able to predict Edit 3. In the case of History A, the specific value to insert is ambiguous, but the fact that there is an insertion at that location should come with reasonably high confidence.
The main challenge in modeling sequences of edits is in how to develop good representations that will both capture the required information about intent, as well as scale gracefully with the length of the sequence. We consider two representations of edits that we call explicit and implicit representations. The explicit representation explicitly instantiates the state resulting from each edit in the sequence, while the implicit representation instantiates a full initial state and then subsequent edits in a more compact diff-like representation. On the explicit representation we consider a hierarchical recurrent pointer network model as a strong but computationally expensive baseline. On the implicit representation, we consider a vanilla sequence-to-sequence model, and a two-headed attention-based model with a pointer network head for producing edit positions and a content head for producing edit contents.
Figure 1: Two illustrative edit sequences. History A and B share the same State 2, but based on the histories, it is more likely that History A will continue by modifying the call to `foo` to take an extra argument and History B will continue by modifying the definition of `foo` to take only one argument.
These models demonstrate tradeoffs that arise from different problem formulations and inform design decisions for future models of edit sequences.
On carefully designed synthetic data and a large-scale dataset of fine-grained edits to Python source code, we evaluate the scalability and accuracy of the models on their ability to observe a sequence of past edits and then predict future edits. We show that the two-headed attention model is particularly well-suited to achieving high accuracy, well-calibrated confidence, and good scalability on the real data, which makes us optimistic about future prospects of developer tools that learn to extract intent from developers as they make edits to large, real code bases. In total, this work formalizes the problem of learning from and predicting edit sequences, provides an initial exploration of model space, and demonstrates applicability to the real-world problem of learning from edits that developers make to source code.
2 PROBLEM FORMULATION
Implicit vs. Explicit Data Representation. The first question is how to represent edit sequence data. We define two data formats having different tradeoffs. The explicit format (Figure 2 (a)) represents an edit sequence as a sequence of sequences of tokens in a 2D grid. The inner sequences index over tokens in a file, and the outer sequence indexes over time. The task is to consume the first t rows and predict the position and content of the edit made at time t. The implicit format (Figure 2 (b)) represents the initial state as a sequence of tokens and the edits as a sequence of (position, content) pairs.
The explicit representation is conceptually simple and is relatively easy to build accurate models on top of. The downside is that processing full sequences at each timestep is expensive and leads to poor scaling. Conversely, it is easy to build scalable models on the implicit representation, but it is challenging to recognize more complicated patterns of edits and generalize well. We show experimentally in Section 6 that baseline explicit models do not scale well to large datasets of long sequences and baseline implicit models are not able to generalize well on more challenging edit sequences. Ideally we would like a model that operates on the implicit data format but generalizes well. In Section 4 we develop such a model, and in Section 6 we show that it achieves the best of both worlds in terms of scalability and generalization ability.
Notation. A state s is a sequence of discrete tokens $s = (s_0, \ldots, s_M)$ with $s_m \in \mathcal{V}$, and $\mathcal{V}$ is a given vocabulary. An edit $e^{(t)} = (p^{(t)}, c^{(t)})$ is a pair of position $p^{(t)} \in \mathbb{N}$ and content $c^{(t)} \in \{\text{DELETE}\} \cup \mathcal{V}$. An edit sequence (which we also call an instance) is an initial state $s^{(0)}$ along with a sequence of edits $e = (e^{(1)}, \ldots, e^{(T)})$. We can also refer to the implied sequence of states $(s^{(0)}, \ldots, s^{(T)})$, where $s^{(t)}$ is the state that results from applying $e^{(t)}$ to $s^{(t-1)}$.
There are two representations for position. In the explicit representation, $p^{(t)} = p_e^{(t)}$ is the index of the token in $s^{(t-1)}$ that $c^{(t)}$ should be inserted after. If $c^{(t)}$ is a DELETE symbol, then $p_e^{(t)}$ is the index of the token in $s^{(t-1)}$ that should be deleted. In the implicit representation, we assign indices $0, \ldots, M$ to tokens in the initial state and indices $M + 1, \ldots, M + T$ to the edits that are made; i.e., if a token is inserted by $c^{(t)}$ then it is given an index of $M + t$. The position $p^{(t)} = p_i^{(t)}$ is the index of the token that the edit should happen after. For an example, see Figure 2 (b).
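To make the two position conventions concrete, here is a small Python sketch of how an edit sequence is replayed. The function names are ours, and the implicit DELETE semantics (deleting the anchored token) is our assumption where the text is not explicit.

```python
def apply_explicit_edit(state, pos, content):
    """Explicit convention: `pos` indexes a token of the *current* state;
    DELETE removes that token, otherwise `content` is inserted after it."""
    if content == "DELETE":
        return state[:pos] + state[pos + 1:]
    return state[:pos + 1] + [content] + state[pos + 1:]

def apply_implicit_edits(initial, edits):
    """Implicit convention: token i of the initial state has id i, and the
    token inserted by edit t has id M + t; each position is the id of the
    token the edit happens after (DELETE semantics assumed analogous)."""
    M = len(initial) - 1
    ids = list(range(len(initial)))   # ids of live tokens, in order
    toks = list(initial)
    for t, (pos, content) in enumerate(edits, start=1):
        k = ids.index(pos)            # locate the anchor token by id
        if content == "DELETE":
            del ids[k]
            del toks[k]
        else:
            ids.insert(k + 1, M + t)
            toks.insert(k + 1, content)
    return toks

# Example: inserting "B" after the token with id 0.
apply_implicit_edits(["A", "C", "A"], [(0, "B")])  # -> ["A", "B", "C", "A"]
```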
We can now state the learning problems. The goal in the explicit problem is to learn a model that maximizes the likelihood of \( e^{(t)} \) given \( s^{(0)}, \ldots, s^{(t-1)} \) and the implicit problem is to learn a model that maximizes the likelihood of \( e^{(t)} \) given \( s^{(0)}, e^{(1)}, \ldots, e^{(t-1)} \) for all \( t \).
3 BASELINE MODELS
Baseline Explicit Model. The baseline explicit model is a two-level Long Short-Term Memory (LSTM) neural network similar to hierarchical RNN models like in Serban et al. (2016). We refer to the hidden vector for the \( m^{th} \) token at timestep \( t \) as \( h^{(t,m)} \in \mathbb{R}^D \). In the simplest version of the model, the first-level LSTM encodes each state sequence \( s^{(t)} \) in parallel and produces hidden states \( h^{(t,0)}, \ldots, h^{(t,M)} \). The second-level LSTM takes as input the sequence \( h^{(0,M)}, \ldots, h^{(T,M)} \) and produces hidden state \( \tilde{h}^{(t)} \in \mathbb{R}^D \) and output state \( o^{(t)} \in \mathbb{R}^D \) for each time step. We predict the distribution over content \( c^{(t+1)} \) as \( \text{softmax}(W^{(out)}o^{(t)}) \), where \( W^{(out)} \in \mathbb{R}^{(|\mathcal{V}|+1) \times D} \). To predict \( p_e^{(t+1)} \), we use a pointer network construction (Vinyals et al., 2015): an attention score \( \alpha^{(t,m)} \) is computed between \( o^{(t)} \) and each token-level hidden state \( h^{(t,m)} \), and \( p(p_e^{(t+1)} = m) \propto \exp \alpha^{(t,m)} \). A useful elaboration is to provide a second path of context for the first-level LSTM by concatenating the previous value of the token in the same position as an input; that is, we provide \( (s_m^{(t-1)}, s_m^{(t)}) \) as input at position \( (t, m) \), letting \( s^{(-1)} \) be a special padding symbol. See Figure 3(a) for an illustration.
Baseline Implicit Model. The natural application of the sequence-to-sequence framework is to consume the initial state \( s^{(0)} \) in the encoder and produce the sequence of \( (p^{(t)}, e^{(t)}) \) pairs in the decoder. The encoder is a standard LSTM. The decoder is slightly non-standard because each action is a pair. To deal with pairs as inputs, we concatenate an embedding of \( p^{(t)} \) with an embedding of \( e^{(t)} \). To produce pairs as outputs, we predict position and then content given position.
Formally, for position inputs, we embed each integer \( m \in \{0, \ldots, M + T - 1\} \) by taking the \( m^{th} \) column of a learned matrix \( W^{(p\text{-}in)} \in \mathbb{R}^{D \times (M + T)} \). These are concatenated with content embeddings to produce inputs to the decoder, yielding hidden state \( h^{(t)} \) at step \( t \) of the decoder. To predict position, we use a matrix \( W^{(p\text{-}out)} \in \mathbb{R}^{(M + T) \times D} \) and define the distribution over position \( p^{(t+1)} \) as \( \text{softmax}(W^{(p\text{-}out)}h^{(t)}) \). Content is predicted by concatenating \( h^{(t)} \) with \( W^{(p\text{-}in)}_{:,p^{(t+1)}} \), the embedding of the ground-truth position, to get \( \tilde{h}^{(t)} \); the predicted distribution over content \( c^{(t+1)} \) is then \( \text{softmax}(W^{(c\text{-}out)}\tilde{h}^{(t)}) \), where \( W^{(c\text{-}out)} \) maps hidden states to content logits over the \( |\mathcal{V}| + 1 \) symbols.
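The two-step prediction can be sketched in a few lines of NumPy. This is our reading of the construction, not the authors' code; note that with concatenation, the content projection matrix must have width \( 2D \).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_step(h, W_p_out, W_p_in, W_c_out, true_pos):
    """One step of the baseline implicit decoder as we read it: predict the
    position from h, then the content from h concatenated with the embedding
    of the ground-truth position (teacher forcing). W_c_out is assumed to
    have shape (|V| + 1) x 2D to accept the concatenated vector."""
    pos_dist = softmax(W_p_out @ h)                      # over M + T positions
    h_tilde = np.concatenate([h, W_p_in[:, true_pos]])   # 2D-dimensional
    content_dist = softmax(W_c_out @ h_tilde)            # over |V| + 1 symbols
    return pos_dist, content_dist
```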
4 IMPLICIT ATTENTION MODEL
Here we develop a model that operates on the implicit representation but is better able to capture the relationship of edit content to the context in which each edit was made. The model is heavily inspired by Vaswani et al. (2017). At training time, the full sequence of edits is predicted in a single forward pass. There is an encoder that computes hidden representations of the initial state and all edits, and then two decoder heads: the first decodes the position of each edit, and the second decodes the content of each edit given the position.
Figure 3: Diagrams of (a) baseline explicit model and (b, c) implicit attention model.
The cross-time operations are implemented via the positional encoding and attention operations of Vaswani et al. (2017), with masking structure so that information cannot flow backward from future edits to past edits. Here, we give an overview of the model, focusing on the intuition and overall structure. An illustration appears in Figure 3 (b, c). Further details are in the Appendix, and we refer the reader to Vaswani et al. (2017) for a full description of positional encodings, multi-head attention (MHA), and masked attention operations.
**Encode Initial State and Edits.** The first step is to encode the initial state \( s = (s_0, \ldots, s_M) \) and the edits \( e = ((p^{(1)}, c^{(1)}), \ldots, (p^{(T)}, c^{(T)})) \) into a matrix of hidden states \( \tilde{H} \in \mathbb{R}^{D \times (M+T)} \). We embed tokens and edits independently and then exchange information across the initial sequence and edits via MHA operations. After this step, we hope \( \tilde{H} \) includes all of the relevant context about each initial position and edit position. A diagram of the encoder structure appears in Figure 3 (b).
**Assemble the Chosen Contexts.** The decoder first gathers a matrix of 'contexts' \( U \in \mathbb{R}^{D \times T} \), where \( U_{:,t} = \tilde{H}_{:,p^{(t)}} \). Intuitively, this step assembles hidden representations of the contexts in which previous edits were made into a new, more compact and more relevant subsequence.
**Predict the Next Positions.** The first head of the decoder looks for patterns in the sequence of columns of \( U \). From the preceding columns it predicts a query vector \( \tilde{u}_t \) that can be interpreted as a prediction of what context is expected to be edited next, given the previous contexts that have been edited. The query vector is compared via inner product against each column of \( \tilde{H} \), as in pointer networks (Vinyals et al., 2015), to get a probability distribution over next edit positions: \( p(p^{(t)} = m) \propto \exp(\tilde{u}_t^\top \tilde{H}_{:,m}) \).
As an example, consider an edit sequence that appends \( \text{B} \) after each \( \text{A} \) in left-to-right order. If the encoder just preserves the identity and position of the corresponding content in \( \tilde{H} \), then the sequence of vectors in \( U \) will just be encodings of \( \text{A} \) with ascending positions. From this, it should be easy to learn to produce a \( \tilde{u}_t \) with content of \( \text{A} \) at a position beyond where the last edit was made.
**Predict the Next Contents.** The second head of the decoder predicts the content \( c^{(t)} \) given all previous edits and the current position \( p^{(t)} \). It first embeds the contents of all of the edits \((c^{(1)}, \ldots, c^{(T)})\) into a matrix of embeddings \( V \in \mathbb{R}^{D \times T} \). Let \( \tilde{V} \) be a shifted version of \( V \); i.e., \( \tilde{V}_{:,t} = V_{:,t-1} \), with the first column set to zero. In the vanilla decoder, we let \( A = \tilde{V} + U \), which is the simplest way of combining information about previous content embeddings with current position embeddings. We pass \( A \) through a masked MHA operation to get \( \hat{A} \). Each column \( t \) of \( \hat{A} \) is passed through a dense layer and softmax to get predicted logits for \( c^{(t)} \).
The analogical decoder (see Figure 3 (c)) is motivated by the intuition that \( V - U \) can be thought of as a matrix of analogies, because \( V \) encodes the content that was produced at each timestep, and \( U \) encodes the context. Intuitively, we might hope that \( V_{:,t} \) encodes "insert B" while the corresponding \( U_{:,t} \) encodes "there's a B and then an A", and the difference encodes "insert whatever comes before A". If so, then it might be easier to predict patterns in these analogies. To implement this, we let \( \tilde{U} \) be a shifted version of \( U \) and \( A = \tilde{V} - \tilde{U} \); then we predict the next analogies \( \hat{A} \) using masked MHA operations over \( A \). The predicted analogies are added to the unshifted current contexts, giving \( \hat{A} + U \), which
is intuitively the predicted analogy applied to the context of the position where the current edit is about to be made. From this, we apply a dense layer and softmax to predict logits for $c^{(t)}$.
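The shift-and-subtract bookkeeping above is compact in code. Below is a minimal NumPy sketch of the analogical decoder's tensor manipulations under our reading of the construction, with the masked multi-head attention stack abstracted as a callable; nothing here is the authors' code.

```python
import numpy as np

def shift_right(X):
    """Shift columns right by one timestep, zero-padding the first column."""
    out = np.zeros_like(X)
    out[:, 1:] = X[:, :-1]
    return out

def analogical_decoder_logits(U, V, W_out, masked_mha):
    """U: D x T chosen contexts; V: D x T content embeddings;
    `masked_mha` stands in for the masked multi-head attention stack."""
    A = shift_right(V) - shift_right(U)   # analogies from *previous* edits
    A_hat = masked_mha(A)                 # predicted next analogies
    H = A_hat + U                         # apply analogy to current context
    return W_out @ H                      # per-timestep content logits
```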
5 SYNTHETIC DATASETS
To study the ability of the various models to learn specific edit patterns and to isolate the ability to strongly generalize, we developed a suite of synthetic datasets based on regular expression replacements. The datasets are inspired by the kinds of edits we might see in real data, but they are simplified to allow clearer interpretation of results.
The datasets are based on generating a uniformly random initial string \( s \) of length \( L \) from a vocabulary \( \mathcal{V} \) of size \( V \). Each dataset is defined by a pattern and a replacement. The pattern defines a criterion for matching a position in \( s \), and the replacement defines what the matched text should be replaced with. Both pattern and replacement are sequences of characters from \( \mathcal{V} \) (denoted by A, B, ...) and meta characters (denoted by x, y, z), plus additional regular expression syntax: parentheses define groups in the pattern, and \N can be used in the replacement to refer to the sequence matched by the \( N^{th} \) group. Meta characters are replaced by different characters from \( \mathcal{V} \) in each edit sequence. For example, let the pattern be '(.)x' and the replacement be '\1x\1\1'. We might sample two initial strings BACA and DBBA, and sample x to be replaced with A and B, respectively. This would yield edit sequences with the following start and end states: \( BACA \rightarrow BABBCACC \) and \( DBBA \rightarrow DBDDBA \).
We define a snapshot to be the initial state and each intermediate state where a pattern has been replaced with a full replacement. We require that each synthetic instance be composed of at least four snapshots; otherwise the instance is rejected and we try again with a different random draw of the initial state and meta character replacements. In the first example above, the snapshots would be \([BACA, BABBCA, BABBCACC]\). The full edit sequence is shown in Figure 2(a). To create the edit sequences, we compute diffs between successive snapshots and apply the edits in the diff one token at a time from left to right. The set of edits that comprise a diff between two snapshots is not unique, and we use Python's built-in difflib library to disambiguate between possible sets of edits. For each synthetic dataset, some number of edits needs to be observed before the pattern becomes unambiguous. We call this the number of conditioning steps and give this many edits to the models before including their predictions in the total loss. We also create a "MultiTask" dataset that combines instances from all the above datasets, which is more challenging. A full list of the patterns and replacements that define the datasets appears in the Appendix.
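A condensed sketch of the snapshot-generation pass described above (with the meta character already bound) is shown below; `snapshots_for` is a hypothetical helper, and token-level edits would then be derived by diffing successive snapshots, e.g., with Python's difflib.

```python
import re

def snapshots_for(state, pat, rep):
    """One left-to-right pass: each snapshot applies the next pattern match
    in the current state, scanning from just past the previous replacement
    so earlier rewrites are not re-matched."""
    snaps = [state]
    scan = 0
    while True:
        m = re.search(pat, snaps[-1][scan:])
        if m is None:
            return snaps
        s, e = scan + m.start(), scan + m.end()
        replaced = m.expand(rep)
        snaps.append(snaps[-1][:s] + replaced + snaps[-1][e:])
        scan = s + len(replaced)

# With the meta character x bound to 'A':
snapshots_for('BACA', r'(.)A', r'\1A\1\1')
# -> ['BACA', 'BABBCA', 'BABBCACC']
```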
6 EXPERIMENTS
The goal of the experiments is to understand the capabilities and limitations of the models discussed above, and to evaluate them on real data. Two main factors are how accurately the models can learn to recognize patterns in sequences of edits, and how well the models scale to large data. In our first set of experiments we study these questions in a simple setting; in the second set of experiments we evaluate on real data. In this section we evaluate three methods: the explicit model, abbreviated as E; the implicit RNN model, abbreviated as IR; and the implicit attention model from Section 4 with the analogical decoder, abbreviated as IA. In the Appendix we evaluate variants of the implicit attention model that use the vanilla decoder and different update functions inside the MHA operations.
6.1 EXPERIMENTS ON SYNTHETIC DATA
Accuracy. The first question is how well the various models perform in terms of accuracy. For each of the synthetic tasks described in the Appendix we generated datasets of size 10k/1k/1k instances for train/dev/test, respectively. Initial sequences are of length \( L = 30 \), and the vocabulary size is \( V = 10 \). For all Meta problems, we give the model enough conditioning steps to let it recognize the meta character for each example. For evaluation, we measure the average accuracy of each edit conditioned on the ground-truth past edits, where both position and content must be correct for an edit to be considered correct.
The semantics are chosen to align with Python’s `re` library:
```python
import re; re.sub('(.)B', r'\1B\1\1', 'DBBA')
```
yields ‘DBDDBA’.
Table 1: Test accuracies on synthetic datasets from step and hyperparameter setting with best dev accuracy. Results that are within .5% of the best accuracy are bolded. POMP: Position-Oracle Match-Pattern; E: Explicit baseline model; IR: Implicit baseline model; IA: Improved implicit model.
<table>
<thead>
<tr>
<th rowspan="2">Task</th>
<th colspan="4">Non-Meta</th>
<th colspan="4">Meta</th>
</tr>
<tr>
<th>POMP</th><th>E</th><th>IR</th><th>IA</th>
<th>POMP</th><th>E</th><th>IR</th><th>IA</th>
</tr>
</thead>
<tbody>
<tr><td>Append1</td><td>100.0</td><td>100.0</td><td>100.0</td><td>99.9</td><td>100.0</td><td>99.9</td><td>13.9</td><td>83.0</td></tr>
<tr><td>ContextAppend11</td><td>100.0</td><td>100.0</td><td>98.6</td><td>99.9</td><td>100.0</td><td>99.9</td><td>2.5</td><td>96.5</td></tr>
<tr><td>ContextAppend13</td><td>100.0</td><td>100.0</td><td>98.6</td><td>100.0</td><td>100.0</td><td>100.0</td><td>73.5</td><td>98.9</td></tr>
<tr><td>Delete2</td><td>100.0</td><td>100.0</td><td>99.9</td><td>99.9</td><td>100.0</td><td>99.9</td><td>94.9</td><td>99.8</td></tr>
<tr><td>Flip11</td><td>100.0</td><td>99.7</td><td>97.8</td><td>98.8</td><td>99.9</td><td>99.1</td><td>10.0</td><td>92.4</td></tr>
<tr><td>Replace2</td><td>100.0</td><td>100.0</td><td>99.7</td><td>100.0</td><td>100.0</td><td>100.0</td><td>93.7</td><td>98.5</td></tr>
<tr><td>Surround11</td><td>100.0</td><td>100.0</td><td>97.2</td><td>99.8</td><td>100.0</td><td>100.0</td><td>12.1</td><td>98.5</td></tr>
<tr><td>ContextAppend31</td><td>99.9</td><td>99.5</td><td>89.6</td><td>95.9</td><td>99.9</td><td>12.1</td><td>18.0</td><td>94.3</td></tr>
<tr><td>ContextReverse31</td><td>99.9</td><td>99.6</td><td>72.6</td><td>98.1</td><td>95.9</td><td>14.4</td><td>14.4</td><td>94.4</td></tr>
<tr><td>ContextAppend33</td><td>99.7</td><td>99.6</td><td>76.5</td><td>98.9</td><td>95.9</td><td>14.4</td><td>14.4</td><td>94.4</td></tr>
<tr><td>ContextAppend52</td><td>37.6</td><td>99.2</td><td>74.6</td><td>99.3</td><td>95.9</td><td>73.3</td><td>18.0</td><td>94.3</td></tr>
<tr><td>ContextReverse51</td><td>37.6</td><td>99.0</td><td>59.5</td><td>95.2</td><td>95.9</td><td>73.3</td><td>18.0</td><td>94.3</td></tr>
<tr><td>Flip33</td><td>11.8</td><td>98.7</td><td>73.6</td><td>98.3</td><td>95.9</td><td>73.3</td><td>18.0</td><td>94.3</td></tr>
<tr><td>Surround33</td><td>11.8</td><td>99.6</td><td>79.5</td><td>99.6</td><td>95.9</td><td>73.3</td><td>18.0</td><td>94.3</td></tr>
<tr><td>MultiTask</td><td>N/A</td><td>50.0</td><td>43.2</td><td>53.7</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
</tbody>
</table>
Figure 4: (a)-(c) Time required to process sequences during training, across n-gram problems with different numbers of insertions (10, 50, 100). Note that the y-axis scale changes across plots. (d) Token-level accuracy on the real dataset when limiting predictions to the contexts where the model is most confident. See text for more details.
To better understand how strong a form of generalization is required, we develop the Position-Oracle Match-Pattern (POMP) baseline. POMP assumes an oracle identifies the position where an edit needs to be made and marks the pattern part of the current state (using terminology from Section 5). Predictions for the changes needed to turn the pattern into the replacement are then made via pattern matching. If a test pattern appears anywhere in the training data, POMP is assumed to get all predictions correct; otherwise it guesses uniformly at random. We report the expected performance of POMP. In the cases where POMP achieves low accuracy, few of the patterns seen at test time appeared at training time, which shows that these tasks require a strong form of generalization. We can also interpret POMP as an upper bound on the performance of any model based on counts of (pattern, replacement) pairs seen in the training data, as would happen if we tried to adapt n-gram models to this task.
In Table 1, we report test performance for the hyperparameter setting and step that yield best dev performance. The explicit model and the improved implicit model can solve nearly all the tasks, even those that involve meta characters and relatively long sequences of replacements. Note that the POMP accuracy for many of these tasks is near chance-level performance, indicating that most test replacements were never seen at training time. In the Appendix, we provide more statistics about the synthetic datasets that give additional explanation for the varying performance across tasks.
Evaluating Scalability. Here we explore questions of scalability as the length of the state and the number of edits grow. We use a simple dataset where the pattern is a single character and the replacement comes from a randomly sampled n-gram language model. In all cases we use \( n = 3 \) and set the number of insertions to one of \{10, 50, 100\}. Note that this simultaneously increases the size of the explicit state (\( M \)) and the number of edits (\( T \)). The scalability metric is the average time required to run training on 128 sequences on a single P100 GPU, averaged over 1000 training steps.
As shown in Figure 4, the explicit model is consistently more expensive than the implicit models, and the gap grows as the size of the data increases. The length-100 insertion sequences are ten times smaller than the sequences in the real dataset, but already there is an order of magnitude difference in runtime. The attention models generally take 50% to 75% of the time of the implicit RNN models.
6.2 EXPERIMENTS ON REAL DATA
We have obtained a large-scale dataset of code edits in Python. Each time a developer saved a file, a snapshot was recorded, making the dataset much more fine-grained than other collection methods like Git commits. First, we converted source code into tokens using Python’s built-in tokenization library and then converted tokens into subword tokens using the subword tokenizer of Vaswani et al. (2017) with a target subword vocabulary size of 4096. This yields sequences of snapshots of subword tokens that we process into sequences of edits as described in Section 5. In total the dataset includes 8 million edits from 5700 software developers. We set the number of conditioning steps to 0 for real data. We grouped snapshots into instances involving about 100 edits each, and we pruned instances that included states with more than 1k subword tokens. We divided instances randomly into train, dev, and test with proportions 80%/10%/10%.
For this experiment, we evaluate the performance of the E, IR, and IA models. The subtoken-level test accuracies are 51% for the explicit baseline (E), 55.5% for the implicit baseline (IR), and 61.1% for the improved implicit model (IA). These numbers are calculated using parameters from the step and hyperparameter setting that achieved the best subtoken-level accuracy on the dev set. Our main take-away from these experiments is that the IA model provides a good trade-off. It achieves the best accuracy, and it had fewer issues with memory constraints than the explicit model. As we move towards training on larger datasets with longer initial states, we are optimistic about building on the IA model.
Finally, we evaluate the models on their ability to auto-regressively predict a sequence of subtokens up to the next token boundary. We are particularly interested in the setting where models only make predictions when they are confident, which could be important for usability of an eventual edit suggestion system. To decode we use a greedy decoding strategy of generating the most likely next subtoken at each step until reaching the end of a token. As a confidence measure, we use the log probability assigned to the sequence of subtoken predictions.
In Figure 4d, we sort predictions based upon their confidence, and then for each possible confidence threshold we report results. The x-axis denotes the percentile of confidence scores on which we make predictions (so at confidence percentile 75%, we make predictions on the 25% of instances where the model is most confident), and the y-axis shows the average accuracy amongst the instances where a prediction is made. The IA model outperforms the other models across confidence thresholds and when the model is confident, accuracy is correspondingly high. This suggests that the model’s confidence could be useful for deciding when to trigger suggestions in, e.g., an edit suggestion tool.
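A sketch of this evaluation protocol follows: the function computes the accuracy-versus-confidence-percentile curve of Figure 4d from per-prediction log probabilities and correctness flags (the function and argument names are ours).

```python
import numpy as np

def accuracy_at_confidence(confidences, correct):
    """For each confidence percentile threshold, the accuracy over the
    predictions whose confidence exceeds it. `confidences` are sequence
    log-probabilities; `correct` holds per-prediction booleans."""
    conf = np.asarray(confidences)
    ok = np.asarray(correct, dtype=float)
    order = np.argsort(conf)                  # ascending confidence
    pcts = np.arange(0, 100, 5)
    accs = [ok[order[int(len(ok) * p / 100):]].mean() for p in pcts]
    return pcts, accs
```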
7 RELATED WORK
Sequence to Sequence and Attention Models. Modeling sequences of edits could be approached via minor modifications to the sequence-to-sequence paradigm (Sutskever et al., 2014) as in our implicit baseline. For explicit data, we can apply hierarchical recurrent neural network-like models (Serban et al., 2016) to encode sequences of sequences. The baseline explicit model is an elaboration on this idea.
Attention mechanisms (Bahdanau et al., 2015) are widely used for machine translation (Bahdanau et al., 2015), summarization (Rush et al., 2016), and recognizing textual entailment (Rocktäschel et al., 2016). Recently, Vaswani et al. (2017) showed that a combination of attention and positional encodings can obviate the need for standard recurrent connections in autoregressive neural networks. Pointer networks (Vinyals et al., 2015) use an attention-like mechanism to select positions in an input sequence. These are important components for the model in Section 4.
Generation by Iterative Refinement. There are several works that generate structured objects via iterative refinement. The structured objects could be images (Gregor et al., 2015; Denton et al., 2015; Karras et al., 2018), translations (Li et al., 2017; Novak et al., 2016; Niehues et al., 2016; Lee et al., 2018), or source code (Gupta et al., 2017; Shin et al., 2018). These methods produce “draft” objects
that are refined via one or more steps of edits. Of these, the most similar to our work is DeepFix [Gupta et al., 2017], which iteratively predicts line numbers and replacement lines to fix errors in student programming assignments. However, like other iterative refinement methods, the refinement is aimed towards a pre-defined end goal (fix all the errors), and thus is not concerned with deriving intent from the past edits. In DeepFix, for example, the only information about past edits that is propagated forward is the resulting state after applying the edits, which will not generally contain enough information to disambiguate the intent of the edit sequence. Schmaltz et al. (2017) studies the problem of grammar correction. While the use of diff-like concepts is superficially similar, like the above, the problem is phrased as mapping from an incorrect source sentence to a corrected target sentence. There is no concept of extracting intent from earlier edits in order to predict future edits.
To emphasize the difference, note that all the approaches would fail on our Meta problems, because the goal only reveals itself after some edits are observed. Any task or method that has only initial and final states would suffer from the same issues.
Software Engineering and Document Editing. Some work in software engineering has modeled edits to code, but it operates at a coarser level of granularity. For example, Ying et al. (2004), Zimmermann et al. (2005), and Hu et al. (2010) look at development histories and identify files that are commonly changed together. Our work is more fine-grained and considers the temporal sequence of edits. There is a large body of work on statistical models of source code (see Allamanis et al. (2017)). There are source code models based on \( n \)-grams (Hindle et al., 2012; Allamanis & Sutton, 2013), grammars (Allamanis & Sutton, 2014), neural networks (Raychev et al., 2014; White et al., 2015; Ling et al., 2016; Bhoopchand et al., 2016), and combinations of the two (Maddison & Tarlow, 2014; Yin & Neubig, 2017). Other notable models include Bielik et al. (2016), Raychev et al. (2016), and Hellendoorn & Devanbu (2017). All of these model a single snapshot of code.
Our formulation, in contrast, models the process of constructing code as it happens in the real world. The benefit is that there are patterns available in edit histories that are not present in a static snapshot, and the models are trained to make predictions in realistic contexts. A limitation of our work is that we treat code as a sequence of tokens rather than as tree-structured data with rich semantics like some of the above. However, source code representation is mostly orthogonal to the edits-based formulation. In future work it would be worth exploring edits to tree- or graph-based code representations.
One of the few works we know of that infers intent from a history of edits is Raza et al. (2014). Given past edits to a presentation document, the task is to predict the analogous next edit. A major difference is that they create a high-level domain specific language for edits, and predictions are made with a non-learned heuristic. In contrast, we define a general space of possible edits and learn the patterns, which is better suited to noisy real-world settings. In concurrent work, Paletov et al. (2018) studies the problem of deriving rules for usage of cryptography APIs from changes in code, which is similar in spirit to our work in trying to derive intent from a history of changes to code.
8 DISCUSSION
In this work, we have formulated the problem of learning from past edits in order to predict future edits, developed models of edit sequences that are capable of strong generalization, and demonstrated the applicability of the formulation to large-scale source code edits data.
An unrealistic assumption that we have made is that the edits between snapshots are performed in left-to-right order. An alternative formulation that could be worth exploring is to frame this as learning from weak supervision. One could imagine a formulation where the order of edits between snapshots is a latent variable that must be inferred during the learning process.
There are a variety of possible application extensions. In the context of developer tools, we are particularly interested in conditioning on past edits to make other kinds of predictions. For example, we could also condition on a cursor position and study how edit histories can be used to improve traditional autocomplete systems that ignore edit histories. Another example is predicting what code search queries a developer will issue next given their recent edits. In general, there are many things that we may want to predict about what a developer will do next. We believe edit histories contain significant useful information, and the formulation and models proposed in this work are a good starting point for learning to use this information.
Appendix
Overview
- Section A gives more information on the synthetic datasets.
- Section B gives additional details for the synthetic data experiments.
- Section C gives additional details for the real data experiments.
- Section D gives a more detailed description of the IA model and its variants.
- Section E gives more details on the variant of multihead attention used in the IA models.
- Section F provides additional experiments on the IA model variants.
A ADDITIONAL SYNTHETIC DATASET INFORMATION
The regular expression patterns and replacements that define the synthetic datasets are listed in Table 2. In this table we also show additional computed properties of the synthetic datasets.
The number of edits per replacement (EPR) helps to explain why some problems are harder than others. Problems whose context-based action is repeated only a single time, rather than multiple times, are harder and have correspondingly lower accuracies. For example, MetaContextAppend1 is harder than MetaContextAppend3. This is because accuracy is measured in a setting where the model conditions on the ground truth for past edits, so previous correct instances of the action are available when predicting future instances of the action. This is visible in the higher accuracies for those problems with multiple wildcards in their pattern and higher EPR, corresponding to multiple actions.
A higher average context displacement (ACD) is also indicative of harder problems, and correlates with lower accuracies from models E, IA-ag, and IR. ACD measures the distance between characters in the regex patterns and where they are used in the replacement.
We also observe a moderate correlation between accuracy of E, IA-ag, and IR and the accuracy of POMP on the non-meta problems. This suggests the problems where fewer of the test time patterns appeared at training time are harder than those where more of the test time patterns appeared at training time. The IR model in particular tends to have lower accuracy for the problems that have more novel patterns at test time.
Table 2: Regular expressions used to generate the synthetic datasets, and properties of the synthetic datasets. Edits per replacement (EPR) is the number of edits required for a typical replacement of the pattern with the replacement. Average context displacement (ACD) measures the average distance between a regex capture group in the pattern and where the characters in that capture group appear in the replacement.
B SYNTHETIC DATA EXPERIMENTS
B.1 ADDITIONAL EXPERIMENT DETAILS
For the synthetic dataset experiments, for each method, we performed a grid search over the learning rate (.005, .001, .0005, .0001) and hidden size (128, 256, 512). We use the Adam optimization algorithm with default TensorFlow settings for the other parameters, and we clip gradients to a global norm of 1.0.
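As a concrete illustration, the sweep amounts to a simple loop over the grid; this is a minimal sketch in which `train_and_eval` is a hypothetical stand-in for a single training run that returns dev-set accuracy.

```python
import itertools

# Grid from the text; train_and_eval is a hypothetical placeholder for one
# training run with Adam and gradient clipping at global norm 1.0.
LEARNING_RATES = [0.005, 0.001, 0.0005, 0.0001]
HIDDEN_SIZES = [128, 256, 512]

def grid_search(train_and_eval):
    best = None
    for lr, hidden in itertools.product(LEARNING_RATES, HIDDEN_SIZES):
        dev_acc = train_and_eval(learning_rate=lr, hidden_size=hidden,
                                 clip_global_norm=1.0)
        if best is None or dev_acc > best[0]:
            best = (dev_acc, lr, hidden)
    return best  # (best dev accuracy, learning rate, hidden size)
```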
B.2 ADDITIONAL SCALABILITY EXPERIMENT DETAILS
We test all models on hidden sizes (128, 256, 512) with batch size 64. The metric we use for scalability is the average time required to run training on 128 sequences on a single P100 GPU, averaged over 1000 training steps. Under this metric, the explicit model pays an additional cost for its high memory usage. On the larger datasets (and in the code edits dataset), it is not possible to fit a batch size of 64 in memory for hidden size 512, so we need to run several training steps with batch size 32. While this may appear to unfairly penalize the explicit model, this is the tradeoff that we had to face when running experiments on the real data, so we believe it to be an accurate reflection of the practical ability to scale up each type of model.
B.3 ANALYZING ERRORS
C REAL DATA EXPERIMENTS
C.1 ADDITIONAL EXPERIMENT DETAILS
In this experiment, we evaluate the performance of the E, IR, and IA models on the code edits dataset. For each of these models, we performed a grid search over the hyperparameter space. We evaluate the learning rates .005, .001, .0005, and .0001. For the implicit models, we evaluate the hidden sizes 128, 256, and 512, and batch size 64. For the explicit models, we evaluate hidden sizes 32 and 64 (with batch size 10), 128 (with batch size 5), and 256 and 512 (with batch size 1). We decrease the batch size considered as we increase the hidden size so that the model fits in memory. We trained each model variant on a single P100 GPU for 48 hours.
D IMPLICIT ATTENTIONAL MODEL - LONGER DESCRIPTION
D.1 ENCODER
In contrast to before, here the task of the encoder is to convert the \( M \) initial state tokens and the \( T \) edits into a matrix of hidden states \( \tilde{H} \in \mathbb{R}^{D \times (M+T)} \). The first \( M \) columns represent the context around each position in the initial state, and these embeddings must only depend on \( s^{(0)} \). The last \( T \) columns represent the context around each token produced by the edit sequence \( e \). The \( t^{th} \) of these columns can only depend on \( s^{(t)} \) and edits \( e_1, \ldots, e_t \).
One design constraint imposed by Vaswani et al. (2017) is that there should be no sequential dependence in the encoder, and we would like to impose the same constraint on our models to aid in scalability. That is, given embeddings of each of the inputs and edits, the hidden states for each of the inputs and edits can be computed in parallel. However, this raises a challenge of how to compute a positional embedding of the tokens generated by edits. Suppose we have an initial sequence \( ABC \) and then insert \( XY \) after \( A \). How should we encode the position of \( X \) and \( Y \) such that attention operations in the encoder can produce a hidden representation of \( Y \) encoding the information that \( Y \) comes one position after \( X \) and two positions after \( A \)?
Our approach to this problem is to compute the embedding of an edit as a sum of three terms: an embedding of the content, a positional encoding of the explicit position, and a shared learned embedding to represent that the content was produced by an edit (as opposed to being present in the initial state). Note that using the explicit position means that multiple tokens may have the same position; however, the justification is that relative positions still make sense, and the third component of the sum can be used to disambiguate between positions in the initial state and those produced by edits. From the embeddings of initial state and edit sequence, we apply a sequence of MHA operations illustrated in Figure 5(a) to produce the encoder output \( \tilde{H} \).
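As a concrete sketch of this sum-of-three-terms embedding, here is a minimal NumPy version; the edit-flag vector stands in for the shared learned embedding, and the sinusoidal encoding follows Vaswani et al. (2017). The function names are illustrative, not the paper's.

```python
import numpy as np

def sinusoidal_encoding(position, d_model):
    """Sinusoidal positional encoding as in Vaswani et al. (2017);
    assumes d_model is even."""
    i = np.arange(d_model // 2)
    angles = position / np.power(10000.0, 2.0 * i / d_model)
    enc = np.empty(d_model)
    enc[0::2] = np.sin(angles)
    enc[1::2] = np.cos(angles)
    return enc

def embed_edit(content_embedding, explicit_position, edit_flag_embedding):
    """Sum of three terms: content, explicit position, and a shared learned
    flag marking the token as edit-produced rather than initial-state."""
    d_model = content_embedding.shape[0]
    return (content_embedding
            + sinusoidal_encoding(explicit_position, d_model)
            + edit_flag_embedding)
```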
D.2 Decoder
Given \( \tilde{H} \), we can use the implicit position indexing illustrated in Figure 5(b) to reference the hidden state at the position of edit \( t \) as \( \tilde{H}_{:,p^{(t)}} \). Thus we can assemble a matrix of “contexts” \( \tilde{U} \in \mathbb{R}^{D \times T} \) where \( \tilde{U}_{:,t} = \tilde{H}_{:,p^{(t)}} \) for each \( t \). We can also assemble a matrix of “contents” of each edit \( \tilde{V} \in \mathbb{R}^{D \times T} \), where \( \tilde{V}_{:,t} \) is an embedding of \( e^{(t)} \). Intuitively, we hope the context vectors capture the relevant context around the position that was chosen (e.g., it is after a \( B \)), while the content vectors represent the content of the edit (e.g., in that context, insert a \( B \)).
To predict position, we first shift \( \tilde{U} \) forward in time to get \( \bar{U} \), where \( \bar{U}_{:,t} = \tilde{U}_{:,t-1} \) (letting \( \bar{U}_{:,0} = \mathbf{0} \)). We then apply a masked MHA operation to get \( \hat{U} = \text{MHA}(\bar{U}) \). The result is used as queries in a pointer network-like construction for predicting position. Specifically, we let \( \alpha_{t,m} = \langle \hat{U}_{:,t}, \tilde{H}_{:,m} \rangle \) be the compatibility between the predicted context \( \hat{U}_{:,t} \) and the \( m^{th} \) context in \( \tilde{H} \), and then define the predicted distribution over positions to be \( P(p^{(t)} = m) = \frac{\exp \alpha_{t,m}}{\sum_{m'} \exp \alpha_{t,m'}} \). See the left column of Figure 5(b) and (c) for an illustration of the position prediction component of the decoder.
To predict content \( e^{(t)} \), we also condition on \( p^{(t)} \), following our convention of predicting position first and then content given position. We consider two content decoders. The “Vanilla” decoder (see Figure 5(b)) takes the un-shifted contexts matrix \( \tilde{U} \) used in the position decoder; recall that its \( t^{th} \) column encodes the context where the \( t^{th} \) edit was made. This matrix is added to a shifted version \( \bar{V} \) of the content matrix (i.e., \( \bar{V}_{:,t} = \tilde{V}_{:,t-1} \)). Thus, the \( t^{th} \) column of the sum has information about the content of the previous edit and the context of the current edit. This summed matrix is passed through a separate masked MHA module that passes information only forward in time to yield \( \hat{V} \), which incorporates information about previous contents and previous plus current contexts. From this, we predict the distribution over \( e^{(t)} \) as \( \text{softmax}(W^{(c\text{-out})}\hat{V}_{:,t}) \).
The “Analogical” decoder (see Figure 5(c)) is motivated by the intuition that \( \tilde{V} - \tilde{U} \) can be thought of as a matrix of analogies, because \( \tilde{V} \) encodes the content that was produced at each timestep and \( \tilde{U} \) encodes the context. We might hope that \( \tilde{V}_{:,t} \) encodes “insert ‘B’” while the corresponding \( \tilde{U}_{:,t} \) encodes “there’s an ‘A’”, and the difference encodes “insert whatever comes before an ‘A’”. If so, then it might be easier to predict patterns in these analogies. To implement this, we construct previous analogies \( \bar{A} = \bar{V} - \bar{U} \), then predict the next analogies as \( \hat{A} = \text{MHA}(\bar{A}) \). The predicted analogies are added to the un-shifted current contexts to give \( \hat{A} + \tilde{U} \), which is intuitively the predicted analogy applied to the context of the position where the current edit is about to be made. From this, we apply a dense layer and softmax to predict the distribution over \( e^{(t)} \).
High-level pseudocode is provided below. `s2s_mha` denotes exchanging information between tokens within the initial state; `s2e_mha` denotes passing information from tokens in the initial state to edits; `e2e_mha` denotes exchanging information between edits and applies masking to avoid passing information backward in time.
```python
# Encoder
state_embeddings = embed_tokens(initial_state_tokens)
state_hiddens = s2s_mha(state_embeddings)
edit_embeddings = embed_edits(edit_positions, edit_contents)
edit_hiddens = e2e_mha(edit_embeddings)
edit_hiddens = s2e_mha(state_hiddens, edit_hiddens)
H = concat(state_hiddens, edit_hiddens)
# Decoder
U = gather(H, target_positions)
# Predict position
prev_U = shift_forward_in_time(U) + timing_signal(U)
predicted_U = e2e_mha(prev_U)
position_logits = pointer_probs(query=predicted_U, keys=H)
# Predict content
V = embed_tokens(target_contents)
if vanilla_decoder:
    prev_V = shift_forward_in_time(V) + timing_signal(V)
    predicted_V = e2e_mha(U + prev_V)
else:
    prev_V = shift_forward_in_time(V) + timing_signal(V)
    predicted_V = e2e_mha(prev_V - prev_U) + U
```
E ADDITIONAL MULTIHEAD ATTENTION MODULE DETAILS
In its most general form, the multihead attention (MHA) module described in the main text takes as input three matrices:
- A keys matrix $K \in \mathbb{R}^{D \times M}$,
- A values matrix $V \in \mathbb{R}^{D \times M}$, and
- A queries matrix $Q \in \mathbb{R}^{D \times N}$.
If only one matrix is provided, then it is used as $K$, $V$, and $Q$. If two matrices are provided, the first is used as $Q$, and the second is used as $K$ and $V$.
Deviating slightly from Vaswani et al. (2017) by grouping together surrounding operations, the module is composed of the following operations:
- Add positional encoding to $Q$,
- Apply attention to get a result matrix $R \in \mathbb{R}^{D \times N}$,
- Apply an aggregation operation $\text{Agg}(Q, R)$ and return the result.
The positional encoding is the same as in Vaswani et al. (2017): we add their sinusoidal positional embedding of integer \( n \) to the \( n^{th} \) column of \( Q \). The attention operation is the multihead attention operation described in Vaswani et al. (2017), where we use 8 heads throughout. The aggregation operation is either a simple sum \( Q + R \) or a GRU operation. That is, we treat \( Q \) as a matrix of previous hidden states and \( R \) as inputs, and then we apply the update that a GRU would use to compute the current hidden states. The potential benefit of this construction is that there is a learned gating function that can decide how much to use \( Q \) and how much to use \( R \) in the final output of the module in an instance-dependent way.
Table 3: Comparing variations of the Improved Implicit models. Table reports test accuracy at hyperparameter and step that achieved best dev accuracy. IA-v: Vanilla decoder with sum aggregator; IA-vg: Vanilla decoder with GRU aggregator; IA-a: Analogy decoder with sum aggregator; IA-ag: Analogy decoder with GRU aggregator.
<table>
<thead>
<tr>
<th></th>
<th>IA-v</th>
<th>IA-vg</th>
<th>IA-a</th>
<th>IA-ag</th>
</tr>
</thead>
<tbody>
<tr>
<td>Append1</td>
<td>99.9</td>
<td>99.9</td>
<td>100.0</td>
<td>99.9</td>
</tr>
<tr>
<td>Append3</td>
<td>100.0</td>
<td>99.9</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>ContextAppend1</td>
<td>99.9</td>
<td>99.5</td>
<td>99.5</td>
<td>99.9</td>
</tr>
<tr>
<td>ContextAppend3</td>
<td>99.9</td>
<td>99.8</td>
<td>99.8</td>
<td>100.0</td>
</tr>
<tr>
<td>ContextReverse1</td>
<td>98.0</td>
<td>98.5</td>
<td>98.3</td>
<td>98.5</td>
</tr>
<tr>
<td>ContextReverse3</td>
<td>99.2</td>
<td>98.9</td>
<td>98.9</td>
<td>98.9</td>
</tr>
<tr>
<td>ContextReverse5</td>
<td>97.9</td>
<td>97.5</td>
<td>97.3</td>
<td>98.1</td>
</tr>
<tr>
<td>MetaAppend1</td>
<td>96.5</td>
<td>96.1</td>
<td>96.5</td>
<td>95.2</td>
</tr>
<tr>
<td>MetaAppend3</td>
<td>99.9</td>
<td>99.9</td>
<td>99.9</td>
<td>99.9</td>
</tr>
<tr>
<td>MetaFlip1</td>
<td>99.3</td>
<td>98.8</td>
<td>98.9</td>
<td>98.8</td>
</tr>
<tr>
<td>MetaFlip3</td>
<td>98.8</td>
<td>98.2</td>
<td>98.9</td>
<td>98.3</td>
</tr>
<tr>
<td>MetaReverse1</td>
<td>100.0</td>
<td>99.9</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>MetaReverse3</td>
<td>99.8</td>
<td>99.8</td>
<td>99.7</td>
<td>99.8</td>
</tr>
<tr>
<td>MetaReverse5</td>
<td>99.6</td>
<td>99.5</td>
<td>99.5</td>
<td>99.6</td>
</tr>
<tr>
<td>MetaSurround1</td>
<td>95.8</td>
<td>95.2</td>
<td>92.8</td>
<td>96.3</td>
</tr>
<tr>
<td>MetaSurround3</td>
<td>99.1</td>
<td>98.6</td>
<td>98.3</td>
<td>98.9</td>
</tr>
<tr>
<td>MetaSurround5</td>
<td>93.1</td>
<td>91.6</td>
<td>92.5</td>
<td>94.3</td>
</tr>
<tr>
<td>Metareverse1</td>
<td>98.1</td>
<td>97.7</td>
<td>98.1</td>
<td>97.5</td>
</tr>
<tr>
<td>Metareverse3</td>
<td>97.5</td>
<td>97.8</td>
<td>97.7</td>
<td>97.5</td>
</tr>
<tr>
<td>Metareverse5</td>
<td>93.0</td>
<td>91.1</td>
<td>92.4</td>
<td>94.4</td>
</tr>
<tr>
<td>MetaDelete2</td>
<td>93.0</td>
<td>92.2</td>
<td>91.6</td>
<td>92.4</td>
</tr>
<tr>
<td>MetaFlip1</td>
<td>99.5</td>
<td>99.4</td>
<td>99.5</td>
<td>99.8</td>
</tr>
<tr>
<td>MetaFlip3</td>
<td>98.6</td>
<td>98.3</td>
<td>99.7</td>
<td>98.5</td>
</tr>
<tr>
<td>MetaSurround1</td>
<td>98.1</td>
<td>96.9</td>
<td>96.5</td>
<td>98.5</td>
</tr>
<tr>
<td>MetaSurround3</td>
<td>99.1</td>
<td>99.1</td>
<td>99.1</td>
<td>99.0</td>
</tr>
</tbody>
</table>
When we use GRU aggregation, we append a “g” to the method name; when we use sum aggregation, we do not append a character. Paired with the Vanilla (v) and Analogical (a) decoders described in the main text, this gives all the implicit model variants:
- IA-v: vanilla decoder, sum aggregation,
- IA-vg: vanilla decoder, GRU aggregation,
- IA-a: analogical decoder, sum aggregation,
- IA-ag: analogical decoder, GRU aggregation.
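For concreteness, the two aggregators can be sketched as follows; this is a minimal NumPy sketch in which the weight matrices are hypothetical \( D \times D \) parameters and biases are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sum_aggregate(Q, R):
    """The plain aggregator used by the variants without a trailing "g"."""
    return Q + R

def gru_aggregate(Q, R, Wz, Uz, Wr, Ur, Wh, Uh):
    """Column-wise GRU update treating Q as previous hidden states and R
    as inputs; the weight matrices are hypothetical D x D parameters."""
    z = sigmoid(Wz @ R + Uz @ Q)          # update gate
    r = sigmoid(Wr @ R + Ur @ Q)          # reset gate
    h = np.tanh(Wh @ R + Uh @ (r * Q))    # candidate hidden states
    return (1.0 - z) * Q + z * h          # learned gate mixes Q and candidate
```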
F ADDITIONAL IMPROVED IMPLICIT MODEL EXPERIMENTS
We repeated the synthetic experiments from the main paper with all of the implicit attentional model variants. Results appear in Table 3.
Sahara: Guiding the Debugging of Failed Software Upgrades
Rekha Bachwani, Olivier Crameri†, Ricardo Bianchini, Dejan Kostić†, and Willy Zwaenepoel†
Rutgers University
{rbachwan,ricardob}@cs.rutgers.edu
{olivier.crameri,dejan.kostic,willy.zwaenepoel}@epfl.ch
Abstract—Today, debugging failed software upgrades is a long and tedious activity, as developers may have to consider large sections of code to locate the bug. We argue that failed upgrade debugging can be simplified by exploiting the characteristics of upgrade problems to prioritize the set of routines to consider. In particular, previous work has shown that differences between the computing environment in the developer’s and users’ sites cause most upgrade problems. Based on this observation, we design and implement Sahara, a system that identifies the aspects of the environment that are most likely the culprits of the misbehavior, finds the subset of routines that relate directly or indirectly to those aspects, and selects an even smaller subset of routines to debug first. To achieve its goals, Sahara leverages feedback from a large number of user sites, machine learning, and static and dynamic source analyses. We evaluate Sahara for three real upgrade problems with the OpenSSH suite, one synthetic problem with the SQLite database, and one synthetic problem with the uServer Web server. Our results show that the system produces accurate recommendations comprising only a small number of routines.
I. INTRODUCTION
Modern software systems are complex and comprise many interacting and dependent components. Frequent upgrades are required for some or all components to fix bugs, patch security vulnerabilities, add or remove features, and perform other critical tasks. Unfortunately, many of the upgrades either fail or produce unwanted behavior. A survey conducted by Crameri et al. [8] showed that 90% of system administrators perform upgrades at least once a month, and that 5–10% of these upgrades are problematic. Interestingly, they also found that the most common source of upgrade problems is the difference between the environment (i.e., version of operating system and libraries, configuration settings, environment variables, hardware, etc.) at the developer’s site and the users’ sites. Such problems are difficult (or maybe impossible) to prevent because the developer cannot foresee, much less test her software for, every possible environment in which the software might be used.
When upgrades misbehave at some user sites, the developers receive bug reports and complaints. In some cases, the developers may also receive logs of failed executions and/or core dumps. Developers often undergo several exchanges with the users to gather all the pertinent information. Thereafter, the developers examine the information to locate the likely causes of the misbehavior. This process is long and tedious, as developers may have to consider large chunks of code to locate the root cause of the misbehavior.
In this paper, we propose Sahara, a system that simplifies the debugging of environment-related upgrade problems by pinpointing the subset of routines and variables that is most likely the source of misbehavior. Sahara’s design was motivated by two observations: (1) since the problem was caused by one or more aspects of the user environment, it is critical to identify these suspect aspects and their effects throughout the code; and (2) since the previous version of the software behaved properly, it is critical to identify the behavioral differences between the previous and upgraded versions.
Given these observations, the root cause of an upgrade problem is most likely to be in the code that is both (1) affected by the suspect aspects of the environment and (2) whose behavior has deviated after the upgrade. To isolate this code, Sahara combines information collected from many users of the software, machine learning techniques, and static and dynamic source analyses. The machine learning and the static analysis run at the developer’s site, whereas the data collection and dynamic analysis run at the users’ sites (for those users who are willing to run Sahara). Sahara targets C applications written for Unix-like operating systems.
In more detail, Sahara applies feature selection [34] to the environment and upgrade success/failure information received from users to rank the aspects of the environment that are most likely to be the source of the misbehavior. Then, it uses static analysis to find the set of variables whose values derive directly or indirectly from the suspect aspects. The routines in which these variables are used become the first set of potential culprits. At this point, Sahara deploys instrumented versions of the current and upgraded version of the code to the user sites that reported misbehaviors. It then runs the instrumented versions automatically (and with the same inputs) to collect information about all routine calls and returns. Using this information, it uses value spectra [35] to identify the set of routines that caused the behavior to deviate from one execution to the other at each misbehaving site. These sets of routines are also considered suspects. Finally, Sahara intersects the sets of suspect routines resulting from the static and dynamic analyses; those in the intersection should be debugged first.
To evaluate Sahara, we study three real upgrade problems with the OpenSSH suite, one synthetic problem in the SQLite database engine, and one synthetic problem with the uServer Web server. Our results demonstrate that Sahara produces recommendations that always include the routines responsible for the bugs. The exact number of recommended routines depends on the characteristics of the information received from users. In experiments where we varied these characteristics widely, Sahara recommends 2-21 suspect routines that should be debugged first. These numbers can be 20x smaller than the number of routines affected by the upgrades. Compared to static and dynamic analyses alone, Sahara reduces the numbers of suspect routines by 1.4x-6x and 14x-40x, respectively. Given its accuracy and these large reductions, we expect that Sahara can significantly reduce debugging time in practice.
Perhaps the most similar work to Sahara is [17]. It collects execution information in the form of predicates, such as the number of times a branch is taken, and ranks the predicates based on their correlation with the failures. Developers can then inspect the highly ranked predicates and use them as hints to locate bugs. Jiang and Su [15] built upon this infrastructure to compute the control flow paths connecting the highly ranked predicates. Unfortunately, these approaches do not consider the user environment when ranking predicates, and require users to constantly run instrumented code to sample the predicates and send feedback, both of which have overheads.
In summary, our main contributions are:
- We introduce a new approach for simplifying upgrade debugging that is driven by user environments and includes a novel combination of techniques;
- We build a system, called Sahara, that implements the approach; and
- We evaluate Sahara for five upgrade problems with three widely used applications.
II. SAHARA: PRIORITIZING UPGRADE DEBUGGING
A. A Motivating Example
To make our exposition more concrete, let us look at a simple example in Listing 1. The example reads the value of an environment variable using a call to `getenv()` (line 18). It then checks whether the length of the resulting string is smaller than or equal to 9 (line 4). Depending on the outcome of the comparison, a different output is produced (lines 21-24).
Let us assume that the upgrade simply changes the comparison in line 4 from "<=" to "<". This upgrade will fail at user sites where the $SHELL environment variable is set to /bin/bash or /bin/tcsh, but not /bin/csh or /bin/ksh, for instance. More generally, the upgrade will fail where the length of the value of the $SHELL environment variable is exactly 9. However, the program ran successfully at these sites before the upgrade. This upgrade failure is similar to the ProxyCommand bug [27] that we detail in Section III-A.
The failure has two interesting characteristics. First, the upgrade fails only at a subset of user sites, which may have been the reason the bug went undetected during development. Second, despite the fact that the two versions of the code are input-compatible, the execution behavior changes with the upgrade both in terms of the path executed and the output produced.
Given these characteristics, identifying the aspects of the environment that correlate with the failure is a necessary first step for efficiently diagnosing the failure. In this simple example, the name of the shell is the aspect of the environment that triggers the failure. It is also important to identify the variables and routines in the code that are directly or indirectly affected by the environment. Note that the name of the shell is initially assigned to the uname array; only later does variable env2 become related to the environment. Thus, variables uname and env2, as well as routines main and checklength, are suspect. However, identifying these suspects is not sufficient, because the program behaved correctly before the upgrade was applied in the same environment. We also need to determine how the upgraded version of the program has deviated from the current version. This analysis would then show that routines checklength and secondfunction behave differently in the two versions, meaning that they are also suspects. The root cause of the failure is most likely to be contained in the code that is affected by the suspect environment and whose behavior has changed after the upgrade, i.e., routine checklength. This routine is exactly where the bug is in our example.
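Listing 1 itself is a C program that is not reproduced here; the following Python analogue, with hypothetical details wherever the listing cannot be recovered, captures the structure described above (the line numbers cited in the text refer to the original listing).

```python
import os

RETVAL_LONG = 10  # hypothetical sentinel for the "long" branch

def checklength(uname):
    env2 = len(uname)   # env2 derives from the environment via uname
    if env2 <= 9:       # the buggy upgrade changes "<=" to "<"
        return env2
    return RETVAL_LONG

def secondfunction(retval):
    print("short shell name" if retval != RETVAL_LONG else "long shell name")

def main():
    uname = os.environ.get("SHELL", "")  # analogue of the getenv() call
    retval = checklength(uname)          # suspect routine in the example
    secondfunction(retval)

if __name__ == "__main__":
    main()
```

With $SHELL set to /bin/bash (length 9), the pre-upgrade comparison takes the first branch while the buggy post-upgrade comparison takes the second, so both checklength's return value and secondfunction's argument deviate, exactly as the later value spectra analysis reports.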
B. Design and Implementation
**Overview.** Figure 2 illustrates the steps involved in Sahara. First, Sahara deploys the upgrade to any users that request it (step 1). As the software executes at each user’s site, Sahara collects information about the environment and inputs used (step 2). At the end of the execution, Sahara obscures and then transfers the collected environment information (the inputs are never transferred on the network) to the developer’s site, along with a success/failure flag provided by the user (step 3). (Obviously, some users may decide not to allow any sort of information to be collected or provided to Sahara.) The information about the environment includes the version of the operating system, the version of the libraries, the configuration settings, the name and version of the other software packages installed, and a description of the hardware. A failure flag may mean that (a) the upgrade could not be properly installed or executed, (b) the upgrade caused incorrect behavior or a crash, or (c) the upgrade caused another software to misbehave [8].
Now suppose that the upgrade misbehaved at at least one user site. With the environment and success/failure information at the developer’s site, Sahara runs a machine learning algorithm to determine the aspects of the environment that are most likely to have caused the misbehavior (step 4). Next, based on def-use static analysis, Sahara isolates the variables in the code that derive directly or indirectly from those aspects; the routines that use these variables are considered suspect (step 5).
Sahara then deploys instrumented versions of the current and upgraded code to the user sites that reported failures (step 6). At each of those sites, Sahara can now execute both versions with the inputs collected in step 2 and collect dynamic routine call/return information (step 7). Sahara then compares the logs from the two executions to determine the routines that exhibited different dynamic behavior (step 8). This step is done at the failed user sites to avoid transferring the potentially large execution logs back to the developer’s site. Sahara then transfers the list of routines that deviated at each failed user site back to the developer’s site (step 9); the routines on these lists are considered suspect as well.
Finally, Sahara intersects the set of suspects resulting from the static and dynamic analyses (step 10). This set is reported to the developer as the routines to debug first. If the problem is not found in this first set, other suspect routines should be considered.
Next, we detail the implementation of these steps.
**Upgrade deployment, tracing, and user feedback (steps 1-3).** Upgrade deployment in Sahara is trivial. The upgraded code is available via a Web interface and can be downloaded as a package/patch by any user that wants it.
Sahara uses the Mirage tracing infrastructure, which has been described in detail in [8]. For this reason, next we only describe the most important aspects of it. The infrastructure identifies the “environmental resources” an application depends on and then fingerprints (i.e., derives a compact representation for) them.
The infrastructure creates a log of all the external resources accessed by an application by intercepting process-creation, read/write, file-descriptor-related, and socket-related system calls. For environment variables, it intercepts the calls to the getenv() function in libc. The log may include data files, in addition to environmental resources. To separate them out, Sahara uses a four-part heuristic to identify the environmental resources from multiple runs of the application. The heuristic identifies as environmental resources: (1) all files accessed in the longest common prefix of the sequence of files accessed in the logs; (2) all files accessed read-only in all logs; (3) all files of certain types (such as libraries) accessed in any single log; and (4) all files named in the package of the application to be upgraded. This heuristic allows Sahara to exclude unimportant files, such as temporary and log files, that are written but never read by the application. To complement the heuristic, Sahara also includes an API that allows the developer to include or exclude files or directories. In addition to the data accessed during application execution, Sahara collects information about the hardware and software installed, such as the type and amount of memory, CPU data, the types and number of devices present, and the list of kernel symbols and modules.
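A minimal sketch of the four-part heuristic, assuming each run's log is an ordered list of accessed paths; the parameter names and the file-type test are illustrative, not Sahara's actual interface.

```python
def environmental_resources(access_logs, read_only_per_log, package_files,
                            env_suffixes=(".so",)):
    """Classify logged files as environmental resources via the
    four-part heuristic described above (illustrative names)."""
    # (1) Files in the longest common prefix of the access sequences.
    prefix_files = set()
    for paths in zip(*access_logs):
        if len(set(paths)) == 1:
            prefix_files.add(paths[0])
        else:
            break
    # (2) Files accessed read-only in every log.
    read_only = (set.intersection(*map(set, read_only_per_log))
                 if read_only_per_log else set())
    # (3) Files of certain types (e.g., libraries) accessed in any log.
    typed = {p for log in access_logs for p in log if p.endswith(env_suffixes)}
    # (4) Files named in the package of the application being upgraded.
    return prefix_files | read_only | typed | set(package_files)
```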
Again as in Mirage, Sahara creates a concise representation (fingerprint) for each environmental resource. Depending on the resource type, a different fingerprint is generated. First, Sahara provides parsers that produce the fingerprint for common types such as libraries and executables. A parser knows how to extract the relevant information from a file based on its type. Second, the developer may provide parsers for certain application-specific resources, such as configuration files. Third, if there are no parsers for a resource, the fingerprint is a sequence of hashes of chunks of the file that are content-delineated using Rabin fingerprinting [30]. In practice, we expect most resources to be handled by parsers, so resorting to Rabin fingerprinting should be the exception.
In each fingerprint, the name of the resource serves as a key and the hash of its contents as the value. The parsers for the most common resource types produce fingerprints in the following formats:
- Environment variables: Name:HASH
- Libraries: Name:HASH+Version
- Configuration files: Filename.KEY:HASH
- Binary files: Filename:CHUNK_HASH
The content-based fingerprints are of the form Filename:CHUNK_HASH. These fingerprints are more coarse-grained than what is possible with parsers, since a parser can choose the granularity at which the fingerprint for an environmental resource is produced. For instance, the granularity at which binary files are fingerprinted is typically coarser than that for configuration files. We use SHA-1 to compute the fingerprints of the resources.
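The fingerprint formats above are straightforward to produce; this is a minimal sketch, assuming the Rabin content-defined chunking happens elsewhere and supplies the chunks.

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def fingerprint_env_var(name: str, value: str) -> str:
    return f"{name}:{sha1_hex(value.encode())}"            # Name:HASH

def fingerprint_library(name: str, contents: bytes, version: str) -> str:
    return f"{name}:{sha1_hex(contents)}+{version}"        # Name:HASH+Version

def fingerprint_config_key(filename: str, key: str, value: str) -> str:
    return f"{filename}.{key}:{sha1_hex(value.encode())}"  # Filename.KEY:HASH

def fingerprint_binary(filename: str, chunks) -> list:
    # One Filename:CHUNK_HASH entry per content-defined chunk; the Rabin
    # chunking step itself is omitted here.
    return [f"{filename}:{sha1_hex(chunk)}" for chunk in chunks]
```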
For the users that choose to participate, Sahara sends the tracing infrastructure and the parsers to their sites. During the first several executions of the upgraded software (the number of executions can be defined by the developer), Sahara collects the environment resource information and produces the respective fingerprints. After each of these executions, Sahara also queries the user about whether the upgrade has succeeded or failed. We ask the user to provide this success/failure flag, because it may be difficult to determine failure in some cases. For example, a software misbehavior is considered a failure, even if it does not cause a crash or any other OS-visible event. In addition, the upgrade may cause another software to misbehave [8].
When the user provides a success/failure flag, Sahara sends this information, along with the environment resource fingerprints, back to the developer’s site. This data represents the profile of the corresponding user site. After the first several executions, Sahara turns its data collection off to minimize overheads. User profiles from all sites serve as the input to the feature selection step. Section III systematically studies the impact of user profiles with various characteristics.
**Feature selection (step 4).** Based on the information received from the user sites, this step selects environment resources (called features) with the strongest correlation to the observed upgrade failures. The fingerprints are never “unhashed” during feature selection (or after it); it is enough for Sahara to know how many different fingerprints there are for each feature.
Sahara uses the decision tree algorithm with feature ranking from the WEKA tool [www.cs.waikato.ac.nz/ml/weka/] for feature selection. The algorithm builds a decision tree by first selecting a feature to place at the root node, and creating a tree branch for each possible value of the feature. This splits up the dataset into subsets, one for each value of the feature. The choice of the root feature is based on Gain Ratio [29], a measure of a feature’s ability to create subsets with homogeneous classes. In Sahara, there are only two classes: success or failure. The Gain Ratio is higher for the features that create subsets with mostly success or mostly failure user profiles. For instance, in the example of Listing 1, the root feature would be the SHELL environment variable. The subsets that include SHELL strings of length other than 9 are successes, whereas those with strings of exactly 9 characters are failures.
After selecting the root feature, the process is repeated recursively for each branch, using only those profiles that actually reach the branch. When all the profiles at a node have the same classification, the algorithm has completed that part of the tree. The output of the algorithm is a set of features, their Gain Ratios, and their ranks.
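A minimal sketch of the Gain Ratio computation and the 30% SER cutoff described below; this is a simplification of the C4.5-style criterion WEKA implements, not its exact code.

```python
import math
from collections import Counter

def entropy(items):
    counts = Counter(items)
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain_ratio(feature_values, labels):
    """Gain Ratio of one feature (its fingerprint values across profiles)
    with respect to the success/failure labels."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_values, labels):
        groups.setdefault(v, []).append(y)
    split_entropy = sum(len(g) / n * entropy(g) for g in groups.values())
    info_gain = entropy(labels) - split_entropy
    split_info = entropy(feature_values)  # penalizes many-valued features
    return info_gain / split_info if split_info > 0 else 0.0

def select_sers(ratios):
    """Keep features whose Gain Ratio is within 30% of the top-ranked one."""
    top = max(ratios.values())
    return [f for f, r in ratios.items() if r >= 0.7 * top]
```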
To validate the feature selection, Sahara uses 10-fold cross-validation [16] to compute the standard deviation of the ranks of each feature. When the standard deviations of the top-ranked features are high, Sahara warns the developer that its results are not to be trusted, i.e. the reason for the failures is unlikely to be the environment.
When this condition is not met, Sahara considers all the features that have Gain Ratios within 30% of the highest ranked feature as **Suspect Environment Resources (SERs)**. These SERs serve as input to the static analysis step. We assess the impact of the accuracy of the feature selection step in Section III.
**Static analysis and suspect routines (step 5).** Sahara analyzes the upgraded software using the C Intermediate Language (CIL) [24]. Specifically, it implements two CIL modules, the call-graph module and the def-use module. As the name suggests, the call-graph module computes a whole-program static call graph by traversing all the source files, a routine at a time. Every node in the call graph is a routine, and its children nodes are the routines it calls. The root of the call graph is always the main() routine.
The def-use module creates def-use chains [1] for each SER. A def-use chain links all the variables that derive directly or indirectly from one SER. Each array is handled as a single variable, whereas struct and union fields are handled separately. Figure 3 shows the def-use chain (thin arrows) for our example program, linking variables uname, env2, and retval.
To find suspect routines, Sahara traverses all the routines in the order they appear in the call graph, starting with the root. During the course of the traversal, Sahara maintains three lists: (1) a list of global suspect variables (SuspectVars); (2) a list of per-routine suspect variables (LsuspectVars); and (3) a list of routines that are suspect (SuspectRoutines). SuspectVars is initialized with the variables corresponding to SERs.
Sahara proceeds through each routine statement-by-statement, starting with the root routine. For every variable access, it checks whether the variable is a suspect or depends on any suspect, either directly or indirectly. If so, the accessed variable becomes a suspect. If it is a local variable, it is added to the LsuspectVars of the routine where the access appears; otherwise, it is added to SuspectVars. The routine containing the access is added to SuspectRoutines. In addition, if a routine calls another with a suspect variable as a parameter, the caller is added to SuspectRoutines and the corresponding formal parameter is added to the LsuspectVars of the callee. The callee becomes a suspect if the suspect parameter is used in the routine, and not otherwise. Furthermore, a routine becomes suspect if the return value of any of its callees is suspect and is used in the routine. Similarly, a routine becomes suspect if any parameter passed by reference to one of its callees becomes suspect and is used in the routine.

This step outputs SuspectRoutines after the entire call graph has been traversed.
This step provides the developer with a set of routines that are highly correlated with the failures. For the example in Listing 1, main and checklength are the two suspect routines. The block arrows in Figure 3 show why these routines were included as suspects.
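A minimal sketch of this propagation, under the simplifying assumptions that the global and per-routine suspect lists are merged into one set and that parameter passing is modeled as ordinary definitions and uses:

```python
def find_suspects(call_graph_order, statements, ser_variables):
    """Simplified suspect-propagation pass. `statements[r]` yields
    (defined_var, used_vars) pairs for routine r; `call_graph_order`
    is the traversal order starting from the root routine."""
    suspect_vars = set(ser_variables)
    suspect_routines = set()
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for routine in call_graph_order:
            for defined, used in statements[routine]:
                if suspect_vars.intersection(used):
                    suspect_routines.add(routine)
                    if defined is not None and defined not in suspect_vars:
                        suspect_vars.add(defined)
                        changed = True
    return suspect_routines
```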
**Creating and distributing instrumented versions (step 6).** After the SuspectRoutines are identified, Sahara generates the instrumented versions of the current and upgraded versions of the software.
Sahara uses CIL to automatically instrument the application. The instrumentation is introduced by two new CIL modules, instrument-calls and ptr-analysis. The instrument-calls module inserts calls to our C runtime library to log routine signatures for all the routines executed in a particular run. A routine’s signature consists of the number, name, and values of its parameters, its return value, and any global state that is accessed by the routine. The global state comprises the number, name, and values of all the global variables accessed by the routine. This module works well for logging parameters of basic data types. However, in order to correctly log pointer variables and variables of complex data types, we have implemented the ptr-analysis module. This module inserts additional calls to our C library to keep track of all the heap allocations and deallocations.
**Re-execution, value spectra analysis, and deviated routines (steps 7-9).** As we do not want to transfer inputs or large logs across the network, these steps are performed at the failed users’ sites themselves. To do so, Sahara first deploys infrastructure to those sites that is responsible for re-execution and value spectra analysis. It then transfers the instrumented binaries of the current and upgraded versions.
Sahara leverages Mirage’s re-execution infrastructure, which has been described in detail in [8]. Specifically, this infrastructure executes the instrumented binaries of both versions at the failed user sites, feeding them the same inputs that had caused the upgrade to fail. These inputs were collected in the logs recorded during step 2. To allow for some level of non-determinism during re-execution, Sahara maps the recorded inputs to the appropriate input operations (identified by their system calls and thread ids), even if they are executed in a different order in the log.
As the instrumented versions execute, their dynamic routine call/return information is collected. Listing 4 shows the log for the current version, whereas Listing 5 does so for the upgraded version of the program.
With these routine call/return logs, Sahara determines the set of routines, called DeviatedRoutines, whose dynamic behavior has deviated after the upgrade. Specifically, we implement fDiff, a diff-like tool that takes two execution logs as input and converts each of them into a sequence of routine signatures. It uses the longest common subsequence algorithm to compute the difference between the two sequences of signatures. A routine has deviated if one or more of the following differs between the two versions: (1) the number of arguments passed to it; (2) the value of any of its arguments; (3) its return value; (4) the number of global variables accessed by it; or (5) the value of one or more global variables accessed by it. This notion of deviation is similar to that proposed for value spectra [35].
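A minimal sketch of this comparison, assuming each log entry has already been reduced to a hashable signature tuple; Python's difflib matching-block algorithm stands in for the LCS computation here.

```python
import difflib

def deviated_routines(log_a, log_b):
    """Sketch of fDiff. Each log is a list of hashable routine signatures,
    e.g. tuples (name, args, return_value, globals_accessed)."""
    matcher = difflib.SequenceMatcher(a=log_a, b=log_b, autojunk=False)
    deviated = set()
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            # Signatures outside the common subsequence deviated in their
            # arguments, return value, or accessed global state.
            deviated.update(sig[0] for sig in log_a[i1:i2])
            deviated.update(sig[0] for sig in log_b[j1:j2])
    return deviated
```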
In Listings 4 and 5, two routines have deviated: checklength and secondfunction. Checklength has deviated in its return value (line 8), whereas secondfunction has deviated in its argument (line 13).
Sahara transfers the DeviatedRoutines list to the developer’s site for the final step.
**Intersection and list of primary suspects (step 10).** Finally, Sahara computes the union of the DeviatedRoutines from the failed user sites. It then intersects this larger set with SuspectRoutines. The intersection forms the set of prime suspects, i.e., the routines most likely to contain the root cause of the upgrade failure. For the example, checklength is the prime suspect, despite the fact that all three routines have some relationship to the users’ environment. The root cause of the failure is indeed in checklength.
C. Discussion
**Sahara and other systems.** Sahara simplifies the debugging of upgrades that fail due to the user environment. As such, Sahara is less comprehensive than systems that seek to identify more classes of software bugs (e.g., [32]). However, Sahara takes advantage of its narrower scope to guide failed upgrade debugging more directly towards environment-related bugs (which are the most common in practice [8]).
In essence, we see Sahara as complementary to other systems. In fact, an example combination of systems is the following. Steps 1-4 of Sahara would be executed first. If the user environment is likely the culprit (as determined by the output of step 4), the other steps are executed. Otherwise, another system is activated.
**Dealing with multiple bugs.** The feature selection algorithm is the only part of Sahara that could be negatively affected by an upgrade with multiple bugs. The other components of Sahara are unaffected because (1) the information about each execution (the resource fingerprints and a success/failure flag) represents at most one bug, (2) static analysis is independent of the number of bugs, (3) each dynamic analysis finds deviations associated with a single bug, and (4) the union+intersection step is independent of the number of bugs.
Sahara is effective when faced with multiple bugs, even when feature selection does not produce the ideal results. To understand this, consider the two possible scenarios: (1) all bugs are environment-related; and (2) one or more bugs are unrelated to the environment.
When all bugs are environment-related and involve the same environment resources, feature selection works correctly and Sahara easily produces the prime suspects for all bugs. If different bugs relate to different sets of environment resources, feature selection could misbehave. In particular, if there is not enough information about all bugs, feature selection could misrank the environment resources that are relevant to the less frequent bugs to the point that they do not become SERs.
Fragments of Listings 4 and 5 (the routine call/return logs for the two versions):

```plaintext
1. Function main numArgs 0
2. Globals at ENTRY: 0
3. Function checklength numArgs 0
...
6. Return: retVal Size: 4 Type: int Value: 10
7. Global: env2 Size: 4 Type: int Value: 9
8. Return: retVal Size: 4 Type: int Value: 9
...
10. Return: retVal Size: 4 Type: int Value: 10
```
This would cause the remaining steps to eventually produce the prime suspects for the more frequent bugs only. After those bugs are removed, Sahara can be run again to tackle the less frequent bugs. This second time, feature selection would rank the environment resources of the remaining bugs more highly. Other systems rely on similar multi-round approaches for dealing with multiple bugs, e.g. [11].
When one or more bugs are not related to the environment, feature selection could again misbehave if there is not enough information about the bugs that are environment-related. This scenario would most likely cause feature selection to low-rank all environment resources. In this case, the best approach is to resort to a different system, as discussed above. In contrast, if there is enough information about the environment-related bugs, feature selection would select the proper SERs. Despite this good behavior, the dynamic analysis at some failed sites would identify DeviatedRoutines corresponding to bugs that are not related to the environment. However, those routines would not intersect with those from the static analysis, leading to the proper prime suspect results.
**Limitations of Sahara’s current implementation.** Sahara currently implements simple versions of its components. As a proof-of-concept, the goal of this initial implementation is simply to demonstrate how to combine different techniques in a useful and novel way. However, as we discuss below, more sophisticated components can easily replace the existing ones.
Sahara limits the amount of user information transferred to the developer site to the resource fingerprints (inputs are never transferred). In our current implementation, the fingerprints are transferred in hashed form (SHA-1), which does not provide foolproof privacy guarantees. However, Sahara can easily use more sophisticated schemes for these transfers. Regardless of the privacy scheme, the bandwidth required by these transfers (and that of the DeviatedRoutines) should be negligible. Sahara requires substantially more communication bandwidth for transferring the re-execution and value spectra infrastructures, but only for failed user sites.
Sahara employs static and dynamic analyses to narrow the set of routines that are likely to contain the root cause of the failure. However, under certain conditions, these analyses may be unable to do so. In the worst case, all routines may be affected by the SERs, making static analysis ineffective. Similarly, all routines could be found to deviate from their original behaviors. Fortunately, these worst-case scenarios are extremely unlikely in a single upgrade.
Execution replay at the failed sites is currently performed without virtualization. Using virtual machines would enable us to automatically handle applications that have side-effects, but at the cost of becoming more intrusive and transferring more data to the failed sites. Sahara can be extended to use replay virtualization. On the positive side, Sahara performs a single replay at a failed site, which is significantly more efficient than the many replays of techniques such as delta debugging [38].
Our current approach for handling replay non-determinism is very simple: Sahara tries to match the recorded inputs to their original system calls when re-executing each version of the application. Internal non-determinism (e.g., due to random numbers or race conditions) is currently not handled and may mislead the dynamic analysis if it changes: the number or value of the arguments passed to any routines, the number or value of the global variables they touch, or their return values. Sahara can be combined with existing deterministic replay systems to eliminate these problems.
Finally, Sahara guides the debugging process by pinpointing a set of routines to debug first. Pinpointing a single routine or even a single line causing the failure may not even be possible, since the root cause of the failure may span multiple lines and routines. Moreover, the systems that attempt such pinpointing (e.g., [17], [32], [38]) often incur substantial overhead at the users’ sites, such as running instrumented code all the time, checkpointing state at regular intervals, and multiple replays.
III. EVALUATION
In this section, we describe our methodology and evaluate Sahara by analyzing three real bugs in OpenSSH, a synthetic bug in SQLite, and a synthetic bug in uServer.
We chose OpenSSH because it is widely deployed in diverse user environments. Its upgrades are fairly frequent, typically once every 3-6 months [26]. OpenSSH comprises many components: (1) `sshd`, the daemon that listens for connections coming from clients; (2) `ssh`, the client that logs into and executes commands on a remote machine; (3) `scp`, the program to copy files between hosts; (4) `sftp`, an interactive file transfer program atop the SSH transport; and (5) utilities such as `ssh-add`, `ssh-agent`, `ssh-keysign`, `ssh-keyscan`, `ssh-keygen`, and `sftp-server`. In all, OpenSSH has around 400 distinct files and 50-70K lines of code (LOC).
SQLite is the most widely deployed SQL database [31]. It implements a serverless, transactional SQL engine.
A. Methodology
**OpenSSH: Port forwarding bug.** Port forwarding is commonly used to create an SSH tunnel. To set up a tunnel, one forwards a specified local port to a port on the remote machine. SSH tunnels provide a means to bypass firewalls, so long as the site allows outgoing connections. The bug [4] was a regression in OpenSSH version 4.7. When using SSH port forwarding for large transfers, the transfer aborts. Some users observed the following buffer error:
```plaintext
buffer_get_string: buffer error
```
These transfers executed successfully until version 4.6, but the behavior changed after upgrading to version 4.7. The failure was observed at a small subset of user sites. The abort was not reproducible at the developer’s site, so the developer needed volunteer users to reproduce the bug and test its fix. A correct and complete fix was produced on the second attempt and verified by the users almost three months after the bug was first reported [4].
The failure was caused by the following issues: (a) the users had enabled port forwarding in the `ssh` configuration file; (b) a change in the default window size from 128KB to 2MB in the `ssh` client code in version 4.7; (c) the port forwarding code advertising the default window size as the default packet size; and (d) the maximum packet size being set to 256KB in `sshd`. Given these characteristics, when users issued large transfers through the `ssh` tunnel, some of the packets were larger than the daemon’s maximum, resulting in the buffer error after the upgrade. The port forwarding code using the default window size as the default packet size was not an issue before the upgrade, as the size was always below the maximum.
**OpenSSH: X11 forwarding bug.** This bug [3] manifested when users upgraded to OpenSSH version 4.2p1 from 4.1p1 and tried to start X11 forwarding. The failure was observed at the user sites that had SSH forwarding support enabled and the command was executed in the background. Users observed the following error:
```plaintext
xterm X11 error: Can't open display: localhost:10.0
```
In version 4.2p1, developers modified the X11 forwarding code to fix a number of X11 channel leaks, including destroying the X11 channels whose session has ended. As a result, when the X11 forwarding process is started in the background, the child (and the channel) starting it would exit immediately. It took the developers more than two weeks to fix this bug [3].
**OpenSSH: ProxyCommand bug.** The `ProxyCommand` option specifies the command that will be used by the SSH client to connect to the remote server. The bug [27] was a regression in OpenSSH version 4.9; `ssh` with `ProxyCommand` would fail for some users with a "No such file" error.
Until version 4.7, `ProxyCommand` would use `/bin/sh` to execute the command. However, in version 4.9, the code changed to use the `$SHELL` environment variable, causing the command to fail at user sites where `$SHELL` was set to an empty string. The developers fixed this bug in one week, after one user had already done a large amount of debugging [27].
**SQLite and uServer bugs.** To demonstrate Sahara's generality, we synthetically created one buggy upgrade for SQLite version 3.6.14.2 and one for uServer version 0.6.0. Note that these two bugs are trivial and could be identified by simpler tools than Sahara. However, our goal is simply to demonstrate that Sahara works without modification for a variety of applications.
Before the upgrade of SQLite, the option `.echo on` caused its shell to output each command before executing it. After our synthetic upgrade, it does not output the command when executing in interactive mode. The bug we injected into the upgrade of uServer is not environment-related: it is a typo in the function that parses user input, causing dropped requests and occasional crashes.
We do not present complete results for the `ProxyCommand`, SQLite, or uServer bugs due to space limitations. However, we include a summary of their results at the end of the next subsection.
**Upgrade deployment.** To simulate a real-world deployment of a software upgrade to a large number of users with varied environment settings, we collected environment data from 87 machines at our site across two clusters. The settings of the machines within a cluster are similar, but they differ across clusters.
We used the methodology described in Section II-B to identify the environmental resources in OpenSSH, SQLite, and uServer. Table I lists the parsers used to parse and fingerprint these environmental resources. CHUNKS and CHUNKS2 chunk and fingerprint binary files, such as the kernel symbols; KEYVAL parses and chunks any file in the key-delimiter-value format, such as the shell environment or CPU data; LIBS chunks and fingerprints all the libraries; LINES.c parses and fingerprints a file one line at a time, such as the file containing the list of kernel modules; and SSH and SSHD are application-specific parsers to parse and fingerprint the `ssh_config` and `sshd_config` configuration files, respectively. It took us only 8 person-hours to implement these parsers. SQLite and uServer did not require any application-specific parsers. The environmental resources of a single machine, parsed/chunked and fingerprinted, along with the success/failure flag constitute a single user profile.
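As an illustration of what such a parser does, here is a minimal KEYVAL-style sketch: each key-delimiter-value line becomes one feature whose value is fingerprinted by hashing. This is our own sketch, not Sahara's implementation:

```java
// Minimal KEYVAL-style parser sketch (ours, not Sahara's): each line
// "key<delim>value" becomes one feature, fingerprinted with SHA-1.
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;

public class KeyValParser {
    public static Map<String, String> fingerprint(Path file, String delimiter) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        Map<String, String> features = new LinkedHashMap<>();
        for (String line : Files.readAllLines(file)) {
            String[] kv = line.split(delimiter, 2);
            if (kv.length != 2) continue;   // skip malformed lines
            byte[] digest = md.digest(kv[1].trim().getBytes(StandardCharsets.UTF_8));
            features.put(kv[0].trim(), HexFormat.of().formatHex(digest));
        }
        return features;
    }
}
```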
In our experiments, we assume by default that 20 profiles include environment settings that can activate a bug, whereas 67 of them do not. We study the impact of this parameter below.
**User site environments.** To evaluate Sahara's behavior in the face of the uncertainties that may occur in practice, we perform six types of experiments: `random_perfect`, `random_imperfect_60`, `random_imperfect_20`, `realconfig_perfect`, `realconfig_imperfect_60`, and `realconfig_imperfect_20`. In the `random_perfect` experiments, the values of all the environment resources related to the application are chosen at random, except for the resources that relate directly to the bug. Moreover, the 20 profiles with environment settings that can activate the bug are classified as failed profiles, whereas the other 67 are classified as successful ones. As a result, there is 100% correlation between those resources and the failure. This is the best case for feature selection in Sahara, as it finds the minimum set of SERs.
In the two random_imperfect cases, the environment settings are the same as in the random_perfect case. However, not all profiles with environment settings that cause the failure are labeled as failures. In particular, only 60% of these profiles are labeled failures in the random_imperfect_60 case, and only 20% in the random_imperfect_20 case. These imperfect experiments mimic the situation where some users simply have not activated the bug yet, possibly because they have not exercised the part of the code that uses the problematic settings. These scenarios may lead feature selection to pick more SERs than in the random_perfect case.
In the three types of experiments described above, the application-related environment includes random values. For more realistic (realconfig) scenarios, we downloaded eight different complete OpenSSH configuration files from the Web. For each of the bugs, we modify three of these files to include the settings that activate the bug. One of these eight configuration files (three with problematic settings and five with only good settings) is assigned to each of the 87 user profiles randomly, but in the same proportion as before: 20 users should get problematic settings and 67 should not. In the realconfig_perfect case, all the 20 profiles with problematic settings are labeled as failures, whereas the 67 others are labeled as successful. In the realconfig_imperfect_60 and realconfig_imperfect_20 experiments, only 60% and 20% of the profiles with these settings are labeled as failures, respectively. The realconfig experiments are likely to lead to more SERs than the random ones. We do not study realconfig scenarios for SQLite because the bug we inject into it is synthetic.
In the six types of experiments described above, we assume that there are 20 users with problematic settings for the OpenSSH-related environment. To assess the impact of having different numbers of sites with these bad settings, we consider four more types of experiments: random_perfect_30, random_perfect_10, realconfig_perfect_30, and realconfig_perfect_10. The 30 and 10 suffixes refer to the number of profiles that exhibit the environment settings that can cause the upgrades to fail.
In all of our experiments, we consider the features ranked within 30% of the highest ranked feature as suspects. In addition, we use inputs that we know will activate the bugs.
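A sketch of this selection rule, under the assumption that each feature carries a numeric rank score (the names below are ours, not Sahara's):

```java
// Suspect selection sketch: keep every feature whose rank score is within
// a slack of the highest-ranked feature (slack = 0.3 gives the 30% rule).
import java.util.*;

public class SuspectFeatures {
    public static List<String> select(Map<String, Double> rankScores, double slack) {
        double best = Collections.max(rankScores.values());
        List<String> suspects = new ArrayList<>();
        for (Map.Entry<String, Double> e : rankScores.entrySet()) {
            if (e.getValue() >= best * (1.0 - slack)) {
                suspects.add(e.getKey());
            }
        }
        return suspects;
    }
}
```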
### TABLE I: Parsers used to parse and fingerprint the environmental resources
<table>
<thead>
<tr>
<th><strong>Parser Name</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>CHUNKS</td>
<td>Chunks and fingerprints a binary file into 1KB chunks</td>
</tr>
<tr>
<td>CHUNKS2</td>
<td>Chunks and fingerprints a file into variable sized chunks</td>
</tr>
<tr>
<td>KEYVAL</td>
<td>Chunks and fingerprints a key-value pair file</td>
</tr>
<tr>
<td>LINES.c</td>
<td>Parses and fingerprints a file one line at a time</td>
</tr>
<tr>
<td>LIBS</td>
<td>Chunks and fingerprints a library and all its dependencies</td>
</tr>
<tr>
<td>SSHD</td>
<td>Application-specific parser to fingerprint the sshd_config file</td>
</tr>
<tr>
<td>SSH</td>
<td>Application-specific parser to fingerprint the ssh_config file</td>
</tr>
</tbody>
</table>
### B. Results
**OpenSSH: Port forwarding bug.** Recall that this bug was introduced in the ssh code by version 4.7. This version has 58K LOC and 1529 routines (729 routines in ssh). The diff between versions 4.6 and 4.7 comprises approximately 400 LOC and 65 routines. Sahara identified 101 environmental resources, including the parameters in the configuration files, the operating system and library dependencies, hardware data, and other relevant files. Many of these resources, such as library files, are split into smaller chunks; for others, such as configuration files, each parameter is considered a separate feature. Overall, there are 325 features, forming the input to the feature selection step.
Table II shows the results for each of the analyses in Sahara and for all techniques combined, for every experiment. The feature selection step results in just 1 feature (out of 325) chosen as suspect in the random_perfect, random_imperfect_60, and random_imperfect_20 cases. In these experiments, the environment resource that actually determines the failures, the configuration parameter Tunnel, was the only suspect because the other environmental resources were assigned random values in all user profiles. This resulted in a very high correlation between the failure and this resource, even in the random_imperfect cases. The Tunnel parameter corresponds to 4 suspect variables in ssh.
In contrast, in the realconfig_perfect, realconfig_imperfect_60 and realconfig_imperfect_20 experiments, 3 features are selected: configuration parameters Tunnel, BatchMode, and RSAAuthentication. Features BatchMode and RSAAuthentication have 3 possible values: yes, no, or missing. In the real configurations we collected, it so happened that RSAAuthentication was set to yes, and BatchMode to no in two of the three failed profiles, causing them to be highly correlated with the failure. Recall that we did not assign these values; we retrieved the configurations from the Web and changed only the setting of the Tunnel parameter. These three parameters correspond to 8 suspect variables in ssh.
The static analysis results in 12 suspect routines in the random cases, and 22 in the realconfig cases. The 12 routines comprise those that (1) read the configuration file (main and process_config_line) and initialize the environment of the ssh client (initialize_options and fill_default_options); (2) create, enable, or disable a tunnel (tun_open and a2tun); (3) place the tunnel data into a buffer or a packet (buffer_put_int and packet_put_int); and (4) enable the port forwarding over this tunnel and create a channel for it (ssh_init_forwarding, channel_new, and client_request_tun_fwd).
Routine channel_new contains the root cause of this failure.
In the realconfig cases, the same 12 routines are suspect, in addition to those affected by RSAAuthentication (check_host_key, confirm, key_free, key_sign, load_identity_file, ssh_userauth1, try_challenge_response_authentication, try_password_authentication, try_rsa_authentication, and userauth_pubkey). BatchMode is used only during the initialization in ssh, so it does not produce other suspects.
The dynamic analysis identifies 124 routines whose behavior has deviated when going from version 4.6 to 4.7. Note that the number of deviations is higher than the number of routines that actually changed. The reason is that the command succeeds before the upgrade and many more routines are invoked, as compared to after the upgrade when the command fails. In our fDiff implementation, the routines that were not called after the upgrade are considered deviations.
The intersection of SuspectRoutines and DeviatedRoutines contains only 6 routines in the random cases and 7 routines in the realconfig cases. In the random cases, the four routines pertaining to reading the configuration file and setting up the environment, and the two routines pertaining to enabling or disabling the tunnel, were pruned out by the intersection; their behavior did not change after the upgrade. In the realconfig_perfect case, confirm was the additional routine identified as a primary suspect. The 6 or 7 primary suspects reported by Sahara include the actual culprit (routine channel_new).
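The pruning itself amounts to a set intersection; a minimal sketch with hypothetical names:

```java
// Primary suspects = intersection of static-analysis suspects and
// dynamic-analysis deviations, as described above. A sketch, not Sahara code.
import java.util.*;

public class PrimarySuspects {
    public static Set<String> intersect(Set<String> suspectRoutines,
                                        Set<String> deviatedRoutines) {
        Set<String> primary = new HashSet<>(suspectRoutines);
        primary.retainAll(deviatedRoutines);   // keep routines flagged by both analyses
        return primary;
    }
}
```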
From the top six rows in Table II, we can see that the number of primary suspects output by Sahara is 2x-3x lower than that by static analysis, 17x-20x lower than that by dynamic analysis, and 9x-10x lower than the number of routines that were modified in the upgrade. Furthermore, we can see that Sahara is resilient to users that do not report their upgrades to have failed despite having problematic settings for the environment resources that cause the failure.
### TABLE II: Results of each analysis and of all techniques combined, for every experiment
<table>
<thead>
<tr>
<th>Bug</th>
<th>Experiment</th>
<th>diff Routines</th>
<th>SERs (feature selection)</th>
<th>Suspect Routines (static analysis)</th>
<th>Deviated Routines (dynamic analysis)</th>
<th>Primary suspects (Sahara)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Port</td>
<td>random_perfect</td>
<td>65</td>
<td>1</td>
<td>12</td>
<td>124</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>random_imperfect_60</td>
<td>65</td>
<td>1</td>
<td>12</td>
<td>124</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>random_imperfect_20</td>
<td>65</td>
<td>1</td>
<td>12</td>
<td>124</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>realconfig_perfect</td>
<td>65</td>
<td>3</td>
<td>22</td>
<td>124</td>
<td>7</td>
</tr>
<tr>
<td></td>
<td>realconfig_imperfect_60</td>
<td>65</td>
<td>3</td>
<td>22</td>
<td>124</td>
<td>7</td>
</tr>
<tr>
<td></td>
<td>realconfig_imperfect_20</td>
<td>65</td>
<td>3</td>
<td>22</td>
<td>124</td>
<td>7</td>
</tr>
<tr>
<td>X11</td>
<td>random_perfect</td>
<td>137</td>
<td>1</td>
<td>18</td>
<td>157</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>random_imperfect_60</td>
<td>137</td>
<td>1</td>
<td>18</td>
<td>157</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>random_imperfect_20</td>
<td>137</td>
<td>1</td>
<td>18</td>
<td>157</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>realconfig_perfect</td>
<td>137</td>
<td>3</td>
<td>22</td>
<td>157</td>
<td>7</td>
</tr>
<tr>
<td></td>
<td>realconfig_imperfect_60</td>
<td>137</td>
<td>3</td>
<td>20</td>
<td>157</td>
<td>6</td>
</tr>
<tr>
<td></td>
<td>realconfig_imperfect_20</td>
<td>137</td>
<td>3</td>
<td>20</td>
<td>157</td>
<td>6</td>
</tr>
</tbody>
</table>
In the realconfig_perfect experiment, Sahara selects 3 features: configuration parameters X11Forwarding, AuthorizedKeysFile, and ChallengeResponseAuthentication. In the realconfig_imperfect_60 and realconfig_imperfect_20 cases, Sahara also selects three features: configuration parameters X11Forwarding, AuthorizedKeysFile, and PidFile. AuthorizedKeysFile and PidFile were assigned the default value in two out of the three failed real user profiles, whereas ChallengeResponseAuthentication was set to no value in two of them. These four features correspond to seven actual variables in sshd.
The static analysis results in 18 suspect routines in the random_perfect and random_imperfect cases, 21 in realconfig_perfect, and 20 in the realconfig_imperfect cases. The 18 routines comprise those that: (1) read the configuration file (auth_clear_options and auth_parse_options) and initialize the environment of sshd (initialize_server_options and fill_default_server_options); (2) authenticate the incoming client connection with the options specified and set up the connection (do_authenticated1, do_child, do_exec, do_exec_pty, do_exec_no_pty, and do_login); (3) start a packet for X11 forwarding (packet_start); and (4) set up X11 forwarding, create the channel, process X11 requests, and do the cleanup (server_input_channel_req, session_input_channel_req, session_x11_req, session_setup_x11_fwd, session_close, and disable_forwarding).
In the realconfig cases, all the 18 routines mentioned above are suspect, in addition to those affected by AuthorizedKeysFile (authorized_keys_file and expand_authorized_keys) and ChallengeResponseAuthentication (do_authentication). PidFile did not result in additional suspect routines, because it is used once in the initialization to store the pid of sshd, and never again. As a result, the realconfig_perfect case has 1 more routine reported as suspect than the realconfig_imperfect cases.
The dynamic analysis identifies 157 routines whose behavior deviated when going from version 4.1p1 to 4.2p1. Again, the number of deviations is higher than the number of modified routines, because the upgraded code fails much earlier than the original one.
The intersection of the two analyses results in only 6 routines (do_child, do_exec, do_exec_no_pty, packet_start, session_setup_x11_fwd, and session_close) in the random cases, and 7 (do_authentication2 is the additional routine) in the realconfig cases. 3 of the 6 (or 7) primary suspect routines are key to understanding the failure. However, the single modification in the upgrade that directly causes the failure is in the session_setup_x11_fwd routine.
From these results, we can see that the number of primary suspects found by Sahara is at least 3x lower than when using static analysis alone, at least 20x lower than when using dynamic analysis alone, and 15x lower than the number of routines that were actually modified. Again, these results illustrate Sahara’s ability to focus the debugging of failed upgrades on a small number of routines, even when many users do not experience failures despite having environment resources that could trigger bugs in the upgrade.
**Impact of the number of profiles with failure-inducing settings.** Table III presents the "perfect" results from these experiments. The default results (random_perfect and realconfig_perfect) and the dynamic analysis results are included for clarity. As expected, the number of SERs (as well as of suspect routines and primary suspects) tends to increase when we lower the number of profiles with failure-inducing settings. Interestingly, the realconfig results for the X11 forwarding bug show that lowering noise (going from realconfig_perfect to realconfig_perfect_10) can indeed improve results as well.
**Impact of feature selection accuracy.** Feature selection is a major component of Sahara in that it defines the scope of the static analysis. Recall that Sahara’s feature selection considers all the features that are within 30% of the highest ranked feature as SERs by default. Here, we study two additional scenarios: (1) all features that are within 50% of the highest ranked feature are considered SERs, and (2) all OpenSSH configuration parameters are considered SERs. These scenarios cause an increasing number of unnecessary SERs.
For the port forwarding bug and scenario (1), the number of SERs remains the same in all the random cases and the realconfig_perfect case. In the realconfig_imperfect_60 case, the SERs increase from 3 to 4 and the prime suspects from 7 to 14. In the realconfig_imperfect_20 case, the SERs increase from 3 to 6 and the prime suspects from 7 to 18. In scenario (2), the number of SERs is 22 (all ssh parameters) and the number of prime suspects is 34.
For the X11 forwarding bug and scenario (1), the number of SERs remains the same in all the random cases. In the realconfig_perfect case, the SERs increase to 9 and the prime suspects to 10. In the realconfig_imperfect_60 case, the SERs increase to 11 and the prime suspects to 10, whereas in the realconfig_imperfect_20 case, the SERs increase to 12 and the prime suspects to 11. In scenario (2), the number of SERs increases to 51 (all sshd parameters) and the number of prime suspects to 43.
These results illustrate the behavior we expected: the less accurate feature selection is, the more prime suspects Sahara finds. Defining a few more SERs than necessary does not increase the number of prime suspects excessively (roughly by 2x at most, in comparison to our default results). However, adding too many unnecessary SERs can increase the number of prime suspects by 6x-7x, as in scenario (2).
**OpenSSH: ProxyCommand bug.** This bug affected ssh in version 4.9, which comprises 58K LOC and 1535 routines (712 routines in ssh). The upgrade to this version modified 122 routines. We performed the same 10 experiments with this upgrade as above. Depending on the type of experiment, feature selection produces 2-5 SERs and static analysis produces 10-29 suspect routines. Dynamic analysis produces 284 deviated routines. In contrast, Sahara outputs 7 or 11 primary suspects in all but one experiment (realconfig_perfect_10, for which it recommends 21 routines). Overall, Sahara improves on static analysis by 1.4x and on dynamic analysis by 14x-40x for this bug.
**SQLite bug.** We injected this bug in SQLite version 3.6.14.2, which comprises 67K LOC and 1338 routines. The upgrade modified two routines. We ran only the random family of experiments, since this was not a real upgrade bug. The results show that feature selection identified 2-3 SERs, static analysis produced 12-13 SuspectRoutines, and dynamic analysis identified 14 DeviatedRoutines. Sahara outputs 2 primary suspects in each of the three random cases (exactly the routines that were modified); one of the prime suspects is the root cause of the failure. Again, although trivial, these experiments illustrate that Sahara can be used without modification for a variety of applications.
**uServer bug.** We injected this bug in uServer version 0.6.0, which comprises 37K LOC and 404 routines. The upgrade modified 10 routines. Again, we ran only the random family of experiments, since this was not a real upgrade bug. The experiments stopped at the feature selection step, since the ranks of the top-ranked features consistently exhibit high standard deviations. Thus, feature selection properly flags this bug as unrelated to the environment.
**Summary.** The Sahara results for the five bugs and the different imperfections we studied indicate that our system may significantly reduce the time and effort required to diagnose the root cause of upgrade failures.
IV. RELATED WORK
A. Upgrade Deployment and Testing
A few studies [8], [21], [22] have proposed automated upgrade deployment and testing techniques. McCamant and Ernst [21], [22] automatically identify incompatibilities when upgrading a component in a multi-component system. However, none of these works attempted to isolate the root cause of the incompatibilities. Similarly, Cramer et al. [8] did not seek to determine the root cause of upgrade failures at the users' sites.
B. Automated Debugging
**Troubleshooting misconfigurations.** The idea of PeerPressure [33] and Snitch [23] is to identify the root cause of software misconfigurations using machine learning techniques. PeerPressure performs statistical analysis of Windows registry snapshots from a large number of machines. After a misconfiguration is detected, PeerPressure re-executes the program in a special tracing environment to capture the relevant registry data. It then uses Bayesian estimation to compare each misconfigured machine’s registry values with those of the machines that can successfully run the same program. Rare registry values that correlate well with misconfigurations are coerced to the more common values. Snitch introduces Interactive Decision Trees (IDT) to allow the developer to guide the troubleshooting process, starting from configuration traces from many users.
ConfAid [2] helps debug misconfigurations without information from other users. Instead, it instruments the binaries to track the causal dependencies between application-level configuration parameters and output behavior. The binaries, parameters, and outputs of interest are specified manually. These three systems assume that the software is correct, but was misconfigured by its users. Sahara is fundamentally different; it seeks to help find upgrade bugs that are triggered by proper configurations and environments. Moreover, Sahara goes well beyond finding the environment resources most likely to be related to a bug (i.e., feature selection).
Qin et al. [28] observe that many bugs are correlated with the “execution environment” (which they define to include configurations and the behavior of the operating and runtime systems). Based on this observation, they propose Rx, a system that tries to survive bugs at run time by dynamically changing the execution environment. A follow-up to Rx, Triage [32] goes further by dynamically changing the execution environment while attempting to diagnose failures at users’ sites.
Sahara focuses on upgrade bugs or misbehavior, rather than software bugs in general as Rx and Triage do. For this reason, Sahara can be much more specific about which variables and routines should be considered first during debugging. Moreover, Sahara can handle bugs due to aspects of the environment that would be difficult (or impossible) to change without semantic knowledge of the application. Finally, Rx and Triage do not leverage data from many users, machine learning, or static analysis. Using any of these features could speed up Triage’s diagnosis. In fact, as we argue in Section II-C, Sahara is complementary to systems like Triage.
**Statistical debugging with user site feedback.** Several previous papers [7], [11], [18], [19], [20], [38] rely on low-overhead, privacy-preserving instrumentation infrastructures to provide user execution data back to developers. For example, Cooperative Bug Isolation (CBI) [17] constitutes a feedback loop between developers and users. Developers provide instrumented software to users, and users provide data about that software's behavior in their environments. The instrumentation consists of predicates placed at different points of the program. Developers then use sophisticated statistical and regression algorithms to rank predicates based on how well they correlate with bugs. Based on this ranking, developers manually try to find the root cause of the bugs. To reduce the manual work, the authors of [15] extended CBI to find the control flow paths connecting the highly ranked predicates.
Sahara also relies on information gathered at user sites, but the data collection only lasts temporarily to lower overheads. In addition, Sahara restricts its statistical analysis (feature selection) to the aspects of the environment that may have caused an upgrade to misbehave. Moreover, Sahara goes further by automatically relating the results of the statistical analysis to the variables and routines that most likely caused the misbehavior.
**Dynamic invariants.** Some studies [10], [12] automatically extract likely program invariants based on dynamic program behavior (possibly after running multiple times with different inputs to increase coverage). The detection of invariants may involve significant overhead. Software can be deployed to users with instrumentation to check the invariants. Developers can then use the invariants and any violations of them to aid in debugging, just as the predicates above can be used.
Sahara focuses on misbehavior relating to the user’s environment, involves less overhead than these approaches, and automatically guides debugging.
**Delta debugging.** Delta debugging aims to resolve regression faults automatically and effectively. Several studies [7], [14], [38] have focused on comparing program states of failed and successful runs to identify the space of variables or rank program statements that are correlated with the failure.
Sahara’s dynamic analysis also considers the difference between two runs of a program. However, our approach is driven by environment resources and combines information from a collection of users, machine learning, static analysis, and dynamic analysis. Furthermore, unlike delta debugging, Sahara requires neither instrumenting the production code nor replaying the execution multiple times at the users’ sites.
**Dynamic behavior deviations.** Xie and Notkin [35] proposed program spectra to compare versions and get insights into their internal behavior. Harrold et al. [13] found that the deviations between spectra of two versions frequently correlate with regression faults.
Sahara uses value spectra to compare the execution call traces from before and after the upgrade is applied. However, merely identifying the deviations in the upgraded version leads to a large number of candidates for exploration, as our experiments demonstrate. The same is likely to occur for most large applications or major upgrades. Sahara further narrows down the deviation sources by cross-referencing them with suspect routines found through information from users, machine learning, and static analysis.
The aim of [25], [37] is to detect the root cause of regression failures automatically. Ness and Ngo [25] used a linear search algorithm on the fully-ordered source management archive to identify a single failure-inducing change. In [37], the authors proposed an algorithm to determine the minimal set of failure-inducing changes.
These studies sought to isolate the fault-inducing change after a regression test fails at the developer’s site. In contrast, Sahara assumes that the upgrade has been tested thoroughly at the developer’s site and is deployed after all tests have passed. Sahara helps isolate the fault-inducing code that is affected by specific user environments. These failures are not easily reproducible at the developer’s site because of environmental differences.
**Other approaches.** Researchers have actively been considering other approaches to automated debugging, such as static analysis, model checking, and symbolic execution, e.g., [5], [9], [36]. Sahara is not closely related to any of these approaches, except peripherally for its use of static def-use analysis. However, Sahara's use of static analysis differs in a major way from most other approaches: it does not use static analysis to find the bugs themselves; rather, it uses it to constrain the set of routines of interest.
**V. CONCLUSION**
In this paper, we sought to reduce the effort developers must spend to debug failed upgrades. We proposed Sahara, a system that prioritizes the set of routines to consider when debugging. Driven by the fact that most upgrade failures result from differences between the developers’ and users’ environments, Sahara combines information from user site executions and environments, machine learning, and static and dynamic analyses. We evaluated our system for five bugs in three widely used applications. Our results showed that Sahara produces accurate recommendations with only a small set of routines. Importantly, the set of recommended routines remains small and accurate, even when the user site information is misleading or limited.
qEndpoint: A Novel Triple Store Architecture for Large RDF Graphs
Antoine Willerval a,b,* , Dennis Diefenbach a and Angela Bonifati b
a The QA Company, France
E-mails: antoine.willerval@the-qa-company.com, dennis.diefenbach@the-qa-company.com
b CNRS Liris, Lyon 1 University, IUF, France
E-mail: angela.bonifati@univ-lyon1.fr
Abstract.
In the relational database realm, there has been a shift towards novel hybrid database architectures combining the properties of transaction processing (OLTP) and analytical processing (OLAP). OLTP workloads consist of read and write operations on a small number of rows and are typically addressed by indexes such as B+-trees. On the other side, OLAP workloads consist of large read operations that scan bigger parts of the dataset. To address both workloads, some databases introduced an architecture using a buffer, or delta, partition.
Precisely, changes are accumulated in a write-optimized delta partition, while the rest of the data is compressed in the read-optimized main partition. Periodically, the delta storage is merged into the main partition. In this paper, we investigate for the first time how this architecture can be implemented and how it behaves for RDF graphs. We describe in detail the indexing structures one can use for each partition, the merge process, as well as the transaction management.
We study the performance of our triple store, which we call qEndpoint, over two popular benchmarks, the Berlin SPARQL Benchmark (BSBM) and the recent Wikidata Benchmark (WDBench). We also study how it compares against other public Wikidata endpoints. This allows us to study the behavior of the triple store for different workloads, as well as its scalability over large RDF graphs. The results show that, compared to the baselines, our triple store allows for improved indexing times, better response times for some queries, higher insert and delete rates, and low disk and memory footprints, making it ideal to store and serve large Knowledge Graphs.
Keywords: RDF, qEndpoint, HDT, RDF4J, Wikidata
1. Introduction
Hybrid transactional and analytical processing (HTAP) is a term coined in the relational database world to indicate database system architectures performing real-time analytics, combining read and write operations on a few rows with reads and writes on large snapshots of the data [1]. Combining transactional and analytical query workloads is a problem that has not yet been tackled for RDF graph data, despite its importance in future Big graph ecosystems [2]. Inspired by the relational database literature, we propose a triple store architecture using a buffer to store the updates [3], [4]. The key idea of the buffer is that most of the data is stored in a read-optimized main partition, while updates are accumulated into a write-optimized buffer partition that we call the delta. The delta partition grows over time due to insert operations. To avoid the delta partition becoming too large, thus leading to
deteriorated performances, the delta partition is merged into the main partition. One would expect the following advantages of such an architecture:
1. higher read performance, due to the read-optimized main partition;
2. faster insert performance, since only the write-optimized partition is affected;
3. faster delete performance, since the write-optimized partition is smaller and deletions in the main partition just mark data as deleted;
4. smaller index size and smaller memory footprint, since the main partition is read-only and therefore higher compression can be applied to the data;
5. faster indexing speed, since the initial data does not need to be stored in a data structure that is updated over time while more and more data is indexed;
6. better performance on analytical queries, since the read-optimized partition allows for faster scans over the data.
By leveraging the above insights, we provide the design and implementation of the first differential-update architecture for graph databases, showing how it behaves under different query workloads and with respect to state-of-the-art graph systems (such as Virtuoso, Blazegraph\(^1\), Neo4j and Apache Jena\(^2\)). To achieve this, we compare our system on two RDF benchmarks, the Berlin SPARQL Benchmark (BSBM) and the recent Wikidata Benchmark (WDBench). We aim to check our implementation against the above expected advantages.
This paper is organized as follows. In Section 2, we describe the related work. In Section 3, we describe the data structures that we use for the main partition and for the delta partition. In Section 4, we describe how SELECT, INSERT and DELETE operations are carried out on top of the two partitions. In Section 5, we describe the merge operation. In Section 6, we carry out a series of experiments that compare the performance of this new architecture with existing ones. A Supplemental Material Statement is provided in Section 8. We conclude with Section 9, where we discuss the advantages and limitations of our proposed architecture.
2. Related Work
Relational database systems are the most mature systems combining transaction processing (OLTP) and analytical processing (OLAP) [4]. OLTP workloads are made up of read and write operations on a small number of rows and are typically addressed by B+-trees. On the other side, OLAP workloads are made up of big read transactions that scan larger parts of the data. These are typically addressed using a compressed, column-oriented approach [5]. To address both workloads, the differential update architecture has been introduced [6], combining a write-optimized delta partition and a read-optimized main partition. Periodically, the delta partition is moved into the main partition; this process is called merge [6]. One of the main advantages of the main partition is not only that it is read-optimized, but also that it is compressed, allowing much bigger parts of the data to be loaded in memory.
Many different architectures have been explored for graph databases. We limit ourselves to describing the common options and refer to [9] for a more extensive survey. A very common architecture for triple stores is based on B+-trees. These are used for example in RDF-3X [10], RDF4J [11], Blazegraph and Apache Jena. They allow for fast search as well as fast delete and insert operations.
Another line of work tries to map the graph database model onto an existing data model. For example, Trinity RDF [12] maps the graph data model to a key-value store. Another approach is to use the relational database model. This is done for example in Virtuoso [13] or SAP HANA Graph [14], where all RDF triples are stored in a table with three columns (subject, predicate, object). Also, Wilkinson et al. [15] use the relational database model, but in this case a table is created for each property in the dataset.
There are two lines of work that are most similar to ours. The first are works that use read-only compressed data structures rather than B+-trees to store the graph, like QLever [16]. However, these do not support updates and are limited to the main partition. The second are versioning systems where the data is stored in a main partition and
changes are stored in a delta partition. This is the case of x-RDF-3X [17], relying on RDF-3X, and OSTRICH [18], relying on HDT. OSTRICH, unlike our system, gives access to the data only via triple pattern searches, with the possibility to specify a start or end version; its aim is not to provide an efficient SPARQL endpoint. x-RDF-3X, on the other side, is built on top of RDF-3X which, as mentioned above, relies on B+-trees and not on compressed data structures. Moreover, this system is not maintained anymore.

---

\(^{1}\)https://blazegraph.com

\(^{2}\)https://jena.apache.org/index.html
3. Data structures for main and delta-partition
In order to construct a differential update architecture, we need to choose data structures that are well suited for the main partition and the delta partition; we choose HDT and the RDF4J native store, respectively. We describe these choices for qEndpoint below.
3.1. RDF
In qEndpoint, we are using RDF graphs. RDF is a widely used data model in the Semantic Web. It relies on the notions of RDF triples and RDF triple patterns.
**RDF triple and RDF triple pattern** Given an infinite set of terms \( \mathcal{N} = \mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V} \), where \( \mathcal{I}, \mathcal{B}, \mathcal{L} \) and \( \mathcal{V} \) are mutually disjoint, \( \mathcal{I} \) are IRI references, \( \mathcal{B} \) are blank nodes, \( \mathcal{L} \) are literals and \( \mathcal{V} \) are variables.
- An RDF triple is a tuple \((s, p, o) \in (\mathcal{I} \cup \mathcal{B}) \times \mathcal{I} \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L})\), where “s” is the subject, “p” is the predicate and “o” is the object.
- An RDF triple pattern is a tuple \((S, P, O) \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V}) = TP\).
**RDF graph** An RDF graph \( \mathcal{G} \) is a set of RDF triples of the form \((s, p, o)\). It can be represented as a directed labeled graph whose edges are \( s \xrightarrow{p} o \). We denote with \( G \) the set of all RDF graphs.
**RDF triple pattern resolution function** Let \( \mathcal{G} \) be an RDF graph.
- We say that an RDF triple \((s, p, o)\) matches a triple pattern \((S, P, O)\) over \( \mathcal{G} \) if \((s, p, o) \in \mathcal{G} \) and
- if \( S \in \mathcal{I} \cup \mathcal{B} \) then \( S = s \)
- if \( P \in \mathcal{I} \) then \( P = p \)
- and if \( O \in \mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \) then \( O = o \)
- We call a function
\[ TPR : G \times TP \longrightarrow G : (\mathcal{G}, (S, P, O)) \mapsto \{(s, p, o) \in \mathcal{G} \mid (s, p, o) \text{ matches } (S, P, O)\}, \]
i.e., a function that for a given graph and triple pattern returns all triples matching the pattern, a triple pattern resolution function.
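For concreteness, this matching definition translates directly into code; the following is a small, self-contained Java sketch, where the `Term` interface and `Triple` record are our own hypothetical types:

```java
// Direct transcription of the matching definition above; not qEndpoint code.
public final class TriplePatternMatcher {
    interface Term { boolean isVariable(); }

    record Triple(Term s, Term p, Term o) {}

    static boolean componentMatches(Term pattern, Term value) {
        return pattern.isVariable() || pattern.equals(value);
    }

    // (s, p, o) matches (S, P, O) iff every non-variable component is equal
    static boolean matches(Triple pattern, Triple triple) {
        return componentMatches(pattern.s(), triple.s())
            && componentMatches(pattern.p(), triple.p())
            && componentMatches(pattern.o(), triple.o());
    }
}
```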
3.2. HDT: the main partition
The main partition should be a data structure that allows for fast triple-pattern retrieval and has high data compression. For this we choose HDT [19]. HDT is a binary serialization format for RDF based on compact data structures. These aim to compress data close to the theoretical lower bound, while still enabling efficient query operations. More concretely, HDT has been shown to compress data to an order of magnitude similar to the gzipped size of the corresponding n-triples serialization, while having a triple pattern resolution speed that is competitive with existing triple stores. This makes it an ideal data structure for the main partition.
In the following, we describe the internals of HDT that are needed to understand the rest of the paper. HDT consists of three main components, namely: the header (H), the dictionary (D) and the triples (T) component. The header component is not relevant here, as it only stores metadata about the dataset, such as the number of triples.
The dictionary is a map that assigns to each IRI, blank node, and literal (which we call resources in the following) a numeric ID. The dictionary is made up of 4 sections \((SEC)\): shared \((SH)\), subjects \((S)\), objects \((O)\) and predicates \((P)\). The shared section contains all resources that appear both as subject and object. The subject and object sections contain resources that appear either as subject or as object, but not as both. The predicate section contains all resources appearing as predicates. Note that the \(P\) section and the \(SH, S, O\) sections are not disjoint, i.e., the same resource can have an ID in the \(P\) section and another ID in one of the \(SH, S, O\) sections. We denote the number of elements in each section with the notation \(N_{SEC}, SEC \in \{S, P, O, SH\}\). Each section is a lexicographically ordered list of resources. This list naturally gives a correspondence between resources and IDs without the need to store the IDs. Moreover, the resources are compressed. The sections are divided into blocks: in each block, the first entry is an uncompressed resource, and the following resources are encoded as diffs to the previous one, achieving compression. HDT offers a function that converts a term and a section to the corresponding ID in the section:
\[ \mathrm{HDTDictionary}: (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L}) \times SEC \rightarrow \mathbb{N}: (t, sec) \mapsto t_{id} \]
The ID is 0 if the term does not exist in the corresponding section.
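The block structure of the dictionary sections can be illustrated with a small front-coding sketch: the first resource of a block is stored in full, and each following one as a (shared-prefix length, suffix) pair. This is only our illustration of the idea; HDT's actual encoding has more machinery:

```java
// Front-coding sketch: decode a dictionary block back into its sorted
// resources. Ours for illustration; not HDT's actual on-disk format.
import java.util.*;

public class FrontCodedBlock {
    record Entry(int prefixLen, String suffix) {}

    static List<String> decode(String first, List<Entry> rest) {
        List<String> out = new ArrayList<>();
        out.add(first);
        String prev = first;
        for (Entry e : rest) {
            prev = prev.substring(0, e.prefixLen()) + e.suffix();
            out.add(prev);
        }
        return out;
    }

    public static void main(String[] args) {
        // "http://example.org/" is 19 characters long, shared by all entries
        System.out.println(decode("http://example.org/alice",
            List.of(new Entry(19, "bob"), new Entry(19, "carol"))));
    }
}
```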
The triples are encoded using the IDs in the dictionary and then lexicographically ordered with respect to subject, predicate and object. They are encoded using two numeric compact arrays and two bitmaps. This data structure allows for fast triple pattern resolution for a fixed subject, or a fixed subject and predicate. With HDT-FoQ [20], an additional index data structure is added to query the triple patterns ??O, ?PO and ?P?.
Most importantly, HDT offers APIs to search for a given triple pattern, either by using resources or by using IDs. It returns triples together with their index in the HDT file, which we denote with \(t_{id, index}\):
\[ \mathrm{HDTTriples}: (\mathbb{N} \cup \mathcal{V}) \times (\mathbb{N} \cup \mathcal{V}) \times (\mathbb{N} \cup \mathcal{V}) \rightarrow \mathbb{N} \times \mathbb{N} \times \mathbb{N}: TP_{id} \mapsto t_{id, index} \]
Last but not least, we would like to point out that an HDT file can be queried either in "load" or in "map" mode. In the "load" mode, the entire dataset is loaded into memory. In the "map" mode, the HDT file is memory-mapped; only the required parts of the file are loaded into memory.
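As a rough illustration of the "map" mode, memory mapping in Java looks as follows; pages are only faulted in when the buffer is actually read (our sketch, not qEndpoint code):

```java
// Memory-mapping sketch using Java NIO; the OS loads pages lazily on access.
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

public class MapMode {
    public static MappedByteBuffer map(Path hdtFile) throws IOException {
        try (FileChannel ch = FileChannel.open(hdtFile, StandardOpenOption.READ)) {
            // The mapping stays valid after the channel is closed.
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }
}
```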
### 3.3. RDF-4J native store: the delta-partition
The delta partition should be a write-optimized data structure. B+-trees offer a good trade-off between read and write performance. Moreover, B+-trees are widely used in many triple stores, showing that they are a good choice for triple stores in general. Finally, B+-trees are also used for the delta partition in the relational database world. We therefore choose for the delta partition the RDF4J native store, which is an open-source, maintained and well-optimized triple store back-end that relies on B+-trees. Due to space reasons, we do not provide details about the internals of RDF4J, since they are not needed to understand the following. Unlike HDT, an RDF4J store \(R\) offers not only an API to search (SELECT), but also APIs to insert (INSERT) and to remove (DELETE) triples:
\[ \mathrm{RDF4JSelect}: R \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V}) \rightarrow \mathcal{G}: (\text{store}, TP) \mapsto \{\text{triples matching } TP\} \]

\[ \mathrm{RDF4JInsert}: R \times (\mathcal{I} \cup \mathcal{B}) \times \mathcal{I} \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L}) \rightarrow (): (\text{store}, t) \mapsto () \]

\[ \mathrm{RDF4JDelete}: R \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V}) \rightarrow (): (\text{store}, TP) \mapsto () \]
4. Select, Insert and Delete Operations
In qEndpoint, we use the RDF4J SPARQL 1.1 API, which gives us access to a premade SPARQL implementation. To connect the API to our store, we need to indicate how to SELECT, INSERT and DELETE one RDF triple pattern. All other SPARQL operations are built on top of these elementary operations. For now, transactions are not supported above the NONE isolation level\(^3\).
It is important to make the distinction between the RDF4J native store and the RDF4J SPARQL API. These are two projects of the RDF4J library, but they are independent components of qEndpoint: the native store is used for storage, while the SPARQL API provides the interface between the endpoint and the storage.
The qEndpoint system architecture is inspired by that of the relational database SAP HANA [6], one of the first commercial databases to propose updates with a buffer architecture. This paper is inspired by SAP HANA's merge process, i.e., which data structures are used, how transactions are locked, and how the data is moved from the delta to the main partition [8]. In a nutshell, the data is stored in a compressed, read-optimized main partition, and a write-optimized delta partition is kept alongside it (see Figure 1). The main partition is handled via HDT, which compresses RDF datasets highly and can reach 10x compression factors. Moreover, the data structures of HDT are read-optimized for fast triple-pattern resolution and can compete in this respect with traditional triple stores. The delta partition is handled with the RDF4J native store, which is known to have good performance, especially for dataset sizes of up to 100M triples\(^4\). The system can be accessed using either the user interface (UI) or the web API.
When a dataset is uploaded to the endpoint, we are using a custom indexing method to index the dataset into the main partition with a low-memory footprint. Once the delta partition becomes too big, a merge process (qEndpoint merger in the figure) is triggered to add the triples from the delta partition to the main partition.
- SELECT: both the HDT main partition and the RDF4J delta partition offer triple pattern resolution functions; we denote them as HDTTriples\((s_{id}, p_{id}, o_{id})\) and RDF4JSelect\((RDF4JStore, s, p, o)\). The general idea when resolving a triple pattern over qEndpoint is to resolve it over both HDT and RDF4J and merge the result sets. Note that both functions are in practice iterators, so the result sets do not need to be fully held in memory. Moreover, we optimize the process in different ways (a sketch of the resulting lookup path follows the optimizations below):
---
\(^{3}\)https://rdf4j.org/documentation/programming/repository/#transaction-isolation-levels

\(^{4}\)https://rdf4j.org/documentation/programming/repository/#native-rdf-repository
**Algorithm 1:** HDTTriplesID: retrieve HDT IDs from a triple pattern
**Data:** Triple pattern \( TP = (s, p, o) \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V}) \)
**Result:** \((s_{id}, p_{id}, o_{id}) \in \mathbb{N}^3\), where \(-1\) marks a variable and \(0\) a term absent from HDT
if \( s \in \mathcal{V} \) then
  \( s_{id} \leftarrow -1 \)
else
  \( s_{id} \leftarrow \mathrm{HDTDictionary}(s, S) \)
  if \( s_{id} \neq 0 \) then
    \( s_{id} \leftarrow s_{id} + N_{SH} \) (subject-section IDs follow the shared ones)
  else
    \( s_{id} \leftarrow \mathrm{HDTDictionary}(s, SH) \)
if \( p \in \mathcal{V} \) then
  \( p_{id} \leftarrow -1 \)
else
  \( p_{id} \leftarrow \mathrm{HDTDictionary}(p, P) \)
if \( o \in \mathcal{V} \) then
  \( o_{id} \leftarrow -1 \)
else
  \( o_{id} \leftarrow \mathrm{HDTDictionary}(o, O) \)
  if \( o_{id} \neq 0 \) then
    \( o_{id} \leftarrow o_{id} + N_{SH} \) (object-section IDs follow the shared ones)
  else
    \( o_{id} \leftarrow \mathrm{HDTDictionary}(o, SH) \)
* Using the strategy above, we would, for each triple pattern resolution, make calls to the HDT dictionary and convert triple IDs back to resources. This is very costly and in many cases not necessary. When joining multiple triples we do not need to know the triples themselves, but only whether the subjects, predicates, and objects are equal (and the IDs suffice for this operation). Whenever possible, we therefore avoid the triple-ID conversions.
On the other hand, for example for FILTER operations we need the value of some of the resources in the triple and in these cases we convert the IDs to their corresponding string representations. This is also done when the actual result is returned to the user.
In order to make all joins over IDs, the triples stored in the delta-partition need to be stored via IDs (otherwise a conversion of the IDs via the dictionary is unavoidable). We therefore store every resource in the delta-partition with its HDT ID (if it exists) using the particular IRI format http://hdt.org/{section}{id}, where section is S/SH/P/O for each HDT section and id the HDT ID.
Finally note that, as described above in subsection 3.2, a predicate and a subject/object can have the same ID but represent different resources. This means that if a subject or object ID is used to query in predicate position (or the other way around) then the conversion over the dictionary is unavoidable.
* When searching over IDs, some HDT internals are exploited to cut down certain search operations. For example, from an object ID, one can check whether it also appears as a subject. Triple patterns whose subject position holds an object ID that, by its ID range, cannot appear as a subject do not need to be resolved at all.
* Using the above strategy, we would, for each triple pattern, search over HDT and also search over RDF4J. In general, we assume that most of the data is contained in HDT and most of these calls will return empty results. On the other hand, these calls are expensive. To avoid them, we add a new data structure in qEndpoint. For each entry in the HDT dictionary, we add a bit that indicates whether the corresponding entry is used in the RDF4J store (as explained later in the INSERT part). We call it XYZBits. If a triple pattern contains resources contained in HDT, we check the XYZBits. We search the triple pattern over the RDF4J store if and only if the bits for all the resources in the triple pattern are set.
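Putting the pieces of this SELECT path together, the sketch below shows the XYZBits shortcut and the delta-side IRI encoding described above; all names are ours (hypothetical), and the real qEndpoint code is more involved:

```java
// Sketch of the delta-lookup shortcut and the delta-side naming scheme.
import java.util.BitSet;

public class DeltaShortcut {
    // Convention: -1 denotes a variable, 0 a resource unknown to HDT
    // (which may still exist as a fresh resource in the delta).
    static boolean deltaMayMatch(long sId, long pId, long oId,
                                 BitSet xBits, BitSet yBits, BitSet zBits) {
        if (sId > 0 && !xBits.get((int) sId)) return false; // in HDT, absent from delta
        if (pId > 0 && !yBits.get((int) pId)) return false;
        if (oId > 0 && !zBits.get((int) oId)) return false;
        return true; // variables and fresh resources force a delta lookup
    }

    // Delta-side IRI for a resource already present in HDT, as described above
    static String hdtIdToIri(String section, long id) {
        return "http://hdt.org/" + section + id; // e.g. http://hdt.org/S42
    }
}
```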
**Algorithm 2:** Select the triples of the RDF graph matching a triple pattern
**Data:** Triple pattern \(TP = (s, p, o) \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V})\)
**Result:** Set of matching triples \(R \subseteq (\mathcal{I} \cup \mathcal{B}) \times \mathcal{I} \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L})\)
\((s_{id}, p_{id}, o_{id}) \leftarrow \mathrm{HDTTriplesID}(s, p, o)\); \(R \leftarrow \emptyset\)
if \(s_{id} \neq 0 \land p_{id} \neq 0 \land o_{id} \neq 0\) then
  for all \((s'_{id}, p'_{id}, o'_{id}), index \in \mathrm{HDTTriples}(s_{id}, p_{id}, o_{id})\) do
    if \(DeletedBit[index] = 0\) then
      \((s_2, p_2, o_2) \leftarrow \mathrm{HDTTriples}^{-1}(s'_{id}, p'_{id}, o'_{id})\)
      \(R \leftarrow R \cup \{(s_2, p_2, o_2)\}\)
if \(s_{id} > 0 \land XBits[s_{id}] = 0\) then
  return \(R\) (subject known to HDT but absent from the delta)
(analogous checks on \(p_{id}/YBits\) and \(o_{id}/ZBits\))
return \(R \cup \mathrm{RDF4JSelect}(RDF4JStore, s, p, o)\)
**Algorithm 3:** Insert a triple into the RDF graph
**Data:** Triple \(TP = (s, p, o) \in (\mathcal{I} \cup \mathcal{B}) \times \mathcal{I} \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L})\)
\((s_{id}, p_{id}, o_{id}) \leftarrow \mathrm{HDTTriplesID}(s, p, o)\)
if \(s_{id} \neq 0 \land p_{id} \neq 0 \land o_{id} \neq 0\) then
  for all \(\_, index \in \mathrm{HDTTriples}(s_{id}, p_{id}, o_{id})\) do
    if \(DeletedBit[index] = 0\) then
      return (the triple is already present in the main partition)
if \(s_{id} \neq 0\) then
  \(XBits[s_{id}] \leftarrow 1\)
if \(p_{id} \neq 0\) then
  \(YBits[p_{id}] \leftarrow 1\)
if \(o_{id} \neq 0\) then
  \(ZBits[o_{id}] \leftarrow 1\)
\(\mathrm{RDF4JInsert}(RDF4JStore, s, p, o)\)
Notice that the above operations allow us to achieve a SPARQL 1.1-compliant SPARQL endpoint, given that the SPARQL algebra is built upon these elementary operations. We carry out two further optimizations, as follows.
**Algorithm 4:** Delete a triple from the RDF graph
**Data:** Triple pattern \(TP = (s, p, o) \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{V}) \times (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V})\)
\((s_{id}, p_{id}, o_{id}) \leftarrow \mathrm{HDTTriplesID}(s, p, o)\)
if \(s_{id} \neq 0 \land p_{id} \neq 0 \land o_{id} \neq 0\) then
  for all \(\_, index \in \mathrm{HDTTriples}(s_{id}, p_{id}, o_{id})\) do
    \(DeletedBit[index] \leftarrow 1\)
return \(\mathrm{RDF4JDelete}(RDF4JStore, s, p, o)\)
First, we reuse the query plan generated by RDF4J. In particular, this means that all join operations are carried out as nested joins. Second, we need to provide the query planner with an estimate of the cardinality of the different triple patterns. These are used to compute the correct query plan. We compute the cardinality by summing the cardinality given by HDT with the one given by RDF4J. While the cardinalities provided by RDF4J are estimations, the cardinalities provided by HDT are exact. This allows the generation of more accurate query plans.
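The cardinality combination is deliberately simple; a sketch of the idea with hypothetical names:

```java
// Cardinality fed to the query planner: the HDT count is exact, the RDF4J
// count is an estimate, and their sum is used to order nested joins.
public class PatternCardinality {
    static double estimate(long hdtExactCount, double rdf4jEstimate) {
        return hdtExactCount + rdf4jEstimate;
    }

    public static void main(String[] args) {
        // A pattern with 1,000 exact HDT matches and ~40 estimated delta matches
        System.out.println(estimate(1_000, 40.0)); // 1040.0
    }
}
```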
Our system is built on top of the RDF4J API to provide a SPARQL endpoint, allowing the delta partition to be replaced with any triple store integrated via the RDF4J Sail API\(^5\). Unlike the delta partition, the main partition cannot be trivially swapped for another one, since the optimizations explained above are tied to HDT.
5. Merge
As the database is used, more data accumulates in the delta. This is problematic since the delta store cannot scale.
We therefore trigger merges in which the data in the delta is moved to the HDT main partition so that the initial state of an empty delta is restored.
Two aspects are problematic here. The first is how to move the data from the delta partition to the main partition in an efficient way. The second is how to handle transactions during the merge.
5.1. HDTCat and HDTDiff
To move the data from the delta partition to the main partition, the naive idea would be to dump all data from the delta partition, uncompress the HDT main partition, merge the data and compress it back. This approach is efficient neither in time nor in memory footprint. We therefore rely on HDTCat [21], a tool that was created to join two HDT files without the need to uncompress the data. The main idea of HDTCat is based on the following observation: an HDT file is a sorted list of resources (i.e. the dictionary containing IRIs, literals and blank nodes) together with a sorted list of triples. This holds up to the splitting of the dictionary into different sections and the compression of the sorted lists. Merging two HDTs therefore corresponds to merging two ordered lists, which is efficient both in time and in memory.
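The core of this observation can be illustrated with a small Java sketch: merging two sorted, deduplicated runs (as HDT dictionary sections are) needs only a single sequential pass and constant memory. The real HDTCat additionally handles the per-section split and the compressed encodings.

```java
import java.util.Iterator;

/** Sketch of the HDTCat core idea: a streaming merge of two sorted runs. */
final class SortedMerge {
    /** Merges two sorted iterators into one sorted iterator, dropping duplicates. */
    static Iterator<String> merge(Iterator<String> a, Iterator<String> b) {
        return new Iterator<String>() {
            String na = a.hasNext() ? a.next() : null;
            String nb = b.hasNext() ? b.next() : null;
            public boolean hasNext() { return na != null || nb != null; }
            public String next() {
                String out;
                if (nb == null || (na != null && na.compareTo(nb) <= 0)) {
                    out = na;
                    if (nb != null && na.compareTo(nb) == 0)   // same entry in both runs:
                        nb = b.hasNext() ? b.next() : null;    // consume the duplicate
                    na = a.hasNext() ? a.next() : null;
                } else {
                    out = nb;
                    nb = b.hasNext() ? b.next() : null;
                }
                return out;
            }
        };
    }
}
```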
On the other hand, the merge operation does not only add data: it must also remove the triples that are marked as deleted. We therefore developed HDTDiff, a method to remove from an HDT file the triples marked as deleted, using the main partition's delete bitmap.
HDTDiff first creates a bitmap for each section of the HDT dictionary (Subject, Predicate and Object) and fills them using the delete bitmap. If the bit at index $i$ of the bitmap for section $S$ is set to one, it means that element $i$ of section $S$ is still required by the future HDT. Once the bitmaps are built, we use a method similar to HDTCat to compute the final HDT without the deleted triples.
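A simplified sketch of this bitmap construction, assuming for illustration that the triples are exposed as a plain id array (HDT actually stores them compressed and streams over them):

```java
import java.util.BitSet;

/** Sketch of the HDTDiff "keep" bitmap construction (hypothetical triple access). */
final class HdtDiffBitmaps {
    /** One bitmap per dictionary section; bit i = element i survives the diff. */
    static BitSet[] buildKeepBitmaps(long[][] triples, BitSet deleted,
                                     int nSubjects, int nPredicates, int nObjects) {
        BitSet keepS = new BitSet(nSubjects);
        BitSet keepP = new BitSet(nPredicates);
        BitSet keepO = new BitSet(nObjects);
        for (int i = 0; i < triples.length; i++) {
            if (deleted.get(i)) continue;       // a deleted triple marks nothing
            keepS.set((int) triples[i][0] - 1); // HDT ids are 1-based
            keepP.set((int) triples[i][1] - 1);
            keepO.set((int) triples[i][2] - 1);
        }
        return new BitSet[]{keepS, keepP, keepO};
    }
}
```

Note that the scan is strictly sequential, which is what keeps the operation cheap on memory-mapped files.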
Note that HDTCat and HDTDiff do not assume that the underlying HDTs are loaded into memory: the HDT files and the bitmaps are only mapped from disk. By reading the HDT components sequentially, without any random memory access, these operations are not memory-intensive and hence scalable.
\(^5\)https://rdf4j.org/documentation/reference/sail/
5.2. Transaction Handling
In the following, we detail the merge operation (see Figure 2), that takes place in 3 steps.
5.2.1. Step 1
This step is triggered by the fact that the delta has exceeded a certain number of triples (which we call the threshold). Step 1 locks all new incoming update connections. Once all existing update connections terminate, a new delta is initialized, which will co-exist with the first one during the merge process. We call them deltaA and deltaB, respectively; together they ensure that the endpoint is always available during the merge process. Also, a copy of DeletedBit, called DeletedBit_tmp, is made. The lock on the updates ensures that the data in the delta and in DeletedBit is not changed during this process. Once the new store is initialized, the lock on update connections is released.
5.2.2. Step 2
In this step, all changes in the delta are moved into the main partition. In particular, the deleted triples (in DeletedBit_tmp) and the triples in the deltaA storage are merged into the main partition. This is carried out in two steps using HDTDiff and HDTCat. The use of HDTCat and HDTDiff is essential to keep the process scalable, since decompressing and recompressing an HDT file is resource-intensive (in particular with respect to memory consumption). When the HDTDiff and HDTCat operations are finished, a new HDT is generated that will replace the existing one, and step 3 is triggered. During step 2, SELECT, INSERT and DELETE operations are allowed. SELECT operations need access to the HDT file as well as to the deltaA and deltaB stores. INSERT operations only affect deltaB, while DELETE operations affect the delete bitmap, deltaA and deltaB. Moreover, all deleted triples are also stored in a file called DeletedTriples.
5.2.3. Step 3
This step is triggered when the new HDT is generated. At the beginning, we lock all incoming connections, both read and write. The XYZBits are then re-initialized using the data contained in deltaB. Moreover, a new DeletedBit is initialized: we iterate over the triples stored in DeletedTriples and mark them as deleted. Furthermore, the IDs used in deltaB still refer to the old HDT; we therefore iterate over all triples in deltaB and replace their IDs with the IDs in the new HDT. During this process there is a mixture of IDs referring to the old and the new HDT, which is why we also lock read operations. We finally switch the current HDT with the new HDT and release all locks, restoring the initial state.
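The id translation at the heart of this step can be sketched as follows (the `Dict` facade is hypothetical; the real implementation works directly on the mapped HDT dictionaries):

```java
/** Sketch of the Step-3 id remapping from the old HDT to the new one. */
final class DeltaRemap {
    interface Dict {
        String idToString(long id, int role);
        long stringToId(String term, int role);
    }

    /** Rewrites one delta triple from old-HDT ids to new-HDT ids. */
    static long[] remap(long[] triple, Dict oldHdt, Dict newHdt) {
        long[] out = new long[3];
        for (int role = 0; role < 3; role++) {
            String term = oldHdt.idToString(triple[role], role); // resolve in the old dictionary
            out[role] = newHdt.stringToId(term, role);           // re-encode in the new one
        }
        return out;
    }
}
```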
6. Experiments
In this section we present two evaluations of qEndpoint. In the first, we evaluate qEndpoint on the Berlin SPARQL benchmark[22], a synthetic benchmark that allows testing a SPARQL endpoint under different query loads in read (select), read-write (update) and analytic (business intelligence) scenarios. In the second, we evaluate it on WDBench[23], a SPARQL benchmark based on the Wikidata query logs. The git repository with the experiments is available at [24]. Together, the two benchmarks give us a synthetic and a real-world reference for our results.
6.1. Berlin SPARQL benchmark
The Berlin SPARQL benchmark generates synthetic data about an e-commerce platform, containing information such as products, vendors, consumers and reviews. The benchmark contains 3 sub-tasks which reflect different usages of a triple store:
- Explore: this task loads the dataset into the triple store and executes a mix of 12 types of SELECT queries of transactional nature.
- Update: this task is similar to Explore, but the data is changed via update queries over time.
- BI: this task loads the dataset into the triple store and runs analytic queries over it.
We benchmark all three tasks on qEndpoint, both in the case where the HDT file is loaded into memory (indicated as "QEP-L") and where it is mapped to memory (indicated as "QEP-M"). We compare the results with the RDF4J native store (indicated as "native"). All experiments were run on an AMD EPYC™ 7281 with 16 virtual CPUs and 64GB RAM.
The benchmark returns two values: the Query Mixes per Hour (QMpH) over all queries and the Queries per Second (QpS) for each query type. These values are computed as
\[
\text{QMpH} = \frac{\text{numberOfRuns}}{\text{totalRuntime}} \times 3600
\]
\[
\text{QpS} = \frac{\text{numberOfRunsForAQueryType}}{\text{totalRuntimeForAQueryType}}
\]
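As an illustration with made-up numbers: a run of 500 query mixes completing in 300 seconds yields \(\text{QMpH} = \frac{500}{300} \times 3600 = 6000\), and 1,000 executions of a query type taking 50 seconds in total yield \(\text{QpS} = \frac{1000}{50} = 20\).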
Our main objective is to evaluate qEndpoint on a variety of different scenarios and compare its performance with the RDF4J native store which is an established baseline.
6.1.1. Explore
We executed the Explore benchmark task for 10k, 50k, 100k, 200k, 500k, 1M and 2M products. The loading times are reported in Table 1, where we see that the indexing time of the RDF4J native store quickly becomes very long.
<table>
<thead>
<tr>
<th>Product count</th>
<th>Triple count</th>
<th>NTriple size</th>
<th>qEndpoint loading time / index size</th>
<th>RDF4J loading time / index size</th>
</tr>
</thead>
<tbody>
<tr><td>10k</td><td>3.53M</td><td>871 MB</td><td>38s / 191 MB</td><td>1m 38s / 476 MB</td></tr>
<tr><td>50k</td><td>17.5M</td><td>4.3 GB</td><td>3m 17s / 933 MB</td><td>10m 8s / 2.3 GB</td></tr>
<tr><td>100k</td><td>34.9M</td><td>8.5 GB</td><td>6m 47s / 1.9 GB</td><td>26m / 4.6 GB</td></tr>
<tr><td>200k</td><td>69.5M</td><td>18.2 GB</td><td>13m 17s / 3.6 GB</td><td>55m / 9 GB</td></tr>
<tr><td>500k</td><td>174M</td><td>45.6 GB</td><td>35m / 9 GB</td><td>3h 25m / 23 GB</td></tr>
<tr><td>1M</td><td>347M</td><td>86 GB</td><td>1h 23m / 18 GB</td><td>9h 25m / 45 GB</td></tr>
<tr><td>2M</td><td>693M</td><td>183 GB</td><td>2h 54m / 36 GB</td><td>45h / 90 GB</td></tr>
</tbody>
</table>
Table 1
Loading times and result index size for different dataset sizes and stores
\(^6\)http://wbsg.informatik.uni-mannheim.de/bizer/berlinsparqlbenchmark/
Table 2: QMpH on the Explore, Update and BI tasks.
<table>
<thead>
<tr>
<th>Task</th>
<th># triples</th>
<th>QEP-load</th>
<th>QEP-Map</th>
<th>RDF4J</th>
</tr>
</thead>
<tbody>
<tr>
<td>Explore</td>
<td>3.53M</td>
<td>18757.14</td>
<td>7468.38</td>
<td>1976.90</td>
</tr>
<tr>
<td></td>
<td>17.54M</td>
<td>10540.44</td>
<td>2880.74</td>
<td>1418.20</td>
</tr>
<tr>
<td></td>
<td>34.87M</td>
<td>9333.95</td>
<td>2478.39</td>
<td>220.46</td>
</tr>
<tr>
<td></td>
<td>69.49M</td>
<td>2846.65</td>
<td>1264.32</td>
<td>42.80</td>
</tr>
<tr>
<td></td>
<td>173.53M</td>
<td>1392.47</td>
<td>654.04</td>
<td>22.27</td>
</tr>
<tr>
<td></td>
<td>346.56M</td>
<td>756.90</td>
<td>327.98</td>
<td>21.33</td>
</tr>
<tr>
<td></td>
<td>692.62M</td>
<td>oom</td>
<td>115.57</td>
<td>6.58</td>
</tr>
<tr>
<td>Update</td>
<td>3.53M</td>
<td>12018.90</td>
<td>5593.14</td>
<td>1857.59</td>
</tr>
<tr>
<td></td>
<td>17.54M</td>
<td>8116.45</td>
<td>2608.08</td>
<td>405.74</td>
</tr>
<tr>
<td></td>
<td>34.87M</td>
<td>7056.18</td>
<td>2227.01</td>
<td>215.36</td>
</tr>
<tr>
<td></td>
<td>69.49M</td>
<td>2571.12</td>
<td>1207.04</td>
<td>107.87</td>
</tr>
<tr>
<td>BI</td>
<td>3.53M</td>
<td>488.12</td>
<td>416.34</td>
<td>259.39</td>
</tr>
<tr>
<td></td>
<td>17.54M</td>
<td>94.13</td>
<td>88.93</td>
<td>40.89</td>
</tr>
<tr>
<td></td>
<td>34.87M</td>
<td>64.06</td>
<td>61.87</td>
<td>22.27</td>
</tr>
<tr>
<td></td>
<td>69.49M</td>
<td>25.88</td>
<td>24.21</td>
<td>8.56</td>
</tr>
</tbody>
</table>
From 500k to 1M products the import time of RDF4J increases by a factor of 3, and from 1M to 2M by a factor of 5, while the indexing time of qEndpoint keeps increasing linearly, as expected. We also report the index sizes for qEndpoint and the RDF4J native store. The index size is drastically reduced, and the reduction grows with the dataset size: for the biggest dataset the index is smaller by a factor of about 3. This is particularly important because it makes it possible to fit more data in memory. The compression rate for this dataset is lower than for other RDF datasets, because the data produced by the Berlin benchmark contains long product descriptions, which HDT does not compress well. In Table 2 we report the QMpH: qEndpoint achieves much higher QMpH rates. For the smallest dataset (3.53M triples) the RDF4J store outperforms qEndpoint on the per-query level (except for Q9 and Q11); for bigger datasets qEndpoint performs better for most of the queries, both in load and in mapped mode. The performance of the "load" mode is much superior to the "map" mode, which is in line with expectations, even if higher speeds might have been expected.
6.1.2. Update
We executed the update benchmark task for 10k, 50k, 100k and 200k products, with 50 warm-up rounds and 500 query mixes. The queries Q3-Q14 are the same as in the Explore benchmark; Q1 is an INSERT query and Q2 is a DELETE query.
We do not report the loading times and the store sizes since they are similar to the ones in the Explore use case. The QMpH are indicated in Table 2. The performance of the INSERT query Q1 is higher for qEndpoint except for very small datasets, which is as expected: it is faster to insert triples into a small native store than into a larger one, and since in qEndpoint the delta is small, the INSERT performance is improved. The performance of the DELETE query Q2 is several orders of magnitude higher for qEndpoint, which we also expected. This is due to the fact that triples are not really deleted in qEndpoint but only marked as deleted.
The query performance is not negatively affected when comparing the "update" and "explore" tasks. This shows that the combination of HDT and the delta is efficient and does not introduce an overhead. The performance of the "load" mode is again much superior to the "map" mode.
6.1.3. Business Intelligence task
We executed the business intelligence (BI) benchmark task for 10k, 50k, 100k, 200k and 500k products, with 25 warm-up rounds and 10 query mixes. We ignored Q3 and Q6 since we encountered scalability problems both for qEndpoint and for the native store; since BI queries make heavy use of the graph and our optimization of such queries is still limited, we attribute these problems to query optimization.
We do not report the loading times and the store sizes since they are similar to the ones in the explore use case.
The QMpH are indicated in Table 2. We can see that qEndpoint runs nearly twice as fast for nearly all the queries, which we also expected (advantage A6). Overall, the QMpH is from 2 to 4 times higher depending on the dataset size. This is the effect of the read-optimized HDT main partition, and it is particularly evident in this task since the queries access large parts of the data. On the other hand, since all joins are currently nested-loop joins, we believe the analytical performance can be increased further.
### 6.1.4. Merge time comparison
To test the efficiency of the merge step, we ran the BSBM update benchmark task using qEndpoint with 10k, 50k, 100k and 200k products, each with two different configurations: one with a high threshold of 1,000,000 triples, so that the endpoint never runs a merge process, and one with a low threshold that forces the endpoint to trigger merge operations during the experiment. We used a laptop with 16GB of RAM and a 1TB SSD.
To decide on the right threshold, we looked at the amount of changes per query mix in the BSBM update experiment: about 1,000 changes for the 3.53M triples dataset and about 2,000 changes for the 69.5M triples dataset. We therefore chose a threshold of 1,000, so that the system would constantly be in a merge state, allowing us to compare the worst case (constantly merging) with the ideal case (nearly no updates, never merging). The number of merges was recorded as a second metric.
In the previous experiments we saw that the performance of the native store lags behind qEndpoint, and that the mapped mode is more relevant than the loaded mode since it scales to bigger datasets. For these reasons we only report the results of qEndpoint in mapped mode, in Table 3. We can see that the gap between an extensive amount of merges and no merging at all is small compared to the overall time of the benchmark. We can also notice that the number of merges decreases as the dataset grows. This is explained by the duration of a merge, which depends on the amount of data to merge: from 3.53M to 17.5M triples the dataset grows by a factor of 5 while the benchmark runtime only doubles, so individual merges take longer and fewer of them fit into the runtime.
<table>
<thead>
<tr>
<th rowspan="2"># Triples</th>
<th colspan="3">High threshold (no merges)</th>
<th colspan="3">Low threshold (forced merges)</th>
</tr>
<tr>
<th>QMpH</th>
<th>Merges</th>
<th>Runtime (s)</th>
<th>QMpH</th>
<th>Merges</th>
<th>Runtime (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.53M</td>
<td>7439</td>
<td>0</td>
<td>241</td>
<td>5982</td>
<td>13</td>
<td>300</td>
</tr>
<tr>
<td>17.5M</td>
<td>3868</td>
<td>0</td>
<td>465</td>
<td>3180</td>
<td>8</td>
<td>565</td>
</tr>
<tr>
<td>34.9M</td>
<td>3793</td>
<td>0</td>
<td>474</td>
<td>2912</td>
<td>4</td>
<td>617</td>
</tr>
<tr>
<td>69.5M</td>
<td>3650</td>
<td>0</td>
<td>494</td>
<td>2456</td>
<td>4</td>
<td>732</td>
</tr>
</tbody>
</table>
Table 3
Merges comparison using BSBM update benchmark task
6.2. WDBench
To compare qEndpoint with existing triple stores on a large KG, we used the recent WDBench\cite{23} benchmark, based on the real-world Wikidata RDF dataset and a selection of queries from the Wikidata query logs. WDBench only uses the "direct properties" of Wikidata, in other words it does not contain reified statements, leading to 1.2 billion triples. Because of that it lacks analytical queries, but it is a good benchmark for comparing queries with joins or paths, at which our HDT main partition should excel.
To run our experiments, we used an AMD EPYC 7281 with 16 virtualized cores, 64GB of RAM and a 100GB HDD. The performance numbers for the other systems, namely Blazegraph, Jena, Neo4j and Virtuoso, are drawn from the paper [23], which relies on a similar machine in terms of per-thread CPU rating, disk type and available RAM.
First we indexed the WDBench dataset. As shown in Table 4, the index is 3 to 5 times smaller for our system compared to the competitors, which reflects advantage A4.
<table>
<thead>
<tr>
<th>Engine</th>
<th>NTriples</th>
<th>Jena</th>
<th>Neo4j</th>
<th>Virtuoso</th>
<th>Blazegraph</th>
<th>qEndpoint</th>
</tr>
</thead>
<tbody>
<tr>
<td>Index size</td>
<td>156 GB</td>
<td>110 GB</td>
<td>112 GB</td>
<td>70 GB</td>
<td>70 GB</td>
<td>19.7 GB</td>
</tr>
</tbody>
</table>
Table 4
Size of the different indexes
Thereafter we ran each query provided by WDBench once. For each of them we check whether an error or a timeout occurs; otherwise we measure the execution time. The timeout is set to 1 minute and, as in the WDBench paper, timed-out queries are counted with the timeout value when computing the average and median times. The queries are split into 5 types:
- Basic graph pattern (BGP) with two sub types: Single or Multiple
- Optional queries containing OPTIONAL fields
- Path queries
- Navigational graph patterns
We report the WDBench results in Table 5. As predicted, thanks to the main partition our system is faster on most of the queries (advantage A1): query performance is at least 3 times faster on BGPs and at least 2 times faster on path, navigational and optional queries.
6.3. Indexing and querying Wikidata
In the following we describe how to index and query Wikidata as well as a comparison with existing alternatives.
6.3.1. Loading data
To load the full Wikidata dump into qEndpoint, one only needs to run a few instructions, which use our indexing method to efficiently create the HDT partition (Point 1 in Figure 1).
With a few commands, anyone can quickly obtain a SPARQL endpoint over Wikidata without special configuration. Note that the Wikidata dump is not uncompressed during this process; in particular, the disk footprint needed to generate the index is lower than the uncompressed dump (which is around 1.5 TB). Table 6 shows the time spent in the different indexing steps.
For comparison, to date there exist only a few successful attempts to index Wikidata [25]. Since 2022, when the Wikidata dump exceeded 10B triples, only 4 triple stores are reported to be capable of indexing the whole dump, namely Virtuoso [13], Apache Jena\(^{10}\), QLever\cite{16} and Blazegraph.
Virtuoso is a SQL-based SPARQL engine where changes on the RDF graph are reflected in a SQL database. QLever is a SPARQL engine using a custom RDF binary format to index and compress RDF graphs. It also supports text search, but only its SPARQL engine is compared in this experiment.
\footnote{https://wikidata.org/}
\footnote{https://jena.apache.org}
<table>
<thead>
<tr>
<th>Query type</th>
<th>Engine</th>
<th>Error</th>
<th>Timeout</th>
<th>Average time (s)</th>
<th>Median time (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Single BGPs</strong></td>
<td>Jena</td>
<td>0</td>
<td>25</td>
<td>9.92</td>
<td>0.46</td>
</tr>
<tr>
<td></td>
<td>Neo4J</td>
<td>0</td>
<td>47</td>
<td>15.28</td>
<td>2.03</td>
</tr>
<tr>
<td></td>
<td>Blazegraph</td>
<td>0</td>
<td>3</td>
<td>1.73</td>
<td>0.07</td>
</tr>
<tr>
<td></td>
<td>Virtuoso</td>
<td>0</td>
<td>1</td>
<td>2.12</td>
<td>0.28</td>
</tr>
<tr>
<td></td>
<td>qEndpoint</td>
<td>0</td>
<td>0</td>
<td><strong>0.53</strong></td>
<td><strong>0.02</strong></td>
</tr>
<tr>
<td><strong>Multiple BGPs</strong></td>
<td>Jena</td>
<td>0</td>
<td>54</td>
<td>11.06</td>
<td>3.16</td>
</tr>
<tr>
<td></td>
<td>Neo4J</td>
<td>1</td>
<td>159</td>
<td>22.17</td>
<td>6.75</td>
</tr>
<tr>
<td></td>
<td>Blazegraph</td>
<td>0</td>
<td>52</td>
<td>8.47</td>
<td>1.34</td>
</tr>
<tr>
<td></td>
<td>Virtuoso</td>
<td>3</td>
<td>7</td>
<td>8.71</td>
<td>8.34</td>
</tr>
<tr>
<td></td>
<td>qEndpoint</td>
<td>0</td>
<td>10</td>
<td><strong>3.21</strong></td>
<td><strong>1.54</strong></td>
</tr>
<tr>
<td><strong>Optionals</strong></td>
<td>Jena</td>
<td>0</td>
<td>59</td>
<td>13.56</td>
<td>4.34</td>
</tr>
<tr>
<td></td>
<td>Neo4J</td>
<td>1</td>
<td>146</td>
<td>27.09</td>
<td>17.87</td>
</tr>
<tr>
<td></td>
<td>Blazegraph</td>
<td>0</td>
<td>37</td>
<td>8.55</td>
<td>2.2</td>
</tr>
<tr>
<td></td>
<td>Virtuoso</td>
<td>2</td>
<td>69</td>
<td>17.29</td>
<td>9.5</td>
</tr>
<tr>
<td></td>
<td>qEndpoint</td>
<td>0</td>
<td>4</td>
<td><strong>2.31</strong></td>
<td><strong>1.6</strong></td>
</tr>
<tr>
<td><strong>Path</strong></td>
<td>Jena</td>
<td>0</td>
<td>96</td>
<td>11.74</td>
<td>0.81</td>
</tr>
<tr>
<td></td>
<td>Neo4J</td>
<td>6</td>
<td>134</td>
<td>20.89</td>
<td>9.74</td>
</tr>
<tr>
<td></td>
<td>Blazegraph</td>
<td>0</td>
<td>87</td>
<td>11.00</td>
<td>0.82</td>
</tr>
<tr>
<td></td>
<td>Virtuoso</td>
<td>27</td>
<td>24</td>
<td>4.71</td>
<td>0.70</td>
</tr>
<tr>
<td></td>
<td>qEndpoint</td>
<td>0</td>
<td>17</td>
<td><strong>2.33</strong></td>
<td><strong>0.43</strong></td>
</tr>
<tr>
<td><strong>Navigational</strong></td>
<td>Jena</td>
<td>0</td>
<td>245</td>
<td>30.98</td>
<td>29.83</td>
</tr>
<tr>
<td></td>
<td>Neo4J</td>
<td>0</td>
<td>211</td>
<td>31.07</td>
<td>24.83</td>
</tr>
<tr>
<td></td>
<td>Blazegraph</td>
<td>0</td>
<td>180</td>
<td>22.32</td>
<td>2.58</td>
</tr>
<tr>
<td></td>
<td>Virtuoso</td>
<td>2</td>
<td>37</td>
<td>10.42</td>
<td>4.36</td>
</tr>
<tr>
<td></td>
<td>qEndpoint</td>
<td>0</td>
<td>51</td>
<td><strong>8.73</strong></td>
<td><strong>0.97</strong></td>
</tr>
</tbody>
</table>
Table 5: Results with WDBench for the different query types for qEndpoint and the baselines (timeout = 60 s). Best values in bold.
<table>
<thead>
<tr>
<th>Task</th>
<th>Time</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dataset download</td>
<td>7 h</td>
<td>Download the dataset⁶</td>
</tr>
<tr>
<td>HDT compression</td>
<td>45 h</td>
<td>Creating HDT</td>
</tr>
<tr>
<td>HDT co-index gen</td>
<td>5 h</td>
<td>Creating OPS/PSO/POS indexes</td>
</tr>
<tr>
<td>Loading the index</td>
<td>2 min</td>
<td>Start the endpoint</td>
</tr>
</tbody>
</table>
Table 6: Time split during the loading of the Wikidata dataset.
Blazegraph is a SPARQL engine using a B+Tree implementation to store the RDF graph; it is currently used by Wikidata.
Jena is a SPARQL engine also using a B+Tree implementation to store the RDF graph. Unlike Blazegraph, Jena's implementation was not designed with very large RDF graphs in mind.
We report in Table 7 the loading times, the number of indexed triples, the amount of needed RAM, the final index size and the documentation for indexing Wikidata.
Overall, we can see that qEndpoint has the lowest RAM footprint as well as the lowest disk footprint, and it offers simple documentation for compressing the whole Wikidata dump. Its indexing time is the second best among the compared systems. Most notably, qEndpoint is currently the only setup that allows indexing Wikidata on commodity hardware.
6.3.2. Loading a pre-computed index
HDT is meant to be a format for sharing RDF datasets[26] and was therefore designed to have a particularly low disk footprint. Table 8 shows the sizes of the components needed to currently set up a Wikidata SPARQL endpoint.
Table 7
<table>
<thead>
<tr>
<th>System</th>
<th>Loading Time</th>
<th>#Triples</th>
<th>RAM</th>
<th>Index size</th>
<th>Doc</th>
</tr>
</thead>
<tbody>
<tr>
<td>Apache Jena</td>
<td>9d 21h</td>
<td>13.8 B</td>
<td>64 GB</td>
<td>2 TB</td>
<td>1</td>
</tr>
<tr>
<td>Virtuoso</td>
<td>several days(^5) (preprocessing) + 10h</td>
<td>11.9 B</td>
<td>378 GB</td>
<td>NA</td>
<td>2</td>
</tr>
<tr>
<td>Blazegraph</td>
<td>~5.5d</td>
<td>11.9 B</td>
<td>128 GB</td>
<td>1.1 TB</td>
<td>3</td>
</tr>
<tr>
<td>Stardog</td>
<td>9.5 h</td>
<td>16.7 B</td>
<td>256 GB</td>
<td>NA</td>
<td>4</td>
</tr>
<tr>
<td>QLever</td>
<td>14.3 h</td>
<td>17 B</td>
<td>128 GB</td>
<td>823 GB</td>
<td>5</td>
</tr>
<tr>
<td>qEndpoint</td>
<td>50 h</td>
<td>17.4 B</td>
<td>10 GB</td>
<td>294 GB</td>
<td>6</td>
</tr>
</tbody>
</table>
Table 8
<table>
<thead>
<tr>
<th>File name</th>
<th>File size</th>
<th>Usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>index_dev.hdt</td>
<td>183GB</td>
<td>Dictionary + SPO index</td>
</tr>
<tr>
<td>index_dev.hdt.index.v1-1</td>
<td>113GB</td>
<td>OPS/PSO/POS indexes</td>
</tr>
<tr>
<td>native-store</td>
<td>16KB</td>
<td>RDF4J store</td>
</tr>
<tr>
<td>qendpoint.jar</td>
<td>82MB</td>
<td>qEndpoint</td>
</tr>
</tbody>
</table>
Sizes of each components of qEndpoint (total: 296GB)
using only HDT, which amounts to the first two rows in the table, i.e. roughly 300GB in total. The RDF4J counterpart, the third row in Table 8, amounts to only 16KB. Compared with the other endpoints in Table 7, the whole data can easily be downloaded in a few hours over any high-speed internet connection.
The second component in Table 8, of 113GB, can be avoided in setups with slow connections, since this co-index can be recomputed in 5h. As a consequence, it is possible to further reduce the time required to deploy a full SPARQL endpoint: it shrinks to a few minutes after downloading the files in Table 8\(^{11}\). Note that the whole bzip2-compressed Wikidata dump is more than 150GB\(^{12}\). This means that by sharing the index, the setup time can be reduced to the time necessary to download about double the size of the compressed Wikidata dump.
6.3.3. Queries
In the following, we discuss the evaluation of the query performance of the qEndpoint with other available systems. To the best of our knowledge, ours is also the first evaluation on the whole Wikidata dump using historical query logs.
As described above, while there are successful attempts to set up a local Wikidata endpoint, these are difficult to reproduce and depending on the cases the needed hardware resources are difficult to find [25]. Therefore, in order to compare with existing systems, we restrict to those whose setups are publicly available:
- Blazegraph: the current system that is used in production by Wikimedia Foundation available at https://query.wikidata.org/sparql;
- Virtuoso: a live demo was set up in 2019\(^{13}\) that is available at https://wikidata.demo.openlinksw.com/sparql;
- QLever: a live demo is available at https://qlever.cs.uni-freiburg.de/wikidata and was set up in 2022.
\(^{11}\)The index files are currently available in the RDF HDT open format at https://qanswer-svc4.univ-st-etienne.fr/
\(^{12}\)https://dumps.wikimedia.org/wikidatawiki/entities/
\(^{13}\)https://community.openlinksw.com/t/loading-wikidata-into-virtuoso-open-source-or-enterprise-edition/2717
To benchmark the different systems, we performed a random extraction of 10K queries from the Wikidata query logs \cite{27} and ran them on qEndpoint and on the above endpoints. Table 9 shows the number of queries per query type.
The 10k queries were selected as follows: we picked random queries from the interval 7 dump of the logs matching these conditions:
1. No usage of http://www.bigdata.com/ functions, internal to Blazegraph and not supported by other endpoints
2. No MINUS operation, currently not supported by the qEndpoint.
Unlike with WDBench, the whole dataset is used, which increases the share of analytic queries; since our system runs on commodity hardware, we expect worse results for some queries due to the lack of resources to compensate.
The resources of the compared systems are the same as the ones reported for indexing in Table 7.
The results on the Wikidata log queries are presented in Figure 3 and in Tables 10 and 11.
We observe that the various systems have varying levels of SPARQL support (see Table 11) and that qEndpoint, via RDF4J, can correctly parse all the queries. As shown in Table 10, it achieves better performance than QLever in 44% of the cases (despite a 4x lower memory footprint), outperforms Virtuoso in 34% of the cases (despite a 10x lower memory footprint), and outperforms Blazegraph (the production system used for Wikidata) in 46% of the cases (despite a 4x lower memory footprint). Overall, the median difference in execution time between qEndpoint and the other systems is -0.05s. This means that, modulo a few outliers, we achieve comparable query speed with reduced memory and disk footprint. By manually investigating some queries, we believe that the outliers are due to query optimization problems that we plan to tackle in the future, for example by switching the order of the triple patterns in the query.
Overall, despite running on commodity hardware, qEndpoint can achieve query speeds comparable to other existing alternatives. The dataset and the scripts used for the experiments are available online\cite{15}.
### 6.4. Demonstration
During the demonstration, we plan to show the following capabilities of qEndpoint:
- how it is possible to index Wikidata on a commodity hardware;
- how it is possible to set up a SPARQL endpoint by downloading a pre-computed index (Point 2 in Figure 1);
- the performance of qEndpoint on typical queries over Wikidata from our test dataset (see Table 9). Query types that are relevant for showcasing are simple triple pattern queries (TP), TPs with Unions, TPs with filters, Recursive path queries and other query types found in the Wikidata query logs. We will also be able to load queries formulated by the visitors of our demo booth.
The objective is to demonstrate that qEndpoint's performance is overall comparable with the other endpoints despite the considerably lower hardware resources. As such, it represents a suitable alternative to current resource-intensive endpoints.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|}
\hline
Query type & Count \\
\hline
Triple Pattern (TP) & 5285 \\
Recursive / Path queries & 2852 \\
TP + Filter & 986 \\
TP + Union & 213 \\
Other & 664 \\
\hline
\end{tabular}
\caption{Query count per type}
\end{table}
\footnote{https://iccl.inf.tu-dresden.de/web/Wikidata_SPARQL_Logs/en}
\footnote{https://github.com/the-qa-company/qEndpointWDQueries}
Table 10
Statistics over the time differences in seconds, with the percentage of queries for which qEndpoint was faster.
<table>
<thead>
<tr>
<th>Endpoint</th>
<th>Max</th>
<th>Min</th>
<th>Mean</th>
<th>Median</th>
<th>% outperf.</th>
</tr>
</thead>
<tbody>
<tr>
<td>QLever</td>
<td>60</td>
<td>-60.0</td>
<td>-2.8</td>
<td>-0.05</td>
<td>44.4</td>
</tr>
<tr>
<td>Blazegraph</td>
<td>-4.74</td>
<td>-60.0</td>
<td>-2.8</td>
<td>-0.04</td>
<td>45.9</td>
</tr>
<tr>
<td>Virtuoso</td>
<td>60.0</td>
<td>-60.0</td>
<td>-2.88</td>
<td>-0.04</td>
<td>34.4</td>
</tr>
</tbody>
</table>
Table 11
Errors per endpoint on 10K random queries of the Wikidata query logs.
<table>
<thead>
<tr>
<th>Endpoint</th>
<th>parsing errors</th>
<th>timeout errors</th>
<th>evaluation errors</th>
</tr>
</thead>
<tbody>
<tr>
<td>QLever</td>
<td>4174</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Virtuoso</td>
<td>146</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>qEndpoint</td>
<td>0</td>
<td>808</td>
<td>0</td>
</tr>
<tr>
<td>Blazegraph</td>
<td>0</td>
<td>10</td>
<td>1</td>
</tr>
</tbody>
</table>
7. Usage in production
The above implementation is used in two projects that are in production and currently operational at the European Commission, the European Direct Contact Center (EDCC) and Kohesio.
7.1. European Direct Contact Center
The European Direct Contact Center[^16] (EDCC) is a contact center of the European Union receiving every year around 200k messages from citizens with questions about the EU. The information used to answer these questions is stored in the EDCC Knowledge Base (KB), a KG containing resources that help operators answer the received questions.
7.2. Kohesio
Kohesio is an EU project that aims to make projects funded by the EU discoverable by its citizens (https://kohesio.ec.europa.eu/). Kohesio is built on top of the EU KG [28] (available at https://linkedopendata.eu), a graph describing the EU and containing its funded projects. The graph is hosted in a Wikibase instance and contains 726 million triples. All interactions with the Kohesio application are converted into SPARQL queries that run over the qEndpoint[^17].
[^16]: https://european-union.europa.eu/contact-eu_en
[^17]: https://github.com/the-qa-company/qEndpoint
8. Supplemental Material Statement
The qEndpoint is implemented as an RDF4J Sail. The code of the qEndpoint is available on GitHub at https://github.com/the-qa-company/qEndpoint.
9. Conclusions
In this paper, we have presented the qEndpoint, a triple store that uses an architecture based on differential updates; to our knowledge it is the first graph database based on this architecture. We have presented: a) details about suitable data structures that can be used for the main and the delta partition, respectively, and b) a detailed description of the merge process.
We have evaluated, over different benchmarks, the performance of the architecture as well as the efficiency of our implementation compared against other public endpoints. We see the following main advantages of this architecture: high read performance $A_1$, fast insert performance $A_2$, fast delete performance $A_3$, small index size $A_4$, fast indexing speed $A_5$ and good analytical performance $A_6$.
These results show that the architecture we propose is promising for graph databases. On the other hand, this is only a first step towards graph databases with this architecture. Open challenges include:
- supporting queries over named graphs [29],
- exploiting the data structures of HDT further to construct better query plans (for example, using merge joins instead of the current nested-loop joins) and to improve OLAP scenarios,
- proposing new versions of HDT optimized for querying (for example, supporting filters over numbers),
- proposing a distributed version of the system.
Overall, we believe that the proposed architecture has the potential to become an ideal solution for querying large Knowledge Graphs in low-hardware settings.
References
An Observation-based Approach Towards Self-managing Web Servers
Prashant Pradhan, Renu Tewari, Sambit Sahu
Networking Software and Services
IBM T. J. Watson Research Center
Hawthorne, NY 10532
{ppradhan,tewarir, ssahu}@us.ibm.com
Abhishek Chandra, Prashant Shenoy
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
{abhishek, shenoy}@cs.umass.edu
Abstract
As more business applications have become web-enabled, the web server architecture has evolved to provide performance isolation, service differentiation, and QoS guarantees. Various server mechanisms that provide QoS extensions, however, rely on external administrators to set the right parameter values for their desired performance. Due to the complexity of handling varying workloads and bursty traffic, configuring such parameters optimally becomes a challenge. In this paper we describe an observation-based approach for self-managing web servers that can adapt to changing workloads while maintaining the QoS requirements of different classes. In this approach, the system state is monitored continuously and parameter values of various system resources (primarily the accept queue and the CPU) are adjusted to maintain the system-wide QoS goals. We implement our techniques in the Apache web server and the Linux operating system. We first demonstrate the need to manage different resources in the system depending on the workload characteristics. We then experimentally demonstrate that our observation-based system monitors such workload changes and adjusts the resource parameters of the accept queue and CPU schedulers in order to maintain the QoS requirements of the different classes.
1 Introduction
1.1 Motivation
Current web applications have evolved from simple file browsing to complex tools for commercial transactions, online shopping, information gathering and personalized service. To accommodate this diversity, web servers have evolved into complex software systems with a three-tier architecture consisting of a front-end HTTP server, an application server containing the business logic components, and a back-end database server. The front-end HTTP (web) server in such cases performs a variety of tasks such as (a) dynamic HTML generation, (b) personalized page assembly using scripting languages (e.g., JSP), (c) SSL processing for secure transmission, (d) persistent HTTP protocol processing, to reduce connection setup overheads and improve end-user performance, and (e) communication with the application server components via servlets. In doing so, it interacts in complex ways with the underlying OS mechanisms that manage resources such as the CPU, memory, disk and the network interface. Another emerging trend is the growing popularity of web hosting services that collocate multiple web domains on the same host machine or a cluster and provide different levels of service to these domains based on various pricing options. In such environments, service differentiation
*This research was carried out when Abhishek Chandra was a summer intern at IBM T.J. Watson.*
and performance isolation become necessary for efficient operation.
Numerous mechanisms for service differentiation and performance isolation have been proposed in the literature. Such mechanisms for web servers include QoS-aware extensions for admission control[8], SYN policing and request classification[28], accept queue scheduling [2], and CPU scheduling [3]. These mechanisms enable a web server to differentiate between requests from different classes and provide class-specific guarantees on performance (for instance, by providing preferential treatment to users who are purchasing items at an e-commerce site over users who are merely browsing, or by providing better service to institutional investors over individual investors at a financial site). One limitation of these QoS mechanisms is that they rely on an external administrator to correctly configure various parameter values and set policies on a system-wide basis. Doing so not only requires a knowledge of the expected workload but also a good understanding of how various operating system and web server configuration parameters affect the overall performance. Thus, while these QoS mechanisms undoubtedly improve performance, they also exacerbate the problems of configuration and tuning—each mechanism provides one or more tunable "knobs" that the system administrator needs to deal with. More importantly, these mechanisms are not independent of one another—depending on the configuration, each mechanism can have repercussions on the behavior of others, which further complicates the configuration process. Furthermore, past studies have made contradictory claims about the utility and benefits of these mechanisms. For instance, one recent study has claimed that the (socket) accept queue is the bottleneck resource in web servers [2], while another has claimed that scheduling of requests on the CPU is the determining factor in web server performance [3]. Thus, it is not evident a priori as to which subset of QoS mechanisms should be employed by a web server and under what operating regions.
The increasing complexity of the web server architecture, the dynamic nature of web workloads [10, 13], and the interactions between various QoS mechanisms makes the task of configuring and tuning modern web servers exceedingly complex. It has been argued that the more complex the system, the greater are the chances of a mis-configuration and sub-optimal performance [4, 9]. To address this problem, in this paper, we develop an adaptive architecture to make web servers self-managing. By self-managing, we mean mechanisms to automate the tasks of configuring and tuning the web server so as to maintain the QoS requirements of the different service classes. The emphasis on manageability of computing systems has gained momentum in recent years with the ever increasing complexity of these systems—in fact, several researchers have argued that, in today's environments, the problems of manageability, availability and incremental growth have overshadowed that of the traditional emphasis on performance [12, 18].
1.2 Research Contributions
This paper focuses on the architecture of a self-managing web server that supports multiple QoS classes—a scenario where multiple virtual servers run on a single physical server or where certain classes of customers are given preferential service. Assuming such an architecture, we make three key contributions in this paper. (1) We conduct an experimental study using the Apache web server to identify bottleneck resources for different web workloads; our study illustrates how the bottleneck resource can vary depending on the nature of the workload and the operating region. (2) Based on the workloads in our study, we identify a small subset of resource control mechanisms—the incoming request queue scheduler and the CPU share-based scheduler—that are likely to provide the most benefits in countering the performance degradation. (3) We then present
an observation-based technique to automate the tasks of configuring and tuning the parameters of these OS mechanisms. A key feature of this technique is that it can handle multiple OS resources in tandem. Our architecture consists of techniques to monitor the workload and to adapt the server configuration based on the observed workload. The adaptation system can adjust to: (i) a change in the request load, (ii) the QoS requirements of the classes, (iii) the workload behavior and (iv) the system capacity. Since the system dynamically monitors and adjusts the parameters, it makes no underlying assumptions about the workload characteristics or the parameter behavior.
We implement our techniques into the Apache web server on the Linux operating system and demonstrate its efficacy using an experimental evaluation. Our results show that we can adjust dynamically to a change in workload, a change in response time goal and a change in the type of workload.
The rest of this paper is structured as follows. Section 2 presents our experimental study to determine the bottlenecks in the Apache request path. Section 3 discusses the architecture and kernel mechanisms used to support multiple classes of web requests. Section 4 presents our framework to configure and tune the web server. Section 5 presents the results of our experimental evaluation. Section 6 discusses related work, and finally, Section 7 presents our conclusions.
2 Analyzing the Bottlenecks in Web Request Processing
In this section, we examine the bottlenecks encountered in the processing of web requests. We use Apache as a representative example of a web server and subject it to a variety of different workloads. For each workload, we determine the bottlenecks in the request path at different operating regions. In what follows, we first present a brief overview of the software architecture employed by Apache before presenting our experimental results.
2.1 Architecture of the Apache Web Server
Apache employs a process-based software architecture. Apache spawns a pool of child processes at startup time, all of which listen on a common socket (typically, port 80). A newly arriving request is handed over to one of the children for further processing; the process rejoins the pool after it is done servicing the request and waits for subsequent requests. Apache can vary the size of the process pool dynamically depending on the load—it starts with a certain number of children and spawns additional processes as the load increases. The limit on the maximum number of children is determined by a statically defined parameter, MaxClients (this parameter imposes a limit on the number of concurrent Apache processes to prevent memory exhaustion and thrashing in the system). Once this limit is reached, no additional children are created and newly arriving requests must wait for an existing child to become idle before getting serviced. Apache can also terminate child processes when the load decreases, thereby reducing the number of idle processes in the system.
Next, we examine the control path of a web request. Since HTTP employs TCP as its underlying transport protocol, the client first establishes a TCP connection with the server. This is done using a three-way TCP handshake which is initiated by sending a TCP SYN packet to the server. Once the handshake is complete, the new connection is appended to the accept queue of the listening socket. The HTTP request waits in this queue until a child process accepts the connection. In case of HTTP/1.0, each request uses a separate TCP connection, whereas in HTTP/1.1, multiple HTTP requests can share a single connection (the connection is kept open for a timeout duration, during which multiple HTTP requests can be serviced).
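The role of the accept queue can be illustrated with a small Java sketch (a deliberately simplified, thread-based analogue of Apache's process pool; port, backlog and pool size are arbitrary): connections that complete the handshake wait in the listen backlog until a worker becomes free, which is exactly the queueing behavior analyzed below.

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Thread-based analogue of the accept queue + worker pool described above. */
public class AcceptQueueSketch {
    public static void main(String[] args) throws Exception {
        int backlog = 128;     // completed connections waiting to be accepted
        int maxClients = 50;   // concurrent workers, mimicking MaxClients
        ExecutorService pool = Executors.newFixedThreadPool(maxClients);
        try (ServerSocket server = new ServerSocket(8080, backlog)) {
            while (true) {
                Socket conn = server.accept(); // blocks; pending connections
                pool.execute(() -> {           // queue up in the backlog when
                    try (conn) {               // all workers are busy
                        conn.getOutputStream()
                            .write("HTTP/1.0 200 OK\r\n\r\n".getBytes());
                    } catch (Exception ignored) { }
                });
            }
        }
    }
}
```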
For each request, the Apache child process first parses the request, retrieves the requested object (from
the disk or the cache) and sends back a response. Dynamic HTTP requests involve additional CPU processing before a response can be generated. Thus, the servicing of each request involves a certain amount of CPU, disk and network I/O.
With this background, we present the results of our experimental study to determine the bottlenecks in the Apache request path.
2.2 Determining Web Server Bottlenecks
The testbed for our experiments consists of an unmodified Apache server running on a Pentium III PC with 512MB RAM and Redhat Linux 7.1. The client workload is generated using an off-the-shelf web workload generator, httperf [23], that can emulate various kinds of workloads (e.g., persistent HTTP, SSL encryption) and different request rates. All machines were interconnected by a 100 Mb/s switched Ethernet and the network was assumed to be lightly loaded in our experiments.
We instrumented the Linux kernel to measure various parameters that affect the performance of web requests, namely (i) the length of the socket accept queue and the time spent by an incoming request in the accept queue, (ii) the amount of CPU time spent in servicing a request, and (iii) the time spent by a request waiting in the CPU run queue. Other metrics such as the network transfer time and the end-to-end response time were measured at the client using httperf. Unless specified otherwise, all kernel and Apache configuration parameters were set to their default values. The only (kernel) parameter that was modified was the maximum length of the accept queue, which was increased from its default value of 128 to 65536 (this was done to avoid TCP SYN packet drops due to accept queue overflow at heavy loads).
For this setup, we examined the performance of Apache for the following workloads: (i) static web requests over non-persistent HTTP connections, (ii) static web requests over persistent HTTP connections, (iii) static requests using SSL encryption, and (iv) dynamic requests using Apache’s CGI scripting. Whereas the first two workloads are I/O-intensive, the third is both CPU- and I/O-intensive and the fourth is predominantly CPU-intensive. Due to the memory sizes on our machines, we observed that the OS buffer cache was able to easily cache popular files in memory, and hence, most requests are serviced directly from the cache and did not result in disk I/O. Since most requests are serviced from memory rather than from disk, we find that I/O time is independent of the load and depends only on the file size, and hence, do not report it in our results (this assumption does not hold for scenarios where, for instance, a web request triggers a query in a backend database server; however, such scenarios are outside the scope of this paper, given our focus on web server performance).
We now present our experimental results. Due to space constraints, we present detailed results only for two scenarios (persistent HTTP and SSL processing).
Static Web Requests using Persistent HTTP
In this experiment, we configured httperf to use persistent HTTP connections and to request multiple (static) files over the same connection. We increased the connection rate and observed its impact on the web server and client performance. As shown in Figure 1(a), at low loads, Apache can easily handle all incoming connections (and requests over those connections); requests do not incur any significant delays in the socket accept queue or the CPU run queue. Note that the persistent nature of each connection causes each Apache process to keep the client connection open for a timeout duration waiting for subsequent requests (which delays its return to the idle process pool). Hence, when the load increases, Apache spawns additional child processes to service newly arriving connections (since existing processes are servicing other connections). As the load increases, the MaxClients limit is reached eventually (MaxClients was set to 50 in this experiment). Beyond this
point, the accept queue delay increases rapidly and becomes the dominant factor of the total response time (this is because a newly arriving connection must now wait in the accept queue until an existing child process terminates a persistent connection). Figure 1(a) also shows that the CPU service time and the CPU run queue delay are relatively constant, indicating that most Apache processes are waiting for requests over persistent connections, rather than actively servicing requests. This indicates that the accept queue is the bottleneck resource in this scenario, while the CPU is under-utilized.
**Static Web Requests using SSL Encryption**
In this experiment, we configure httperf to request static files using SSL encryption over non-persistent HTTP connections. The SSL protocol involves public-key authentication and key exchange during connection setup, after which it uses symmetric key encryption for transmitting the data over the connection. Due to the computational overheads involved in encrypting data, this is a CPU-intensive workload. Like in our previous experiment, we increase the client request rate and measure its impact on server performance. Figure 1(b) depicts our results. The figure shows that the CPU run queue waiting times increase steadily with the load—the larger the CPU load, the greater is the time a request needs to wait in the run queue before it can be scheduled on the CPU (since the CPU is busy servicing other requests). The figure also shows that the CPU run queue delay dominates the server response time. Observe that the CPU service time of a request is independent of the load, since the time to service a request (e.g., encrypt data) depends only on the request size. The figure also shows that the accept queue delay is initially small and then increases rapidly beyond a certain load. This is because the CPU saturates at those loads, causing newly arriving requests to wait in the accept queue until an Apache process can be scheduled on the CPU to accept the connection. At very heavy loads, the MaxClients limit is reached, further adding to the accept queue delay. Thus, our experiment indicates that the CPU is the primary bottleneck in this scenario. Although the accept queue delays are significant, this is primarily due to the saturation of the CPU, rather than any shortcomings at the accept queue.
We performed two additional experiments that we do not report here due to space constraints. The first experiment involved requests for static web pages over non-persistent connections. Due to the memory sizes on our machine, the OS buffer cache was able to absorb most of the requests, resulting in few disk accesses; consequently, we found the CPU to be the bottleneck resource in this experiment. The second experiment involved dynamic HTML generation using CGI scripting; we found that executing CGI scripts is compute-intensive, causing the CPU to be a bottleneck.
Together, these experiments indicate that depending on the workload and the operating region, different resources can become bottlenecks in the request path. For the workloads that were examined and for our hardware configurations, we observed that the CPU and the accept queue were the primary bottlenecks. This indicates that a web server needs to intelligently detect these scenarios and manage these resources accordingly.
### 3 Adaptive QoS Architecture
Our experimental study in the previous section highlighted that different resources can become the bottleneck depending on the workload characteristics. Based on these insights, we chose a small set of kernel mechanisms to control these resources via dynamic resource scheduling. In this paper, we target the two resources—the accept queue and the CPU—that most affected server performance for our selection of workloads, to highlight the need for multi-resource adaptation. Observe that our goal is not to design new resource control mechanisms; rather, it is to pick existing mechanisms in current commercial or open-source operating systems and build an adaptive framework to parameterize and control them.
We assume that the web server supports multiple classes of requests (also referred to as service classes) each with its specified QoS requirement. In this paper we consider class-specific response time as the default QoS metric. Throughput is another metric that can be controlled, but discussion of such metrics is beyond the scope of this paper. To control the performance offered to requests within each class, we employ an adaptive QoS architecture that consists of three main components.
- **Kernel resource controllers:** The two resources, the socket’s accept queue and the CPU run queue, are controlled by a proportional-share scheduler to meet the performance goals of different service classes. Specifically, we use a weighted fair queuing scheduler for the accept queue, and the hierarchical start-time fair queuing (HSFQ) scheduler for the CPU. A SYN classifier is used to classify incoming TCP connections into their service classes.
- **Monitoring framework:** The monitoring framework continuously obtains measurements from the system for each resource, and each class, which are used by the adaptation engine. Examples of these measurements include per-class delays, request service times and resource utilizations.
- **Adaptation engine:** The adaptation engine uses an observation-based approach to adjust the resource allocations for each class based on the monitored performance and the desired QoS goal. The adaptation progresses on two levels—a local, per-resource level and a global one across resources.

Figure 2: Architecture for Adaptive QoS
Figure 2 illustrates the interactions between these components. The flow of control during the lifetime of the request is as follows. The kernel performs early de-multiplexing and classification of incoming TCP (SYN) packets and assigns each request to a service class. After a request is admitted and added to a class-based accept queue, the weighted fair queuing accept queue (WFQAQ) scheduler determines the order in which the waiting Apache processes accept the requests. After accepting a new request, each Apache process is attached to the corresponding CPU service class of the request and scheduled by the HSFQ CPU scheduler. Through the monitoring framework, the performance of each class is monitored continuously by the adaptation engine. In response to changing workload, the adaptation engine adjusts the shares assigned to each class in the accept queue and the CPU scheduler such that their QoS goals are met.
In what follows, we first describe the kernel mechanisms used in our adaptive QoS architecture and then describe the monitoring framework and the adaptation algorithms.
### 3.1 SYN Classifier
The SYN classifier uses network packet headers to classify incoming requests into different service classes. An approach for extending this kernel-level classification to include application headers is described in [28]. Since the majority of web requests use TCP as the underlying transport, the SYN classifier resides in the TCP/IP processing path. The classifier employs classification rules to determine the class to which an incoming connection belongs. The classifier also includes mechanisms for admission control via SYN policing; however, we do not focus on the admission control aspects in this paper. The classification rules, shown in Table 1, are based on the network 4-tuple (source and destination IP addresses and port numbers). In our prototype on Linux, the `iptables` command is used to insert and delete rules in the kernel packet filtering tables. These filters are maintained by the `netfilter` framework inside the Linux kernel [1].
| Filter | QoS Specification |
|--------|-------------------|
| 128.1.1.*, 80, *, * | Delay = 200 ms |
| 128.1.1.*, 21, *, * | Delay = 1 sec |
| *, *, 112.3.4.*, * | Thruput = 100 req/sec |
**Table 1: Classification Rules**
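To make the table lookup concrete, here is a small user-space analogue of the classifier's rule matching. It is illustrative only: the real classifier runs inside the kernel's TCP/IP path via netfilter, the interpretation of the 4-tuple as (server IP, server port, client IP, client port) is our assumption, and the class labels are hypothetical.

```python
# Illustrative user-space analogue of SYN classification (not kernel code).
import fnmatch

# Each rule maps 4-tuple wildcard patterns to a (hypothetical) service class,
# mirroring Table 1.
RULES = [
    (("128.1.1.*", "80", "*", "*"), "web-200ms"),
    (("128.1.1.*", "21", "*", "*"), "ftp-1s"),
    (("*", "*", "112.3.4.*", "*"), "thruput-100rps"),
]

def classify(conn):
    """Return the service class for a connection 4-tuple."""
    for patterns, service_class in RULES:
        if all(fnmatch.fnmatch(str(field), pattern)
               for field, pattern in zip(conn, patterns)):
            return service_class
    return "best-effort"  # unmatched connections fall into a default class

print(classify(("128.1.1.42", 80, "10.0.0.1", 51234)))  # -> web-200ms
```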
### 3.2 Accept Queue Scheduler
For a new incoming request, after the three-way TCP handshake is complete, the connection is moved from the SYN queue (called the partial-listen queue in a BSD-based stack) to the listening socket’s accept queue. Instead of a single FIFO accept queue for all requests, our architecture employs a separate accept queue for each service class. Requests in these queues are scheduled using a work-conserving weighted fair queuing (WFQAQ) scheduler. The scheduler controls the order in which requests are accepted from these queues for service by the web server processes. The scheduler allows a weight to be assigned to each class; the rate at which requests are accepted from a class is proportional to its weight. Thus, the weight setting of a class allows us to control its delay in the accept queue. As soon as an Apache process becomes idle, a request is dequeued from one of the class-specific accept queues in accordance with their weight assignments. Thus, the Apache process pool is not statically partitioned across classes. WFQAQ is a work-conserving scheduler—an Apache process never remains idle as long as some class has a pending request in its accept queue.
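The sketch below shows one plausible realization of such a work-conserving weighted fair dequeue, using start-time fair queuing tags with every accepted connection counted as one unit of work. The paper does not specify the WFQAQ internals at this level of detail, so the tagging scheme and tie-breaking here are our assumptions.

```python
from collections import deque

class WFQAcceptQueue:
    """Work-conserving weighted fair dequeue across per-class accept queues
    (a simplified, start-time fair queuing style sketch)."""

    def __init__(self, weights):
        self.weights = dict(weights)                 # class -> weight
        self.queues = {c: deque() for c in weights}  # per-class FIFO queues
        self.finish = {c: 0.0 for c in weights}      # per-class finish tags
        self.vtime = 0.0                             # global virtual time

    def enqueue(self, cls, conn):
        self.queues[cls].append(conn)

    def dequeue(self):
        """Called whenever a server process becomes idle."""
        backlogged = [c for c, q in self.queues.items() if q]
        if not backlogged:
            return None
        # Serve the backlogged class with the smallest start tag; a class
        # with weight w is thus served at a rate proportional to w.
        def start(c):
            return max(self.finish[c], self.vtime)
        cls = min(backlogged, key=start)
        self.vtime = start(cls)
        self.finish[cls] = self.vtime + 1.0 / self.weights[cls]
        return cls, self.queues[cls].popleft()

q = WFQAcceptQueue({"gold": 2, "bronze": 1})
for i in range(6):
    q.enqueue("gold" if i % 2 else "bronze", i)
print([q.dequeue()[0] for _ in range(6)])
# -> ['gold', 'bronze', 'gold', 'gold', 'bronze', 'bronze']
```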
There are other alternatives for managing the accept queue; we discuss these alternatives briefly and contrast them with our WFQAQ scheduler. The first alternative is to employ a fixed static-priority accept queue [28] that always services a higher priority request before servicing a lower priority request. One problem with a prioritized scheduler is that lower priority classes can be starved by the higher priority classes. As shown in Table 2(c), the lower priority class (C3) is affected by the request rate of the higher priority classes (C1 and C2). In contrast, a proportional-share scheduler like WFQAQ provides performance isolation across classes, since an accept queue buildup in one class does not affect the performance of other classes. If the accept queue of a class becomes full, further requests are dropped, as in the traditional (single FIFO) accept queue scenario. This effect is shown in Table 2(a), which lists the request rate and observed delay for three classes scheduled by a WFQAQ scheduler. Each class is given the same share, i.e., the weight assignments are 1:1:1. Note that even when classes C2 and C3 increase their rates from 375 req/sec to 400 and 425 req/sec, respectively, the delay of class C1 remains unchanged. Classes C2 and C3 eventually start showing errors in the form of client drops, since they affect only their own queue build-up.
The second problem with a prioritized scheduler is that it can only adapt to changing loads in a very coarse-grained manner. Table 2(c) shows that for a given priority ordering among classes, a prioritized scheduler provides exactly one combination of delays that will be seen by the classes based on the offered load of each class. In contrast, a WFQAQ scheduler can offer a wide range of delays for the classes by tuning their share allocation. Thus, the schedulability region of a WFQAQ scheduler is larger than that of a prioritized scheduler. Table 2(b) shows the delay values provided by WFQAQ for different weight assignments to 3 service classes, each receiving requests at a rate of 400 requests/sec.
Instead of static priority, another technique is to statically partition the Apache server processes among the service classes and rely on a feedback system to dynamically adjust the number of processes assigned to a class. Although the approach can adapt to changing loads, its primary drawback is that it is non work-conserving—an Apache process assigned to a class can idle if its queue is empty, even though other classes may have pending requests.
A third approach is to employ deadline-based scheduling of tasks (e.g., EDF). This approach assigns a deadline to each request and schedules requests in increasing order of deadlines. An EDF scheduler by itself, however, does not provide performance isolation across classes, and additional mechanisms are necessary to limit the utilization of each class.
### 3.3 CPU Scheduler
Traditionally, the CPU scheduler on a Unix-based system schedules all application processes using a time-shared, priority-based policy. The scheduling priority of a process depends on its CPU usage, its I/O activity, and its assigned priority.
To achieve the desired response time goal of each class and to provide performance isolation, we use a hierarchical proportional-share scheduler that dynamically partitions the CPU bandwidth among the classes. Specifically, we use the hierarchical start-time fair queuing (HSFQ) [17] scheduler to share the CPU bandwidth among the various classes. HSFQ is a hierarchical CPU scheduler that fairly allocates processor bandwidth to different service classes and uses a class-specific scheduler for processes within a class. The scheduler uses a tree-like structure in which each process (or thread) belongs to exactly one leaf node. The internal nodes implement the start-time fair queuing (SFQ) scheduler, which allocates weighted fair shares, i.e., the bandwidth allocated to a node is in proportion to its weight. Unused bandwidth is redistributed to other nodes according to their weights. Two properties of SFQ make it a desirable proportional-share scheduler for service differentiation: i) it does not require the CPU service time of a process to be known a priori, and ii) it provides provable guarantees on the fairness, delay, and throughput received by each process (or thread).
In our implementation, we use only a 2-level hierarchy (consisting of the root and various service classes). On accepting a web request, each web server process is dynamically attached to the corresponding service class; the CPU share of the class is determined dynamically based on the requirements of the class and the current workload.
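As a rough illustration of this two-level hierarchy, the sketch below splits CPU bandwidth proportionally among classes at the root and then among the processes attached to each class. It deliberately ignores the work-conserving redistribution of unused bandwidth that HSFQ performs, and all names are hypothetical.

```python
def hsfq_shares(class_weights, procs_per_class):
    """Two-level proportional split of CPU bandwidth (static snapshot only)."""
    total = sum(class_weights.values())
    shares = {}
    for cls, weight in class_weights.items():
        procs = procs_per_class.get(cls, [])
        if not procs:
            continue  # in real HSFQ, this bandwidth is redistributed
        class_share = weight / total                 # root-level SFQ split
        for proc in procs:
            shares[proc] = class_share / len(procs)  # intra-class split
    return shares

# Class A (weight 2) has two attached Apache processes; class B (weight 1), one.
print(hsfq_shares({"A": 2, "B": 1}, {"A": ["a1", "a2"], "B": ["b1"]}))
# -> each process ends up with one third of the CPU
```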
One question that arises regarding CPU control is whether share-based CPU scheduling is needed at all if the number of processes attached to a class can be adjusted dynamically. A direct correlation between the number of processes in a class and the CPU bandwidth that the class receives does not always hold. While this may be a valid assumption for small file accesses and single-tiered systems, it does not hold in general when processes have different service time requirements, different disk I/O idling times, or are kept alive by HTTP/1.1 for connection re-use across requests. For more fine-grained performance control, a share-based CPU scheduler is required.
### 3.4 Monitoring Framework
The monitoring framework continuously obtains measurements on the state of each resource and each class, which are used by the adaptation engine. These measurements can be broadly categorized into per-class (local) measurements and resource-wide (global) measurements. Examples of local measurements include per-class delays in a resource, per-class request arrival rates, and the work required by a class’s requests in a resource. Examples of global measurements include resource utilization and global queue lengths.
The monitoring subsystem is essentially a set of kernel mechanisms to extract measurements from each of the resources managed by the adaptation framework. As an example, we briefly describe the per-class delay measurement implemented for the accept queue and the CPU run queue. In the case of the accept queue, when a connection is enqueued, we timestamp its arrival in the associated socket data structure. When TCP dequeues a request from the accept queue, as dictated by the accept queue scheduler, we timestamp its departure and compute the time spent in the accept queue. This measurement is aggregated in a running counter together with the number of requests seen by the accept queue. In a similar manner, for the CPU, we measure the time spent by a process waiting in the run queue and running on the CPU.
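A user-space analogue of this timestamp-based accounting might look as follows; in the kernel, the arrival timestamp lives in the socket data structure rather than in a dictionary, so this is purely illustrative.

```python
import time

class DelayMonitor:
    """Running per-class delay accounting, mirroring the kernel counters."""

    def __init__(self):
        self.arrival = {}       # request id -> enqueue timestamp
        self.total_delay = 0.0  # aggregated waiting time
        self.count = 0          # number of requests seen

    def on_enqueue(self, req_id):
        self.arrival[req_id] = time.monotonic()

    def on_dequeue(self, req_id):
        self.total_delay += time.monotonic() - self.arrival.pop(req_id)
        self.count += 1

    def mean_delay(self):
        return self.total_delay / self.count if self.count else 0.0
```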
A system call interface allows the adaptation algorithm to perform monitoring as well as resource control. We added an `ioctl`-like system call, `sys_multisched()`, to the Linux kernel for this purpose. `sys_multisched()` takes as arguments a command and command-specific arguments. The commands allow local class-specific values and global resource values to be queried or updated. For a local class-specific measurement, the call arguments identify the command, the resource, the resource-specific metric of interest, and the class identifier; for global measurements, they identify only the resource and the metric of interest.
Operationally, two timers are used: a monitoring timer and an adaptation timer. The monitor records measurements at every monitoring instant, or "tick"; the time interval per tick, $T_m$, is configurable. The time interval between adaptation instants is $T_a = kT_m$, i.e., an adaptation instant occurs after every $k$ monitoring instants (ticks). The values measured over the $k$ ticks are averaged to give the current value at the start of a new adaptation instant, and this average is combined with the value from the previous adaptation instant using exponential averaging with a weighting factor $\alpha$. For a resource parameter $a$ whose exponentially averaged value in the last cycle was $a_{prev}$, given the new set of values $a_1, a_2, \ldots, a_k$ at the start of the current adaptation instant, the new value $a_{cur}$ is given by
$$a_{cur} = \alpha \cdot a_{prev} + (1 - \alpha) \cdot \frac{\sum_{i=1}^{k} a_i}{k}, \quad (0 \leq \alpha \leq 1)$$
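A direct transcription of this update rule (the function name is ours):

```python
def update_average(a_prev, samples, alpha):
    """Combine the previous smoothed value with the k per-tick samples
    collected since the last adaptation instant (0 <= alpha <= 1)."""
    return alpha * a_prev + (1 - alpha) * sum(samples) / len(samples)

# e.g. previous smoothed delay 0.12 s, four new ticks, alpha = 0.5
print(update_average(0.12, [0.10, 0.11, 0.14, 0.13], 0.5))
# -> 0.12 (up to float rounding)
```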
### 4 Adaptation Engine
The adaptation engine builds upon the monitoring, scheduling and control infrastructure described in the previous section. Based on the measured values returned by the monitoring agent, the adaptation algorithm computes and sets new shares and weights in the schedulers in order to meet the QoS goals of each class.
### 4.1 Adaptation Techniques
There are three general approaches that can be employed to build an adaptation framework: (i) a control theoretic approach with a feedback element, (ii) an open-loop approach based on a queuing model of the system, and (iii) an observation-based adaptive system that uses run-time measurements to compute the relationship between the resource parameters and the QoS goal.
A control theoretic approach is a powerful technique in general. However, most solutions that apply control theory to web server adaptation require training the system at different operating points to determine the control parameters for a given workload. Moreover, these control parameters must be re-computed when the workload characteristics change, e.g., from CPU-bound SSL requests to network bandwidth-bound multimedia requests. Second, these solutions assume a linear relationship between the resource parameters and the QoS goals in all operating regions. While linear assumptions may hold in practice for throughput control, they cannot be generalized to other QoS goals such as response time. The assumption that the reciprocal of the response time can be modeled by linear behavior does not capture the delay relationship correctly. The response time, in general, depends on the utilization of the system and the scheduling policy, and is therefore difficult to capture with a linear model across the full range of system utilization values.
On the other hand, an open-loop system, for example one based on a queuing model, is difficult to solve analytically for complex arrival patterns and service time distributions. Queuing models are useful for steady-state analysis but do not handle transients accurately. Simple approximations of arrival and service time distributions lead to an incorrect choice of parameters. Moreover, not all schedulable resources can be modeled as queuing systems.
We chose an observation-based approach for adaptation as it is best suited to handling varying workloads and non-linear behaviors. Figure 3 depicts how delay may vary with the share assigned to a class (the share of a class translates to its resource utilization). The figure illustrates that (i) the delay-share relationship may change with the request arrival rate $\lambda_i$ (as depicted by the two $\lambda$ curves), and (ii) the delay-share relationship is non-linear even when the request rate remains the same. The basic idea in our observation-based approach is to approximate the non-linear relationship between the delay of a class and its share (or weight) by multiple piece-wise linear segments. The algorithm continuously tracks the current operating point of each class on its delay-share curve. The observation-based approach depends on run-time adaptation, and hence is well-suited to highly variable and dynamic workloads.
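The paper does not spell out how the slope estimate is maintained; a natural reading, sketched below as an assumption, is a finite-difference update from the last two observed operating points:

```python
def update_slope(prev_share, prev_delay, new_share, new_delay, old_slope):
    """Re-estimate the local slope of a class's delay-share curve from
    its last two operating points; keep the old estimate if the share
    did not change."""
    if new_share == prev_share:
        return old_slope
    return (new_delay - prev_delay) / (new_share - prev_share)
```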
The observation-based adaptation proceeds on two levels—a local per-resource adaptation and a global system-wide adaptation. The next two sections describe the adaptation algorithm in detail.
### 4.2 Resource-specific Local Adaptation
The local adaptation algorithm of each resource needs to ensure that each class achieves its QoS (in this case, response time) goal for that resource. For each class $i$, let $D_i$ represent its desired response time and $d_i$ its observed average delay in that resource. Furthermore, for each class $i$, the algorithm maintains an estimate of the slope $m_i$ of its delay-share (or delay-weight) curve at the current operating point. The adaptation algorithm adapts the share of each class, $w_i$, such that the delay value $d_i$ lies in the range $[(1 - \epsilon)D_i, (1 + \epsilon)D_i]$. The adaptation proceeds in the following four steps.
**Determining class state:** At every adaptation instant, the local adaptation engine computes the current value of $d_i$ from the monitored values, as described in Section 3.4.
At every adaptation instant, the algorithm checks whether each class is missing its local response time goal by comparing the values of $d_i$ and $D_i$. A class that is missing its goal, i.e., $d_i \geq (1 + \epsilon)D_i$, is called an underweight class. Similarly, a class that is more than meeting its goal, i.e., $d_i \leq (1 - \epsilon)D_i$, is called an overweight class. Classes whose delay lies within the range of the desired delay are called balanced classes. The underweight classes are ordered to determine the most underweight class. The algorithm tries to borrow shares from the overweight classes such that the most underweight class becomes balanced. This redistribution step, however, must ensure that the overweight classes are not overcompensated and made underweight as a result.
**Redistribution:** For redistributing the share across classes, the algorithm needs to quantify the effect of changing the share allocation of a class on its delay. This is computed by using the slope estimate $m_i$, at the current operating point on the delay-share curve. The total extra share needed by an underweight class $i$ is given by
$$\Delta w_i = \frac{(d_i - D_i)}{m_i}$$
as shown in Figure 3. The extra share required by the underweight class is not distributed equally among the overweight classes. Instead, the amount of share that an overweight class can donate is based on its sensitivity to a change in share. Two factors affect the sensitivity of an overweight class: (i) its delay slack, given by \((D_j - d_j)\), which measures how much better off it is than its desired delay goal, and (ii) the current slope of its delay-share curve, \(m_j\), which measures how fast the delay changes with a change in share. Based on these factors, the surplus \(s_j\) for an overweight class \(j\) is given by
\[
s_j = \frac{(D_j - d_j)}{m_j}
\]
The surplus of each overweight class is proportionally donated to reduce the likelihood of an overweight class becoming underweight. The donation, \(donation_j\), of an overweight class is a fraction of the required extra share weighted by its surplus, and is given by
\[
donation_j = \Delta w_i \cdot \left(\frac{s_j}{\sum_k s_k}\right)
\]
Before committing these donations, we must check that the new delay value does not make the overweight class miss its delay goal. Based on the slope \(m_j\) we can predict that the new delay value of the overweight class would be given by
\[
d_j' = d_j + m_j \cdot donation_j
\]
If the new delay value misses the delay goal, i.e., \(d_j' \geq (1+\epsilon)D_j\), the donation is clamped down to ensure that the new delay is within the range of the desired delay. The clamped donation is given by
\[
\mathit{clamped\_donation}_j = \frac{(1 - \epsilon) \cdot D_j - d_j}{m_j}
\]
The actual donation of an overweight class is, therefore,
\[
\mathit{actual\_donation}_j = \min\{\mathit{donation}_j, \mathit{clamped\_donation}_j\}
\]
The total donation available to the underweight class \(i\), which is the sum of the actual donations of all the overweight classes, i.e., \(\sum_j \mathit{actual\_donation}_j\), is never greater than the required extra share \(\Delta w_i\).
One underlying principle of the redistribution step is that the overweight classes are never penalized more than necessary. This is important because the slope measurements are accurate only in a localized operating region and could otherwise yield a large, but incorrect, surplus estimate. When workloads change gradually, the extra share requirement of an underweight class is likely to be small, making the proportional donations of the overweight classes smaller as well.
**Gradual adjustment:** Before committing the actual donations to the overweight and underweight classes, the algorithm relies on gradual adjustment to maintain stability. This is another hook to ensure that there are no large donations by the overweight classes; a large donation could change the operating region of the classes, invalidating computations based on the current slope value. Hence, we perform gradual adjustment by committing only a fraction \(\beta\) \((0 \leq \beta \leq 1)\) of the computed actual donation:
\[
\mathit{commit\_donation}_j = \beta \cdot \mathit{actual\_donation}_j
\]
The algorithm commits the new shares (or weights) to all the involved classes by using the resource control hooks described in Section 3.4.
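Putting the redistribution and gradual adjustment steps together, a minimal sketch of one local adaptation step follows. It implements the equations above, assuming that $m$ denotes the magnitude of the local slope (the delay increase per unit of share donated); the rule used to pick the most underweight class is our own choice.

```python
def redistribute(classes, eps, beta):
    """One local adaptation step for a single resource (illustrative).

    classes: name -> {"w": share, "d": measured delay,
                      "D": target delay, "m": local |slope|}
    """
    under = [c for c, s in classes.items() if s["d"] >= (1 + eps) * s["D"]]
    over = [c for c, s in classes.items() if s["d"] <= (1 - eps) * s["D"]]
    if not under or not over:
        return
    # Most underweight class: largest relative goal violation (our choice).
    i = max(under, key=lambda c: classes[c]["d"] / classes[c]["D"])
    ci = classes[i]
    dw = (ci["d"] - ci["D"]) / ci["m"]          # extra share needed
    surplus = {j: (classes[j]["D"] - classes[j]["d"]) / classes[j]["m"]
               for j in over}
    s_total = sum(surplus.values())
    for j in over:
        cj = classes[j]
        donation = dw * surplus[j] / s_total    # proportional donation
        # Clamp if the predicted new delay would miss the donor's goal.
        if cj["d"] + cj["m"] * donation >= (1 + eps) * cj["D"]:
            donation = min(donation,
                           ((1 - eps) * cj["D"] - cj["d"]) / cj["m"])
        commit = beta * donation                # gradual adjustment
        cj["w"] -= commit
        ci["w"] += commit

classes = {
    "c0": {"w": 0.5, "d": 0.30, "D": 0.20, "m": 1.0},  # underweight
    "c1": {"w": 0.5, "d": 0.10, "D": 0.20, "m": 1.0},  # overweight
}
redistribute(classes, eps=0.05, beta=0.5)
print(classes["c0"]["w"], classes["c1"]["w"])  # -> roughly 0.55 and 0.45
```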
**Settling:** After committing the donations, the adaptation algorithm delays the next adaptation instant, by scaling the adaptation timer, to allow the effect of the changes to settle before making further adaptation decisions. We keep the adaptation cycle short during stable states to increase responsiveness, and lengthen it only when settling is required after a change, to improve stability.
The committed donations change the current operating points of the involved classes along their delay-share curves. At the next adaptation instant, the algorithm measures the actual observed change in the per-class delays, and uses these values to obtain updated values of the slope \( m_i \) for each class. The updated \( m_i \) values are used in the above adaptation equations the next time adaptation is performed.
### 4.3 System-Wide Global Adaptation
The system-wide global adaptation algorithm maps the overall response time goal of each class to local response time goals for each resource used by that class. One approach is to use the same value for both the system-wide response time goal and the local per-resource goal. Although this is a reasonable choice for initial values, it can reduce performance when different classes have different bottleneck resources. The main intuition behind our utilization-based heuristic for determining local goals is to give a class a more relaxed goal in its bottleneck resource, i.e., the resource where the class's requirements are high relative to the resource capacity.
To determine the per-class resource utilizations, the global adaptation engine uses, at every adaptation instant, the monitored values of the work \( W_{i,j} \) required by each class \( i \) at each resource \( j \), and the total capacity \( C_j \) of each resource \( j \). While the capacity may be a fixed constant (e.g., MIPS) in the case of the CPU, for the accept queue it is the measured process regeneration rate of the web server, i.e., the rate at which connections are accepted from the accept queue.
Let \( D_i \) be the global response time goal of class \( i \), and \( D_{i,j} \) the local response time goal of class \( i \) in resource \( j \). The sum of the local response time goals should equal the system-wide goal. The local goal depends on the utilization \( u_{i,j} \) of class \( i \) in resource \( j \), which is given by
\[
u_{i,j} = \frac{W_{i,j}}{C_j}
\]
Using the utilization value, the global response time goal is proportionally allocated between the resources, to give the local response time goals for each class, i.e.,
\[
D_{i,j} = D_i \cdot \left( \frac{u_{i,j}}{\sum_k u_{i,k}} \right)
\]
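A minimal sketch of this utilization-based splitting; the dictionary layout is our own assumption, with the work and capacity values standing in for the monitored quantities:

```python
def split_goals(global_goal, work, capacity):
    """Split each class's global delay goal across resources in
    proportion to its per-resource utilization u[i][j] = W[i][j] / C[j]."""
    goals = {}
    for cls, per_resource in work.items():
        util = {r: w / capacity[r] for r, w in per_resource.items()}
        total = sum(util.values())
        goals[cls] = {r: global_goal[cls] * u / total
                      for r, u in util.items()}
    return goals

# A CPU-heavy class: most of its 0.2 s budget is assigned to the CPU.
print(split_goals({"gold": 0.2},
                  {"gold": {"cpu": 8.0, "accept": 2.0}},
                  {"cpu": 10.0, "accept": 10.0}))
# -> {'gold': {'cpu': 0.16, 'accept': 0.04}} (up to float rounding)
```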
A utilization-based deadline splitting approach has also been used in [16]; however, their optimization goal is to balance resource utilization. Our intent, instead, is to examine the workload of each class in isolation and relax the goal in the bottleneck resource for that class.
### 5 Experimental Evaluation
In this section we evaluate the effectiveness of our system’s per-resource and global adaptation algorithms in providing response time guarantees under varying workload conditions. We first demonstrate adaptation of the two system resources—accept queue and CPU—in isolation. We study adaptation behavior for workloads with both deterministic and Poisson request arrival distributions.
Deterministic workloads do not generate significant queuing delays in systems that are not overloaded. With such workloads, the predominant delay is the service time, which depends on the resource share assigned to each class. Such workloads are useful for analyzing preemptively scheduled resources like the CPU, but not for resources like the accept queue, where the only delay is caused by queuing. Deterministic workloads allow us to demonstrate the effectiveness of the adaptation algorithm in controlling delays by properly scaling per-class resource shares. Poisson-distributed workloads, on the other hand, which are more representative of real-world workloads, allow us to demonstrate the effectiveness of the algorithm in managing queuing delays. Such delays are relevant for both the CPU and the accept queue.
We demonstrate the adaptation behavior of the observation-based approach for: i) changes in workload arrival rates that shift the operating region, ii) changes in the response time goals of the classes, which can change within a resource based on global system state, and iii) changes in workload characteristics that shift the resource bottlenecks.
After evaluating adaptation for each resource along the above dimensions, we evaluate system-wide global adaptation that implements the adaptation machinery for both resources, and adjusts resource allocations in the appropriate resource depending upon the current system workload, current resource utilizations, and the global response time goals.
### 5.1 Experimental Testbed
The experimental testbed consists of a server machine running a kernel with the adaptation mechanisms and algorithms, and two client machines that generate workload. The server is a 660 MHz P-III machine with 256 MB RAM and runs Linux 2.4.7. Each client machine is a 450 MHz P-II with 128 MB RAM, also running Linux 2.4.7. The machines are connected by a 100 Mbps Ethernet. The server runs Apache 1.3.19 with SSL support enabled. The MaxClients parameter of Apache was set to 150 processes.
The server kernel was modified to implement monitoring, scheduling and control mechanisms for the accept queue and the CPU, as discussed in Section 3. These mechanisms form the building blocks for the adaptation algorithm described in Section 4.
The workload generator used at the clients was httperf [23]. Httperf was chosen because it is an open-loop workload generator that not only allows request rates to be specified as a parameter, but also allows generation of deterministic as well as randomly distributed workloads. To stress different resources in the system, we use two kinds of workloads:
- **CGI workload**: In this workload, a CGI script blocks for a variable time duration before returning a response. This models blocking on a back-end database request, which reduces the Apache process regeneration rate, thereby stressing the accept queue without loading the CPU.
- **SSL workload**: The SSL workload models a CPU-intensive workload, which does not stress other resources in the system for moderate request rates.
In the experiments that follow, the monitoring framework records measurements every system "tick", whose duration is set to 5 seconds. For deterministic workloads, adaptation is triggered every 10 ticks in the stable state. For Poisson workloads, where the delays show significantly more deviation about their mean, adaptation is triggered every 40 ticks to avoid over-reacting to transient delays. To allow the system to settle after a share change, the adaptation interval is increased by a factor of 2.
### 5.2 CPU Adaptation
For evaluating the adaptation behavior of the CPU, we chose SSL requests as the CPU-intensive workload. The clients request an SSL-encrypted file from the server at a given rate. At the server, response time goals are specified for two classes. In each experiment, we start with an equal share allocation to each class. Figure 4 illustrates the results of CPU share adaptation with a varying workload request rate and a deterministic arrival distribution. The CPU target delay for both classes was 0.1 seconds. Clients of both classes generate a combined aggregate workload of 12 SSL requests/sec. The fraction of requests coming from each client was varied from 1:1 to 1:2 to 1:1 to 2:1, with the transitions occurring at 100 ticks, 500 ticks, and 900 ticks, respectively. In other words, the class pair had request rates of (6 req/sec, 6 req/sec) from 0 to 100 ticks, (4 req/sec, 8 req/sec) from 100 to 500 ticks, (6 req/sec, 6 req/sec) from 500 to 900 ticks, and (8 req/sec, 4 req/sec) from 900 to 1200 ticks. Figure 4(a) plots the average per-class delays over time, and shows that adaptation was successfully triggered in each case such that the response time of each class was close to its goal. Figure 4(b) plots the relative shares assigned to each class. As the figure shows, the share of class 1 was increased at the first transition to handle its increased load. This share could be borrowed from class 0 because it had a reduced load. When the request rates were balanced again at the second transition, the share of class 0 was increased to re-balance the previous share setting. Finally, to handle the increased load of class 0, its share was increased at the expense of class 1. Thus, the figure demonstrates the gradual share adaptation performed by the algorithm in the CPU scheduler.
Figure 5 illustrates the results of CPU share adaptation with varying response time goals and a deterministic arrival distribution. Both clients send requests at the rate of 6 SSL requests/sec. Initially, the goals of both classes were set to be equal. After 100 ticks, the response time goals of classes 0 and 1 were changed to 0.05 seconds and 0.15 seconds, respectively. After 500 ticks, the response time goals of the two classes were reversed. Note that this reversal causes a large relative change in the response time goals. We use this to stress the adaptation algorithm and verify that large changes do not send the system into oscillations. Figure 5(a) plots the average per-class delays and demonstrates the adaptation to the changes, whereas Figure 5(b) shows the CPU share adjustments performed by the adaptation algorithm.
The above experiments used a deterministic workload to show that the adaptation algorithm can adjust shares to handle changes in request rates and target delays. Next, we study the effectiveness of the adaptation algorithm in managing queueing delays, using a workload with Poisson request arrivals. Both clients generate requests whose arrivals are Poisson distributed with a mean of 6 requests/sec. During the first 200 ticks, the queue length is allowed to settle, and adaptation does not trigger during this period. Then, at 200 ticks, class 0 is given a goal of 0.25 seconds, whereas class 1 is given a goal of 1 second. At 600 ticks, these goals are reversed, which is again a large relative change. Figure 6 shows the adaptation results. Figure 6(a) plots the average delays that are seen by the adaptation algorithm while making adaptation decisions. The weight adjustments made by the algorithm are shown in Figure 6(b). Note that the share adjustment made by the algorithm at the second transition is larger and faster than at the first transition. The reason is that at the second transition, class 0, which has the lower delay, has more slack for donating share to class 1.
### 5.3 Accept Queue Adaptation
For evaluating accept queue adaptation behavior, we use the CGI workload described earlier. Again, in each experiment we start with an equal share allocation to each class. Note that since the only kind of delay in the accept queue is queuing delay, only workloads with Poisson-distributed arrivals are relevant. Figure 7 shows the accept-queue share adaptation for varying response time goals. Both classes of clients generate requests whose arrivals are Poisson distributed with a mean of 24.6 requests/sec. During the first 400 ticks, the queue length is allowed to settle. During this period, the response time goal is kept at a high value for both classes, so that adaptation does not trigger. Then, at 400 ticks, class 0 is given a goal of 0.05 seconds, whereas class 1 is given a goal of 0.15 seconds. At 900 ticks, a large relative change is made by reversing these goals. Figure 7(a) plots the average per-class delays and Figure 7(b) shows the accept queue share adjustments. As the graphs show, the adaptation algorithm changes the shares of the classes to meet their delay goals. We do not show the initial 400 ticks of the experiment, as no adaptation takes place there.
### 5.4 System-Wide Adaptation
In this experiment, we demonstrate the combined adaptation of both resources when a change in the type of workload shifts the bottleneck resource.
For the experiment shown in Figure 8, the clients alternate between generating CGI and SSL workloads. To keep the delay values in each resource comparable, we use a combination of an SSL workload with deterministic arrivals and a CGI workload with Poisson arrivals. Figures 8(a) and (b) plot the average CPU delay and the average accept queue delay respectively, for each class.
The experiment proceeds in three phases.
From 0 to 400 ticks, the clients generate SSL requests at the rate of 6 requests/sec. No adaptation is triggered for the first 100 ticks to allow the system state to stabilize. At 100 ticks, the global response time goal of class 0 is set to 0.05 seconds and that of class 1 is set to 0.15 seconds. These global target delays are kept fixed for the rest of the experiment. As seen in these figures, the accept queue delay is negligible (around 0.002 secs) for the first 400 ticks since the workload is CPU-intensive. Hence, the entire delay budget is available to the CPU. As the graph shows, the CPU shares adapt to provide each class with its target delay value.
Between 400 and 800 ticks, the clients switch from the SSL workload to a CGI workload with a request rate of 24.6 requests/sec each. This reduces the CPU delay to a negligible value (around 0.0002 seconds) but ramps up the accept queue delay. Most of the delay budget of each class is now available to the accept queue. The accept queue adaptation algorithm responds by adjusting shares to achieve the target delays.
Finally, from 800 to 1200 ticks, the clients switch back to the SSL workload, making the CPU the bottleneck resource again. Moreover, the request rates of the clients are changed to 4 requests/sec and 8 requests/sec, respectively. Once again, as shown in the graphs, the accept queue delay becomes negligible while the CPU scheduler parameters are adjusted to help the classes achieve their goals.
This experiment demonstrates the ability of the system to choose the appropriate resource to adapt with changes in the type of workload, and to trigger the appropriate local adaptation to meet per-class response time goals.
### 6 Related Work
Several approaches for self-managing systems have been proposed in the literature in the context of storage systems [6, 22, 27], general operating systems [25], network services [11], etc. Our focus is to design adaptive techniques to make web servers self-managing while providing QoS guarantees to various customer classes.
Recently, several research efforts have focused on the design of adaptive web servers. A control theoretic approach for adaptation has been proposed in [2, 21, 30]. This approach involves a training phase that uses a given workload to perform system identification, based on which a controller is designed that assumes a linear relationship between the QoS metric and the scheduler parameters. Unlike these efforts, we employ an alternative, observation-based approach for adaptation. Since delay is not linearly related to the share parameters of proportional-share schedulers, and the system model changes with variations in the workload, we perform adaptation by measuring the system state on a continual basis and adapting based on the current operating region. Thus, system identification is an ongoing process in our system; while we assume linearity around a particular operating point, the operating region as a whole is assumed to be non-linear.
A number of recent and ongoing research efforts have looked at various aspects of providing QoS support for web servers. WebQoS [8] is a middleware layer that provides admission control and service differentiation in user space. Unlike the WebQoS effort, the focus of our work is not to design new scheduling or resource management mechanisms per se; rather, it is to design an adaptive framework to effectively parameterize existing mechanisms. An adaptive mechanism for admission control for web servers is described in [19]. Goal-based CPU scheduling using coarse-grained resource allocation techniques for meeting service level agreements has been studied in the context of WLM [5]. In contrast, our work focuses on the combined fine-grained adaptation of multiple resource allocations, both in terms of resource units and time scale.
To achieve performance guarantees on a web server, several research efforts have developed predictable resource management mechanisms and techniques for the host operating system. Resource Containers [7] is a kernel mechanism for accurate accounting of resource usage that can be used for service differentiation on a web server. SFQ [17], BVT [15], and SMART [24] are predictable scheduling algorithms that can be employed as basic scheduling mechanisms in the kernel. Kernel mechanisms for early classification and for managing accept queue delay have been proposed in [14, 28]. Our work is complementary to the development of such mechanisms; in fact, we assume the existence of such mechanisms and show how to automate the task of parameterizing them to achieve self-manageability in the system.
Figure 8: Delays for system-wide adaptation for the CPU and the accept queue.
Previous work on resource management for web servers has typically focused on individual resources. A CPU scheduling algorithm to dynamically distribute CPU bandwidth to Apache processes is proposed in [3]. Connection setup delay has been identified as the bottleneck in [21], which proposes a scheduling scheme to manage the accept queue. A key contribution of our work is showing the need for managing multiple resources, and developing an adaptation technique for controlling multiple resources dynamically.
Many research efforts have looked at resource control from the application perspective. [20] proposes a mechanism to guarantee end-to-end delays for a periodic real-time application with well-defined stages. SEDA [29] is a framework for designing applications that allows controlled resource allocation for each application stage. In [26], a feedback-driven scheme has been proposed that uses application-specific indicators to determine the resource allocation. Most of these approaches require knowledge of, or make assumptions about, the application structure. In our work, we infer the application behavior through system-level observations and do not require any knowledge of the application internals.
### 7 Conclusions and Future Work
In this paper, we proposed an observation-based approach for self-managing web servers that can adapt to changing workloads while maintaining the QoS requirements of different classes. We first illustrated the need to manage different resources for different kinds of workloads. We then described an adaptation framework that monitors the system state continuously and adjusts the various resource parameters to maintain the response time requirements of different classes.
As part of an ongoing effort, we are extending the scope of the adaptation architecture to include other system resources, such as disk arrays and network interfaces. This includes integrating the adaptation system with the admission controller. In the future, we plan to investigate a wider variety of web workloads and server architectures, in particular workloads that involve accessing a back-end server, and multi-tier server architectures that include a web server, an application server, and a back-end database. We would also like to explore using our adaptation technique in other self-managing scenarios, such as large storage systems and database systems.
Overall, we believe that an observation-based approach is a useful technique to adapt to unpredictable loads and other system factors, and our techniques show how this approach can be applied in a web server environment.
Consistent policy enforcement in distributed systems using mobile policies
Susan Chapin *, Don Faatz, Sushil Jajodia, Amgad Fayad
The MITRE Corporation, 1820 Dolley Madison Boulevard, McLean, VA 22102-3481, USA
Received 9 February 2002; received in revised form 9 February 2002; accepted 19 June 2002
Abstract
This paper briefly traces the evolution of information system architectures from mainframe-connected terminals to distributed multi-tier architectures. It presents the challenges facing developers of multi-tier information systems in providing effective, consistent data policy enforcement, such as access control, in these architectures. Finally, it introduces “Mobile Policy” (MoP) as a potential solution and presents a framework for using mobile policy in the business logic tier of multi-tier information systems.
© 2002 Elsevier Science B.V. All rights reserved.
Keywords: Security; Access control; Mobile policy; n-Tier architecture
1. Introduction
Multi-tier architectures separate application functions into three or more tiers, a presentation tier that handles interface with the user, one or more business logic tiers, and a data tier. Typically, the client or presentation tier is reduced to no more than a Web browser and the database management system (DBMS) is returned to its primary function of storing data (see Fig. 1). Business logic is moved from the client and the database to a middle tier, which consists of a Web server and application code [3,4,11,14]. The application code is hosted on a different platform
* This work was funded by the MITRE technology program under project number 51MSR871.
** A preliminary version of this paper has appeared in the Proceedings of the 14th IFIP WG11.3 Working Conference on Database and Application Security, Schoorl, The Netherlands, August, 2000.
*Corresponding author.
E-mail addresses: schapin@mitre.org (S. Chapin), dfaatz@mitre.org (D. Faatz), jajodia@mitre.org (S. Jajodia), afayad@mitre.org (A. Fayad).
from the DBMS. The application code may request data from many DBMSs and may implement functions much more complex than satisfying requests by clients to access data in the DBMS. Support for multiple clients and databases is shown in Fig. 2.
Multi-tier architectures require changes in the way security and other policies are managed. Mechanisms are needed that can achieve consistent policy across elements of a distributed environment and that support flexible policies that address needs other than access control. The need for consistent policy management across distributed components is analogous to the need for consistent transaction management across distributed components. The need for flexible policies arises from the complex functionality of many multi-tier applications. While control over access to data remains a very important policy, support for other types of policies, such as requiring certain files to have a copyright notice or be sanitized in some way before they are returned to certain clients, is also needed.
Specific requirements for making policies consistent across different components include making policies mobile, so that they may travel with the data from one component to another rather than being applied before the data are released from the DBMS, and making security contexts mobile, so that references to users and roles have the same meaning to all components. Making policies flexible requires enabling those who manage data to define the kinds of policies supported, rather than relying on DBMS vendors. Traditional data tier policy management does not support these needs well, for several reasons: policies, defined by DBMS vendors, are limited to access control policies; access control is applied by the DBMS at the time access to the data is requested, requiring that the source of the request be known to the DBMS; and traditional data tier users, roles, and policies are all defined locally within the DBMS based on the local security context.
We propose an application framework that extends the capabilities for policy management in multi-tier applications without interfering with existing DBMS-based access controls. In our framework, policy management is decomposed into three separable functions: defining policies, associating policies with data, and applying policies. Security context management is enhanced by including third-party mechanisms, such as digital certificates provided by a public key infrastructure (PKI) and attribute certificates, that can be referred to by all components. Almost any policy can be supported, limited only by the ability of developers to implement the policy in software. The proposed framework does not interfere with existing access control policy mechanisms. The framework allows policies to be applied in any tier of the application, determined by the application developers in conjunction with the database administrators. As of this writing, we have developed our proposed framework design in sufficient detail to support a proof-of-concept prototype.
The rest of this paper is organized as follows. Section 2 describes the components of multi-tier architectures and the needs for policy management in multi-tier architectures. Section 3 describes the limitations of existing mechanisms for dealing with policy management in multi-tier architectures. Section 4 presents an overview of our proposed framework and the functions of the components, standardized vocabularies and method interfaces, and how the framework separates duties among development subgroups. It also describes how security contexts, as well as policies, can be shared among application components. Section 5 summarizes what we have achieved and what we want to achieve in the future.
2. Multi-tier architectures
Multi-tier architectures are an evolution of approaches to connecting a user to services informed by data. Architectures for such applications must provide three functions: data storage and retrieval, application business logic, and interface with the user.
The first approach, historically speaking, was for clients to connect to servers using dumb terminals. All three functions were provided by the server, usually in a single integrated application. Terminal to server architecture is shown in Fig. 3. Later the availability of intelligent workstations allowed the application to be split between the client and the server. Data storage was provided by the server, client interface was provided by the client component, and application business logic was provided by both components, with functionality divided according to the needs of the application. Client/server architecture is shown in Fig. 4.
More recently, a third approach has become common, largely in response to the development of Web capabilities. Application-specific client components that support business logic, called fat clients or thick clients, are difficult and expensive to deploy and maintain. Administrators would like to replace them with thin clients that can be deployed once and used to support multiple applications. In multi-tier architectures, thin clients are responsible for interfacing with the user, but do not themselves determine what information to display. All business logic is offloaded to server components. Multi-tier architectures are shown in Figs. 1 and 2.
Web browsers are the most typical of these thin clients; they are automatically available on most user workstations and can be used to support many different applications. While it is true that Web browsers also support the automatic download of thick client software, in the form of mobile code such as ActiveX and Java Applets, many enterprises prefer to turn off mobile code functionality in response to concerns about security and network bandwidth.
Thin clients are responsible only for presentation, which consists of displaying information to users and collecting information from users. Web browser thin clients are connected to Web servers, instead of directly to a DBMS server. The Web server is associated with an application server capability, which supports the application business logic that determines what the Web browser should present to and collect from the user. This application server environment is the business logic tier.
At the data store end, the data tier, there are good reasons for moving business logic out of the data storage systems, the DBMSs, into the business logic tier. As long as you have a Web application server, you have a platform that supports business logic. Moving all the business logic to this platform, instead of developing it within the DBMS, has a number of advantages for development support and run-time effectiveness.
Advantages for development support include:
- One application can access multiple databases on multiple DBMSs, without requiring coordination among developers of code based on the multiple DBMS platforms.
- An application can call legacy applications and databases without requiring the legacy systems to be extensively modified.
- One database can be referenced by more than one application.
- The business logic application can be a coordinated, single application written on a single platform.
- DBMSs were developed to store data and provide it on request. They do this extremely well. They are less praiseworthy as application development platforms. Compared to integrated development environments that support C++, Visual Basic, Enterprise Java Beans (EJB), and other component or object-oriented environments, DBMS development environments are limited in developer support and language capabilities.
Advantages for run-time effectiveness include:
- enhanced scalability by supporting many clients with few resources,
- enhanced efficiency through the use of database connection sharing.
The downside of multi-tier application architectures is that they can be quite complex. The presentation tier can consist of any one of a number of browser products on any one of a number of platforms. The data tier can consist of multiple databases in different DBMSs on different platforms. The business logic tier can consist of multiple components on multiple different platforms.
The problem is that all these different components need to be composed together into a single application. Where security is involved, the composition must be seamless and reliable. Composing policies in multi-tier applications can be an issue, because the developers of the different components of the business tier and the different databases have different responsibilities and knowledge about the policies that should apply. The distribution of knowledge and responsibilities is illustrated in Fig. 5. We address policy composition in multi-tier architectures by providing a framework that allows the developers of each component to concentrate on the policy management functions that are properly their responsibilities.
We develop our solution for policy composition in Sections 2.1–2.3. We discuss aligning the authority for policy with the responsibility, managing identity credentials, and making security contexts mobile. We then discuss how with our solution policies can be extended beyond access control.
2.1. Aligning the authority for policy with the responsibility
An architecture that reserves policy application to the DBMS requires extensive communication among various subgroups within the enterprise. Managing a policy has three components, and each component is, ultimately, the responsibility of different enterprise subgroups:
- **Policy specification** is properly the responsibility of enterprise management.
- **Policy association** is the process of associating enterprise policy with data. It is properly the responsibility of the data owners, usually represented by the database administrator (DBA).
- **Policy application** is the process of applying the policy to data at the appropriate time. It is properly the responsibility of the business logic developer or whoever is in charge of the point(s) at which the data are used. Applying the policy at time of use is an ongoing activity; the data may be considered to be “used” when it is accessed within the DBMS, but it is also “used” within the application, whether the application code is located within the DBMS or in a separate middle tier component, and whether the application uses the data immediately or holds on to it for several days before use.
Policy enforcement is only complete when all three elements, definition, association, and application, work harmoniously together. The problem is that building a coordinated response to policies can require close cooperation among those responsible for each component. It is not that any of the policy management problems are inherently impossible to solve. The problem is that they require cooperative design decisions affecting application code in both the middle tier and the DBMS, and these two portions of the application may be developed by different groups of people on different time schedules. The result is a greater risk of miscommunication and decreased assurance that the resulting product will correctly implement the policy.
A mechanism is needed that decouples the development of software that implements the policy from the process of associating the policy with data and the development of software that applies the policy at time of use.
2.2. Managing identity credentials
Management of identity credentials may present design problems because in multi-tier applications the client does not connect directly to the DBMS. Most typically, such application architectures are built from a Web browser on the client, a Web server and one or more applications in the middle tier, and one or more data stores in the data tier. Problems with authentication in the middle tier include communicating user credentials from the client through the middle tier to the DBMS, authenticating all applications in the chain from the client to the DBMS, and supporting high-performance architectures.
Communication of user credentials is a problem because, if authentication in the DBMS is to be based on the identity of the user who originates a request for access, then a mechanism is needed to pass the credentials from the originating client through the business logic tier(s) to the DBMS.
One possible mechanism for communicating credentials is for the middle tier to impersonate the client. With this mechanism, the middle tier opens a connection to the DBMS while impersonating the originating client, and the DBMS uses traditional DBMS authentication and authorization mechanisms. Requirements for this mechanism are (a) the DBMS must trust the middle tier both to authenticate the actual client and to impersonate the actual client and not some other entity, and (b) the client, the middle tier, and the DBMS must share a security context.
Another possible mechanism is for the middle tier to pass the client’s digital certificate to the DBMS. With this mechanism the middle tier and the DBMS both trust the PKI to authenticate the actual client, in effect sharing the same security context, and the DBMS trusts the middle tier to pass it the certificate belonging to the actual client and not some other identity.
---
1 A security context defines the semantic meaning of an identity and its attributes. Without a shared security context, the DBMS might understand the identity “Jenny Blaise” to point to a sales manager in the enterprise, while the middle tier understands the identity “Jenny Blaise” to point to a reporter for a tabloid newspaper.
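As a minimal sketch of the certificate pass-through mechanism, assuming a hypothetical driver hook `dbms_execute` and treating the certificate as opaque bytes (neither is part of any real DBMS API):

```python
from dataclasses import dataclass

@dataclass
class ClientRequest:
    sql: str
    certificate: bytes  # opaque credential; issued and validated by the PKI

def forward_to_dbms(request: ClientRequest, dbms_execute):
    """Pass-through: the middle tier never interprets the certificate,
    it only relays it, so the middle tier and the DBMS in effect share
    the PKI's security context. `dbms_execute` stands in for whatever
    certificate-aware call a real driver would provide."""
    return dbms_execute(request.sql, credential=request.certificate)
```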
With either mechanism, chain of delegation is a problem because, in architectures where the DBMS and the client are not in direct communication, the DBMS has no choice but to trust intermediary applications either to transmit the credentials presented by the actual client or to impersonate the client. For this reason, the DBMS must authenticate all intermediary applications as well as the originating client.
Performance can be a problem because traditional DBMS authentication occurs when a connection is established and the authentication identity persists only as long as the connection is maintained. However, establishing and maintaining a connection to a DBMS is expensive in terms of time and resources. In systems where support for a large number of simultaneous connections is important, such as e-commerce systems, any authentication/authorization mechanism that depends on a dedicated connection to the DBMS for each originating user is an unacceptable solution. High-performance multi-tier architectures typically rely on opening a dedicated connection between the middle tier and the DBMS, and using that connection to access the DBMS as required to satisfy requests from multiple clients.
For these reasons, many currently fielded multi-tier systems do not attempt to authenticate the originating client in the DBMS, or even to inform the DBMS of the identity of the originating client. Instead, the middle tier connects to the DBMS under its own identity, and the DBMS trusts it with a set of authorizations associated with the middle tier application rather than the originating client.
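A minimal sketch of this connection-sharing pattern, using Python's sqlite3 module purely as a stand-in for a real DBMS driver; the pool class and its behavior are illustrative assumptions, not part of any product:

```python
import queue
import sqlite3  # stands in for any DBMS client library

class SharedConnectionPool:
    """Middle-tier pool: a few connections opened once under the
    application's own identity, then multiplexed across many clients."""

    def __init__(self, db_path: str, size: int = 2):
        self._pool: "queue.Queue[sqlite3.Connection]" = queue.Queue()
        for _ in range(size):
            # In a real deployment this call would authenticate the
            # middle tier to the DBMS, not the originating user.
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def run(self, originating_user: str, sql: str, params=()):
        conn = self._pool.get()            # borrow a pooled connection
        try:
            # The DBMS sees only the application identity; the originating
            # user is known to the middle tier alone, so any per-user
            # authorization must happen here in the business logic tier.
            return conn.execute(sql, params).fetchall()
        finally:
            self._pool.put(conn)           # return it for the next client
```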
To deal with these two problems, a mechanism is needed that allows a more granular level of control in multi-tier architectures. Even though that control may not be as strong as traditional access control applied to client/server architectures, it will be stronger than the practical alternatives for multi-tier architectures.
2.3. Making security contexts mobile
Traditional DBMS policies depend on the DBMS identifying and proving the unique identity of the client who is requesting access to the data. The client usually represents a user, though it may be an application running under its own identity. In either case, the authenticating system has prior knowledge of all potential clients. In our area of discussion, DBMS authentication, this knowledge is traditionally stored directly in the DBMS. Other possible implementations include storing the identities and authorizations of potential clients in an external trusted directory and developing code in the DBMS to use this external information for authentication.
There are two problems with this approach:
- The pool of potential clients may include clients whose identity is not known to the system managers before the clients attempt to connect.
- If a chain of impersonation is involved, then the DBMS concept of user identity must be synchronized with the middle tier’s concept of user identity.
---
2 To be more accurate, we should discuss connections from individual components of the middle tier, rather than the middle tier as a whole. This distinction is not relevant to the argument being presented here, however significant it may be to middle tier application developers.
2.3.1. Pool of potential clients
For some applications, the pool of potential clients may be unknowable because expansion of the Internet has led to expansion of application client bases. Many e-commerce applications more-or-less consider everyone in the world to be a potential client; this set of potential clients is simply too large to predefine each potential identity. Extranet applications include employees of other enterprises in the potential client base; employees of other enterprises are not readily known to database administrators. Even within an enterprise, the pool of predefined clients needs to be synchronized with the enterprise’s master file; solutions exist for this but they may be restrictive.
Because for many applications the number of potential clients can be too large to predefine, a mechanism is needed that associates the client with some characteristic that can be known before the client makes a request that requires access to data. This characteristic might be identity if the client is a previous customer, or it might be some other attribute that places the customer in a predefined group or role.
2.3.2. Synchronizing concepts of user identity
To deal with these problems, a mechanism is needed that identifies users consistently in all tiers. Today the most promising such mechanism is use of identity and attribute digital certificates validated by a trusted PKI. Attribute certificates facilitate role-based access control (RBAC) by providing a digitally signed list of user attributes [5,6]. Other mechanisms may be developed in the future. Whatever mechanisms are available, system architectures need the flexibility to make use of them.
One way to maintain security context support is to have a mapping between the identity and attributes that the DBMS requires for access to a particular piece of data and the identity and attributes that the application logic understands. For example, if the DBMS requires the attribute “manager” for data access and the application logic’s equivalent term for “manager” is “supervisor”, then a mapping between “manager” and “supervisor” can ensure that context is maintained between the DBMS tier and the application server tier (see [5,6] for additional details).
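A minimal sketch of such a mapping; the attribute pairs are invented for illustration:

```python
# Hypothetical mapping between the attribute vocabulary the DBMS uses
# and the vocabulary the application logic uses.
DBMS_TO_APP_ATTRIBUTE = {
    "manager": "supervisor",
    "clerk": "agent",
}

def to_app_attribute(dbms_attribute: str) -> str:
    """Translate a DBMS attribute into the application's term, falling
    back to the original term when no mapping exists."""
    return DBMS_TO_APP_ATTRIBUTE.get(dbms_attribute, dbms_attribute)

assert to_app_attribute("manager") == "supervisor"
```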
2.4. Making policies general
“Policy” is often taken to mean “security policy”, and “security” is often taken to mean “access control”, and all access control is often expected to be handled by the same mechanisms. Although security policies are important policies, and access control is important to an enterprise’s overall security, these are not the only policies that an enterprise may want to enforce, and not all policies, access control or other types, are equally important.
2.4.1. Different types of policies
A substantial part of most middle-tier application development involves implementing various kinds of policies. Some policies are enterprise policies that are specified by enterprise management as rules that must be followed. Examples of enterprise policies include requirements for ensuring that files have the proper copyright notice before they are released outside the enterprise, degrading the resolution of certain images before they are released to specified classes of clients, and scanning files for viruses before they are used.
Other rules are local to the application but span both the business logic tier and the data tier. It is a bit of a stretch to call these application rules “policies”, but it is convenient for our discussion because they share many of the characteristics of enterprise policies. In particular, they may be as critically important and as much in need of assurance that they are working correctly as enterprise policies, and can equally well be handled by our proposed framework.
An example of one of these other “policies” is a rule that defines the confidence that the middle tier application can have in the accuracy of a data item retrieved from a DBMS. Imagine an application that controls airplane takeoffs. One of the data items it needs is the amount of fuel in the plane’s tank. The rule might be that the confidence level of this type of data is a function of metadata, such as the time since the data elements were last updated, rather than something that can be derived from the data value itself. The application as a whole, including both the middle tier and the DBMS, needs a mechanism to calculate the confidence factor and get that information to the middle tier before the middle tier releases the plane for takeoff, or some considerable unpleasantness might ensue.
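To make the fuel example concrete, here is a toy confidence rule in which confidence decays with the age of the reading; the exponential form and the 30-minute half-life are our own illustrative assumptions, not anything specified by the policy itself:

```python
from datetime import datetime, timedelta

def fuel_reading_confidence(last_updated: datetime,
                            now: datetime,
                            half_life: timedelta = timedelta(minutes=30)) -> float:
    """Toy confidence rule: confidence is a function of metadata (the
    age of the reading), not of the data value itself."""
    age = (now - last_updated).total_seconds()
    return 0.5 ** (age / half_life.total_seconds())

# A reading updated an hour ago is trusted far less than a fresh one.
now = datetime(2001, 1, 1, 12, 0)
print(fuel_reading_confidence(now - timedelta(hours=1), now))  # 0.25
print(fuel_reading_confidence(now, now))                       # 1.0
```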
A mechanism is needed that supports any policies that may be applicable to data, using the same techniques for any policy, without requiring that the set of policy types be predefined by DBMS vendors.
2.4.2. Different levels of criticality
A characteristic of this expanded definition of policies is that not all policies are equally critical. Some types of policies may be less critical than others in an enterprise; for example, the need to check files for copyright notice may be less critical than protecting write access to the salary file. Even within access control, some data may need to be protected more carefully than others. For example, the author of a document may wish it to be restricted to only a small group of people while it is under development, but the accidental release of the partially written document to other employees would not have as severe consequences as the accidental release of the company’s product source code to the general public.
Therefore, a mechanism that is not deemed sufficiently secure for one policy may still be acceptable, and very valuable, for other policies. The requirement is that the mechanisms must not interfere with each other.
3. Related work
As information system architectures have moved from client-server to multi-tier, administration of security policy has become difficult. Generally, each component in a multi-tier architecture that has policy enforcement responsibilities maintains its own policy information that must be manually configured. Hence, it becomes the responsibility of system administrators to assure that all policies are consistent.
Several research efforts are currently under way to centralize the administration of policy. The Open Group’s Adage project [12,17] is a typical example of this research. A central policy definition and storage capability is used by administrators to define and store all the policies needed throughout the distributed system. These policies are then translated into policy information suitable for the various enforcement mechanisms used throughout the system. Systems like Adage
assume a single central authority that defines all policy. Further, they assume that this single authority has administrative control of all elements of the multi-tier distributed system.
Mobile policy takes a different view, assuming a much more distributed definition of policy and administrative control of the system. Mobile policy allows policy to be defined by an authority close to the system element that is responsible for the information being controlled. Then, that system element shares the policy with other system elements that use its data.
The notion of mobile policy is not particularly new. Several approaches to sharing policy information have been developed. However, none is as general as the approach proposed here.
One problem common to all attempts to centralize policy definition and storage is the need for a semantically rich policy specification language capable of representing all policies that may apply within the multi-tier system. Such a language is very difficult to define and has so far eluded researchers. Mobile policy tries to avoid this problem by encapsulating policy in an executable module. These modules can be coded using any programming or policy definition language that the policy administrator chooses. Instead of defining an all-powerful magic policy language, the problem is transformed into defining a shared vocabulary of inputs to and outputs from policy modules. These vocabularies should be more tractable than a general-purpose policy language.
Information labeling, as typified by NIST FIPS PUB 188 [13], is a mechanism for sharing with data consumers the policy that should be applied to data. However, when using labels, it is assumed that all system elements that exchange data already share a common definition of policies that might apply to data. The label is a pointer to the particular policy to be applied to a piece of data. For example, a label of classified does not define policy for a piece of data but instead tells the recipient to enforce his policy for classified data when using this piece of information.
The policy server in Secure Computing Corporation’s (SCC’s) Distributed Trusted Operating System (DTOS) [1,7,8] is an example of mobile policy similar to the approach proposed here, even though it requires a central policy specification authority. DTOS is a high-assurance version of the Mach microkernel operating system. In microkernel operating systems, a very small kernel is built (the microkernel) and many of the services associated with traditional operating systems are added as servers on top of the microkernel. SCC wanted to implement discretionary access control as a server outside the kernel. The kernel would enforce policy, but the policy being enforced would be defined outside the kernel by a policy server and could be easily modified. Each time a server running on the microkernel attempted to access a microkernel-controlled resource, the microkernel would consult the policy server to determine if the requested access was allowed.
As might be expected, the need for the microkernel to check with the policy server on every resource access request introduced significant overhead and severely degraded performance of the operating system. To deal with this, SCC added an “access vector cache” to the microkernel. When a process first requests access to a microkernel-controlled resource, the microkernel contacts the policy server and gets an access vector for that process from the policy server. The access vector defines all of the resources that the subject process is allowed to use. On subsequent requests from that process for access to microkernel-controlled resources, the microkernel consults the cached access vector instead of contacting the policy server. This significantly improves performance.
Access vectors are a form of mobile policy and were the first approach considered in this work. However, access vectors are well suited for use in an operating system but not as a policy mechanism for data in a multi-tier information system. Access vectors in DTOS are a fixed-length
bit field. This is possible because the set of resources being controlled by the microkernel is static and completely defined before the system begins operation. Unfortunately, in a distributed system neither the set of data available for use nor the set of potential users is static or predefined before the system begins operation.
SCC did provide a capability not addressed in the mobile policy framework presented here that may be useful and should be considered for future work. Access vectors are considered cached copies of actual policy information. The policy information in the policy server is the definitive definition of the policy to be enforced. As such, the policy in the server could change during system operation, invalidating one or more cached access vectors. SCC provided a mechanism for the policy server to notify the microkernel that policy changes have occurred. When this happens, the microkernel invalidates the entries in the access vector cache and contacts the policy server on the next request for access to a resource to get a new access vector consistent with the new policy. The framework for mobile policy presented here does not yet address policy changes.
While the mobile policy framework presented here was being developed, the Object Management Group’s (OMG’s) Common Object Request Broker Architecture (CORBA) medical systems (CORBAmed) domain task force (DTF) was developing a framework for access control decision making within business objects called Resource Access Decision (RAD) [9,10]. CORBA business objects are essentially business logic tier components. RAD deals only with access control decisions and does not define either where access control policy comes from or how it is administered. However, the RAD framework does include several of the elements found in the mobile policy framework presented here.
The next section describes the framework for use of mobile policy in multi-tier information systems.
4. The proposed framework
We call our proposed framework MoP, for mobile policies. MoP allows the separation of policy definition, policy association, and policy application into separate operations that can be performed by different people without requiring them to share the details of the policy with each other. Policies, once defined, are associated with data in the database. When data move from one component or tier to another, any associated policies travel along with the data until the policies are applied. Mobile policies are shown in Fig. 6.
4.1. System overview
The primary goal of the framework is to support separation of duty among application component developers by moving policy from the database to the application along with the data while minimizing the knowledge that must be shared between data tier developers and business logic tier developers. Secondary goals are to minimize effort on the part of application developers, support assurance that the system works as intended, work harmoniously alongside existing policy mechanisms, support multiple application and DBMS platforms, and minimize the impact on performance.
The framework consists of five code component types and a set of standards for how they are used and developed. The component types are:
- Policy module. Policy modules implement policy rules.
- Policy composition module. Policy composition modules deal with issues such as the order in which policies are to be applied. Bonatti et al. [2] propose an algebra for security policies with a translation mechanism to logic programs in order to facilitate policy composition even in the case where the policies are expressed in different formats.
- Conflict resolution module. Conflict resolution modules resolve conflicts among policy modules.
- MoP stored procedures. The MoP stored procedures implement MoP framework component logic that resides within each DBMS.
- MoP framework component. The MoP application component implements the mechanisms for using the framework within an application.
The component types are shown in Fig. 7.
Standards for the use and development of the components are the glue that makes the system work. MoP specifies two kinds of standards, interface standards that specify how one component may call another, and vocabulary standards that represent the minimal knowledge that must be shared among the developers of systems that use MoP.
4.2. Function of components
This section describes the MoP component functions.
4.2.1. Policy module
A policy module is an executable code module written for the platform of choice of the application. Each policy module implements one specific policy rule. For example, a policy module may determine whether a requested access is granted based on user identity, or whether access is granted based on the type of connection between the client and the application, or it may add the correct copyright notice to a file. Thus, each policy module is a self-contained package with a limited, specific function, which has the nice benefit that it simplifies validation of correct behavior.
Policy modules are classified into types by the end function they perform, not by the rule that governs how they perform it. The three examples above include only two policy types: determine whether access is granted and add a copyright notice. The two access granting rules, one of which looks at user identity and the other of which looks at the client connection, would be implemented in two separate policy modules, both of which are of type “access grant.”
All policy modules of the same function type return the same output parameters with the same syntax and semantics. An application programmer needs to know what the module does in order to determine whether the module is applicable to the planned use of the data, and what output parameters the module returns and what they mean in order to code an appropriate response, but the application programmer does not need to know the policy rule the module implements.
In contrast, not all policy modules of the same type require the same input parameters. All policy modules implement a method that returns a list of input parameters. The application must be able to accept a list of parameters selected from a predefined set and return a value for each.
The scope of a policy module may vary. It may be developed and used within a single application, department, or enterprise, or eventually there may be well-known policy module components that are widely available.
We envisage that policy modules will come from three sources. At first, policy modules will be custom-built by enterprise developers to implement enterprise policies. Later, if MoP becomes widely used, policy modules that implement common policies may become commodity items. Finally, we plan to investigate automatically extracting existing DBMS access control specifications from the DBMS and creating MoP policy modules dynamically when access to data is requested.
To summarize, a policy module is an executable code module that implements a single rule, has a well-known type and set of output parameters, and produces a list of required input parameters.
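To ground this summary, the sketch below invents a small Python interface with two modules of the same “access grant” type. MoP itself was prototyped in Visual Basic and COM, so this is an illustration of the contract only, not MoP’s actual API:

```python
from abc import ABC, abstractmethod

class PolicyModule(ABC):
    """One executable module = one policy rule. A module advertises its
    function type, lists the inputs it needs, and returns outputs whose
    syntax and semantics are fixed per function type."""

    #: term from the policy-module-type vocabulary, e.g. "access_grant"
    module_type: str

    @abstractmethod
    def required_inputs(self) -> list[str]:
        """Terms from the input-parameter vocabulary."""

    @abstractmethod
    def evaluate(self, inputs: dict) -> dict:
        """Returns terms from the output-parameter vocabulary."""

class IdentityAccessGrant(PolicyModule):
    module_type = "access_grant"

    def required_inputs(self):
        return ["user_identity"]

    def evaluate(self, inputs):
        # The rule itself stays hidden from the application programmer;
        # only the output term "granted" is shared vocabulary.
        return {"granted": inputs["user_identity"] in {"alice", "bob"}}

class ConnectionAccessGrant(PolicyModule):
    module_type = "access_grant"

    def required_inputs(self):
        return ["connection_type"]

    def evaluate(self, inputs):
        return {"granted": inputs["connection_type"] == "tls"}
```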
4.2.2. Conflict resolution module
Multiple policy modules may be associated with the same data set. If it should happen that more than one policy module of the same type is associated with the same dataset, then any conflicts must be resolved before the correct single output parameter set is defined. This conflict resolution is performed by a conflict resolution module.
Conflict resolution modules can be simple or complex, depending on the policy module type (see [2]). For example, a conflict resolution module for access grant policy modules might be very simple, just the logical AND of Boolean values meaning OK or Not OK, while the conflict resolution module for copyright notices might be very complex, requiring choosing among several sets of alternative copyright statements based on some external information.
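At the simple end of that spectrum, a conflict resolution module for “access grant” outputs can be little more than a logical AND; this sketch assumes the hypothetical output term `granted` from the previous sketch:

```python
def resolve_access_grant(outputs: list[dict]) -> dict:
    """Simple conflict resolution for "access_grant" modules: access is
    granted only if every associated module grants it (the logical AND
    described above)."""
    return {"granted": all(o["granted"] for o in outputs)}

# Two modules disagree; the conservative AND denies access.
print(resolve_access_grant([{"granted": True}, {"granted": False}]))
```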
We assume that different policy module types are independent of each other. Any interactions between, say, a copyright notice rule and an access grant rule we consider to be idiosyncratic, complex, and outside the scope of the MoP framework. MoP, of course, does not prevent the application developer from writing code to resolve any such conflicts.
Conflict resolution module development is closely linked to policy module development. Conflict resolution modules implement the resolution of conflicts among policy rules, and therefore conflict resolution rules are in effect policy rules.
The scope of conflict resolution modules can vary as widely as the scope of policy modules, from application-specific to well-known published conflict resolution module components.
4.2.3. DBMS stored procedures
When data are accessed, the MoP application component needs to retrieve the policy modules associated with the data. Two DBMS stored procedures provide this capability. One receives a SQL request and returns identifiers associated with relevant policies, the other receives a policy module identifier and returns the specified policy module.
MoP therefore requires three or more database queries instead of one for each data request: access the data, request relevant policy module identifiers, and request each needed policy module.
The MoP application component makes all these requests within the same transaction, thereby eliminating potential synchronization difficulties.
The separation of function not only supports flexibility but also decreases performance overhead by allowing the application to make only those requests it actually needs and to make them in any order. For example, for READ requests the policy may be run before the data are retrieved, because the result of the policy may make retrieving the data unnecessary, or the application may first retrieve data and review it to determine which of the associated policies are relevant to its intended use of the data before requesting the policy modules.
Separating the request for policy module identifiers from the request for specific policy modules allows the application to cache policy modules and to request only those policy modules it actually needs, a potentially significant performance enhancement.
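A sketch of the middle-tier side of these calls, with a per-module cache; the stored procedure names (`mop_policy_ids`, `mop_policy_module`) and the `call_proc` hook are invented for illustration:

```python
class MoPClient:
    """Sketch of the middle-tier side of the two stored-procedure calls.
    `call_proc` stands in for whatever stored-procedure invocation the
    DBMS driver provides."""

    def __init__(self, call_proc):
        self._call_proc = call_proc
        self._module_cache: dict[str, bytes] = {}

    def policies_for(self, sql_request: str) -> list[bytes]:
        # First stored procedure: SQL request -> policy module identifiers.
        ids = self._call_proc("mop_policy_ids", sql_request)
        modules = []
        for module_id in ids:
            if module_id not in self._module_cache:
                # Second stored procedure: identifier -> policy module.
                # Only cache misses cost a round trip to the DBMS.
                self._module_cache[module_id] = self._call_proc(
                    "mop_policy_module", module_id)
            modules.append(self._module_cache[module_id])
        return modules

# Toy stand-in for the DBMS side, just to exercise the cache.
def fake_proc(name, arg):
    return ["p1", "p2"] if name == "mop_policy_ids" else f"<module {arg}>".encode()

client = MoPClient(fake_proc)
print(client.policies_for("SELECT salary FROM staff"))
```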
4.2.4. Application framework component (MoP)
The MoP application framework component encapsulates the MoP implementation details that are not application dependent. The MoP component exposes methods that support accessing data, identifying relevant policy modules, retrieving relevant policy modules, and running selected policy types.
As of this writing, the application is responsible for setting up pointers to permanent objects (in the current version, the permanent objects are caches and connections to databases), providing an object that supplies actual values for parameters, and calling the MoP retrieve-time and apply-time methods.
4.3. MoP shared vocabularies
MoP shared vocabularies are the heart of our solution for sharing policy among developers responsible for different application tiers while minimizing the knowledge they must share with
each other. Encapsulating policy rules into components allows us to reduce the semantics that must be shared from understanding policy logic, which requires a “magic language” and is very difficult, to understanding a small set of shared vocabulary items, which is a relatively easy and familiar technology. Each vocabulary item has an identifier, a syntax, and a semantic meaning.
MoP uses three shared vocabularies: policy module types, output parameters, and input parameters.
In the policy module types vocabulary, each term specifies what the policy module does, such as add a copyright notice or determine whether access is granted. The policy module type vocabulary is the most pervasive of the MoP vocabularies. Module type terms must be understood by all participants in policy management except DBMS developers: policy makers and policy module developers, conflict resolution module developers, data store administrators, and application developers. The policy module type is important to DBAs determining the applicability of policy modules to data and to application programmers determining the applicability of policy modules to their intended use for the data.
In the output parameters vocabulary, each term specifies both syntax and meaning of a parameter returned from a policy module. Output parameters are the same for all policy modules of the same type. The application uses the output parameters to apply the policy. Output parameter terms must be understood by policy makers and policy module developers, conflict resolution module developers, and application developers. The output parameter vocabulary is important because for many policy types, such as access control, the application must be prepared to take action based on returned output parameters.
In the input parameters vocabulary, each term specifies an input parameter needed by the policy module. The application provides a method that accepts a list of input parameters and returns a list of matching values. Input parameter terms must be understood by developers of policy modules and by application developers. The input parameter vocabulary is important because two modules with the same function may have different input parameters.
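The three vocabularies can be reduced to shared data rather than shared code. The sketch below registers two hypothetical module types with their output-parameter schemas and validates a module’s outputs against them; the entries are illustrative, not vocabulary defined by MoP:

```python
# Each policy module type names the output parameters (with their types)
# that every module of that type must return.
POLICY_TYPE_VOCABULARY = {
    "access_grant":     {"outputs": {"granted": bool}},
    "copyright_notice": {"outputs": {"notice_text": str}},
}

def validate_outputs(module_type: str, outputs: dict) -> None:
    """Check that a module's outputs match the shared vocabulary."""
    expected = POLICY_TYPE_VOCABULARY[module_type]["outputs"]
    for name, value_type in expected.items():
        if not isinstance(outputs.get(name), value_type):
            raise ValueError(
                f"{module_type} must return {name}: {value_type.__name__}")

validate_outputs("access_grant", {"granted": True})  # passes silently
```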
The scope of the shared vocabularies can vary in the same way the scope of a policy module can vary. Vocabulary terms can be shared among developers of a single application, or among developers within a department or an organization. If MoP becomes widely available, some vocabulary term definitions may be standardized across the industry.
4.4. Allocation of responsibilities
Supporting separation of duty by allocating specific policy management responsibilities to different development groups is MoP’s prime benefit. With MoP, each group of developers needs to understand only the subset of policy management that falls properly within the group’s purview.
MoP allocates responsibilities to policy makers, database administrators, DBMS developers, and application developers. MoP does not impose any requirements on the client tier.
Policy makers specify the policy rules that are implemented by MoP policy modules. They also have the ultimate responsibility for locating or creating policy modules that implement the rules
and conflict resolution modules that implement the resolution of conflicts among policy rules. Policy makers must understand:
- policy module function type semantics,
- policy module input and output parameter semantics and syntax,
- policy and conflict resolution rules,
- policy module and conflict resolution module interfaces.
*Database administrators* create and install the MoP stored procedures into the DBMS, insert policy modules identified by the policy makers, and associate policy modules with data. Mechanisms for these functions will vary from DBMS to DBMS; this process is out of the scope of MoP. Database administrators must understand:
- policy module function type semantics,
- which policy modules are to be associated with which data.
*DBMS developers* create stored procedures that implement the two DBMS functions required by MoP, returning identifiers for the policy modules associated with a data access request and returning a policy module on request. DBMS developers must understand:
- DBMS development environment,
- stored procedure interfaces.
*Application developers* call the MoP application component, pass it required parameters, and use policy module outputs to apply policy. Application developers must understand:
- policy module function type semantics,
- policy module input and output parameter semantics and syntax,
- MoP application component interfaces.
4.5. The implementation
We are using a prototype implementation of the MoP components to validate our framework design as we develop it. We do not consider any portion of our design complete until it has been included in the prototype.
The current prototype uses Microsoft Visual Basic Enterprise 6.0, the Component Object Model (COM), and Microsoft Access 8.0. Early work has focused on building the MoP application component, using stubs for database support and policy modules, and a demonstration application that exercises each feature of the MoP application component.
Our target databases are Oracle and SQL Server. Access does not provide stored procedures or sophisticated policy management mechanisms, but its functionality is adequate to support work on the MoP application component.
The current demonstration application, showing interior functions of the MoP application component, is shown in Fig. 8.
5. Conclusions and future work
This paper proposes the use of mobile policy in multi-tier information systems. Specifically, it separates policy administration from policy enforcement. Policy is specified and administered at the element of a distributed system where the data being controlled by policy is defined. That policy is then shared with consumers of the data so that they can enforce the appropriate policy when using the data.
In this paper, mobile policy is proposed as a means for making DBMSs more composable. This is analogous to the capability provided by the X/Open Distributed Transaction Processing (DTP) model [15,16]. X/Open DTP allows DBMSs to participate in transactions managed by an external transaction monitor. It essentially opens up the DBMSs transaction processing protocol to allow two-phase commit across the DBMS and other software components. Mobile policy attempts to
provide the same capability to the access control mechanisms of the DBMS while simultaneously extending it to handle a broader collection of data handling policies beyond access control.
Although we have not completed work on the basic MoP framework, we have identified a number of enhancements that we would like to add once the basic framework is complete: dynamically generated policy modules, dynamic determination of conflict resolution metadata, a policy composition module that manages relationships among different policy modules, and support for associating policy modules with subsets of retrieved data.
Dynamically generated policy modules are interesting because they would eliminate parallel implementations of the same policy. DBMSs already have a mechanism that associates access control policies with data. We would like to develop a mechanism that extracts the access control information relevant to an SQL query and packages it as a MoP policy module. In addition to its convenience value, automatic generation of MoP policy modules could potentially enhance assurance because the information would not have to be associated with the data twice, once as a policy module and once as DBMS access control lists.
Dynamic determination of conflict resolution metadata is interesting because it would simplify the task of policy module developers. As it stands today, MoP requires linked code development in policy modules of one type and their associated conflict resolution modules. We think it would be desirable to provide a cleaner interface so that policy module and conflict resolution module development can be more independent.
Support for associating policy modules with subsets of retrieved data is interesting because it would support applications, such as data warehouses, where a large block of data is retrieved all at once and stored internally in database table format. Later, when the data are to be used, the application extracts subsets of the data for each specific use. MoP as currently designed does not support this kind of application.
Before our framework can be shown to be useful in production environments, a number of issues need to be addressed: performance, multi-platform application support, and assurance.
Performance is an issue, because a prime reason for using multi-tier architectures is to gain enhanced scalability and efficiency. If making policies mobile slows processing down any appreciable amount, any benefits will not be worth the cost.
Multi-platform support is an issue because another prime reason for using multi-tier architectures is to gain application development flexibility. If the MoP application component can be called only by COM applications, and not by EJB or CORBA applications, its usefulness will be limited.
Assurance is an issue because many MoP policies are security policies. A mechanism for implementing security policies that cannot itself be shown to meet enterprise requirements for security will not be very useful.
References
Susan Chapin is a Lead INFOSEC Engineer at The MITRE Corporation, where she has worked for the Center for Integrated Intelligence Systems since 1992. She has degrees from Harvard University and San Diego State University, and has worked as a software developer and information systems security engineer since 1976. Her e-mail address is schapin@mitre.org.
Don Faatz is a Principal Information System Security Engineer with The MITRE Corporation in McLean, Virginia. His work is focused on architectures for secure distributed information systems. He has a BS and ME in Computer Systems Engineering from Rensselaer Polytechnic Institute and is pursuing a Ph.D. in Information Technology at George Mason University.
**Sushil Jajodia** is Principal Scientist at The MITRE Corporation in McLean, Virginia. He is also the BDM Professor and Chairman of the Department of Information and Software Engineering and Director of the Center for Secure Information Systems at George Mason University, Fairfax, Virginia. He received his Ph.D. from the University of Oregon, Eugene. His research interests include information security, temporal databases, and replicated databases. He has authored four books, edited seventeen books, and published more than 250 technical papers in refereed journals and conference proceedings. He received the 1996 Kristian Beckman award from IFIP TC 11 for his contributions to the discipline of Information Security, and the 2000 Outstanding Research Faculty Award from GMU's School of Information Technology and Engineering. Dr. Jajodia has served in different capacities for various journals and conferences. He is the founding editor-in-chief of the Journal of Computer Security, and serves on the editorial boards of ACM Transactions on Information and Systems Security and International Journal of Cooperative Information Systems. He is the consulting editor of the Kluwer International Series on Advances in Information Security. The URL for his web page is http://isse.gmu.edu/~csis/faculty/jajodia.html.
**Amgad Fayad** leads the advanced security research section at the MITRE Corporation. His interests include security service application programming interfaces (API), suspicious user confinement, access and release control, and penetration testing methodologies. He recently taught courses at George Mason University on C++ programming and discrete mathematics. He holds an M.S. degree in computer sciences from Purdue University, West Lafayette, Indiana.
Towards a Performance Estimate in Semi-Structured Processes
Andreas Wombacher
University of Twente, Enschede, The Netherlands
Email: a.wombacher@utwente.nl
Maria Iacob
University of Twente, Enschede, The Netherlands
Email: m.e.iacob@utwente.nl
Martin Haitsma
University of Twente, Enschede, The Netherlands
Email: martin.haitsma@gmail.com
Abstract—Semi-structured processes are business workflows, where the execution of the workflow is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples are workflows where actors potentially have interaction with customers, reporting the result of the interaction in a process aware information system. Building a performance model for resource management in these processes is difficult since the information required for a performance model is only partially recorded. In this paper we propose a systematic approach for the creation of an event log that is suitable for available process mining tools. This event log is created by incrementally cleansing the data. The proposed approach is evaluated in a case study where the quality of the derived event log is assessed by domain experts.
I. INTRODUCTION
Semi-structured processes are business workflows, where the execution of the workflow is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples can be found in scenarios where several people potentially from different organizations cooperate e.g. in creating a yearly progress report or writing a scientific paper. Other examples are workflows where people interact with clients and/or paper documents which are used to insert, approve, or validate information in a potentially Web based information system. These Web based information systems can be an application server or orchestrated services e.g., using BPEL.
Nevertheless, in these scenarios it is important for the management to better understand the process, the characteristics of activities, and the performance of individual employees. Lacking such knowledge makes it hard to predict the load of resources and to make a balanced resource planning. For example, it is difficult to predict the ability of the business to handle higher workload due, for example, to a promotion activity or to vacations.
Independent of the workflow’s implementation, the underlying information system may keep track of the completion time of an activity but cannot record the start time of an activity. Such an information system cannot detect for instance when a conversation with a client starts or when an employee starts to read a paper request form of a client. Thus, it is not possible to build a classical performance model and use existing process analysis techniques like those described in [1] before enriching the data with the activities’ start times.
Therefore, in this paper we aim to use the available log information to perform data analysis and data cleansing in order to get an estimate of the starting time, from which the underlying performance model can be further inferred. Thus, we propose a structured approach to investigate and cleanse the observed event data. The result is an estimated starting time for each event. In case the estimated starting time is not trustworthy we report it as ‘unknown’.
II. USE CASE
The proposed approach has been motivated and evaluated on a real-life use case. Due to a non-disclosure agreement the labels of activities have been made more generic and no absolute performance data is provided. The use case concerns the semi-structured processes in the front-office of a service provider for a financial company. The service provider uses a web service-based application to quickly set up semi-structured financial processes without developing the same components repetitively. A typical front office employee handles applications of clients for, e.g., a loan, insurance or savings account, at the office counter, but also Internet and telephone applications. Typical activities in the front office are talking to the client, collecting and verifying client documents, doing some automatic checks (e.g., a credit check), handling the contracting, and sending the application to the back office for further handling.
The framework provides a proprietary process modeling language which is based on states, and on manual and automatic state changes, performed respectively by an employee or by the software. The expressiveness of the modeling language is comparable to that of Finite State Automata; thus it supports loops but no parallelism. Due to the processes at hand, the system only documents the completion of a state change (activity), and not the start of an activity.
The data used in the use case have been collected from the end of September 2010 until mid February 2011. It should be noted that users spend only part of their time working in this system. However, we can state that the average number of hours per user spent working in the framework system stays approximately the same over the investigated period of time.
III. PROBLEM DESCRIPTION
The challenge posed by semi-structured processes is that start times of activities cannot be automatically recorded by the underlying system. Another challenge is that users often work on more than one process, and therefore the percentage of time a user is working on the process under investigation is unknown. Further, 'internal' activities, such as meetings, coffee breaks, or the early departure of an employee, are not documented and therefore are not available for the start time estimation.
After estimating a start time, the derived performance model has to be applied carefully. Since employees work on more than one process of which no performance model is available, it is impossible to make statements about how fast the incoming requests can be processed. However, an estimate of how many hours the employees have to spend on the process to handle these requests can be determined. This is valuable information for the management, which should have an overview of the workload caused by other processes.
In this paper we assume the existence of a process execution log file, which contains information about the case ID, the State Change ID, the Completion Time, the ID of the user performing the state change, the source and the target state. The State Change ID provides a complete order on all state changes. The Completion Time provides a partial order of state changes. An example of a log file is depicted in Table I, which will be used as an example later in the paper. The table is partly visualized in Fig 1.
| Case ID | State Change ID | Completion Time | User ID | Source State | Target State |
|---------|-----------------|-----------------|---------|--------------|--------------|
| 2 | 5 | 9:44:14 | Andy | Initial State | Process Start |
| 1 | 4 | 9:49:14 | Andy | New Request | Send Request |
| 1 | 5 | 10:15:00 | Peter | Send Request | Control Opening |
| 1 | 6 | 09:05:00 | Andy | Control Opening | Credibility Check |

Table I. Example state transition log
In the following we assume that the process potentially involves multiple systems, each providing part of the log information. However, we address neither data integration problems, such as entity resolution of event log information, nor syntactic or semantic data integration problems.
IV. APPROACH
The approach presented here is based on the steps depicted in Fig 2. A first cleansing step is performed on the raw event data. Next the cleansed data is used to infer an initial estimate of the start time for each activity. The initial start time estimates may be overwritten in later cleansing steps. The following cleansing step investigates special situations per process instance (also called case). The last cleansing step is histogram-based cleansing, which removes outliers, i.e., exceptionally high durations of activities. The final step investigates dependencies of activity durations across process instances and on categorical data like, e.g., the weekday or the experience of a user. Thus, the final step tries to verify whether the independence assumption used in a performance model is actually supported by the available data. The final result is a cleansed event log, which can be used for the mining of a control flow and for performance analysis using existing tools.
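As a structural sketch only, the pipeline of Fig 2 can be written as a chain of pluggable steps; the function names paraphrase the figure and the bodies are deliberately left as placeholders:

```python
from typing import Callable

Event = dict  # one row of the state transition log (Table I)

def cleanse_raw(events: list[Event]) -> list[Event]:
    """Sect IV-A: check order consistency, drop exceptional operation."""
    return events  # placeholder

def estimate_start_times(events: list[Event]) -> list[Event]:
    """Attach an initial start time estimate to each event."""
    return events  # placeholder

def cleanse_per_case(events: list[Event]) -> list[Event]:
    """Handle special situations within a single process instance."""
    return events  # placeholder

def cleanse_histogram(events: list[Event]) -> list[Event]:
    """Drop outliers, i.e., exceptionally high activity durations."""
    return events  # placeholder

def build_event_log(raw_events: list[Event]) -> list[Event]:
    """Apply the cleansing steps in the order given in Fig 2."""
    steps: list[Callable[[list[Event]], list[Event]]] = [
        cleanse_raw, estimate_start_times,
        cleanse_per_case, cleanse_histogram,
    ]
    for step in steps:
        raw_events = step(raw_events)
    return raw_events
```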
A. Raw Event Data Cleansing
The initial step of the data cleansing is to make sure that the basic characterization given in Sect III actually applies to the event log data. In particular, we check whether the partial order of the Completion Time and the complete order of the State Change ID conflict with each other. A reason for conflicting order relations could be delayed logging caused by executing the workflow in a distributed infrastructure or by performing external service invocations.
The second step of the cleansing aims to ensure the reliability of the data, i.e., it establishes whether the data at hand reflect normal operation of the system or an exceptional mode of operation. An example of an exceptional mode of operation is a network problem in a distributed infrastructure.
A summary of the cleansing rules of raw data can be found in Table II. The table contains a rule number, the title of the rule which matches the subsection heading, static and dynamic requirements, and the recommended cleansing action. Static requirements are based on characteristic of the workflow and infrastructure, while dynamic requirements are evaluated based on the event log data.
1) Delayed logging: The logging of events and how it is realized in the infrastructure may result in a violation of the consistency between the partial order of the Completion Time and the total order of the State Change ID.
Figure 2. Cleansing rules overview
An inconsistency of the two orders can be caused by the fact that the Completion Time of an activity is determined at a different point in time than the moment when the number representing the State Change ID is assigned. This can occur because
- the components assigning the Completion Time and the State Change ID run on different systems, so network delay causes time differences, or
- the definition of activity completion varies for the Completion Time and the State Change ID.
In either case it is important to have a complete order. Thus, a new complete order has to be defined based on the available orders. We keep the inconsistent original orders, since the fact that there are inconsistencies is important information for further cleansing steps. Since the new order is complete but potentially based on a partial order, the maximum time difference between two elements which have the same partial order relation to all other elements determines the accuracy effectively provided by the new complete order, and therefore the accuracy of the achievable performance model.
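The following sketch illustrates one way to detect such order conflicts, derive a new total order, and compute its accuracy bound. It reuses the StateChange records sketched in Sect III; this is a minimal illustration under our own naming, not the authors' implementation, and the quadratic conflict scan is deliberately simple.

```python
from datetime import timedelta

def order_conflicts(log):
    """Pairs of events whose State Change ID order contradicts their
    Completion Time order (O(n^2) scan; fine for a sketch)."""
    by_id = sorted(log, key=lambda e: e.change_id)
    return [(a, b) for i, a in enumerate(by_id) for b in by_id[i + 1:]
            if b.completion < a.completion]

def new_total_order(log):
    """A repaired total order: Completion Time first, ID as tie-breaker.
    The inconsistent original orders are kept alongside, not overwritten."""
    return sorted(log, key=lambda e: (e.completion, e.change_id))

def order_accuracy(log):
    """Largest time discrepancy among conflicting pairs; this bounds the
    accuracy achievable by a performance model built on the new order."""
    return max((a.completion - b.completion for a, b in order_conflicts(log)),
               default=timedelta(0))
```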
In the use case (see Sect II), the system is web-based and thus distributed over multiple systems (see Fig 3). This means that after an employee submits a form, the form data has to be sent to the application server. At the application server the Completion Time is determined, but the completion of the activity requires further processing of the data. In particular, an external web service is called (e.g., the bureau of credit registration, BKR in Dutch). After receiving the result of the web service, the state change is logged in the event log and a State Change ID is assigned automatically. Thus, the point in time when the Completion Time is recorded and the point when a State Change ID is assigned may differ, which may result in an order inconsistency.
In the use case we observed that the time difference between form submission (when the employee finishes) and the logged Completion Time is only a few milliseconds, which is low compared to the execution time of manual state changes. Further, we observed that the processing time between determining the Completion Time and assigning a State Change ID may vary from a few seconds up to five minutes. In other words, a state change with a higher ID than another state change can have a Completion Time which is up to five minutes earlier. Conversely, a state change whose ID is 180 higher than another state change's ID can still have an earlier Completion Time.
2) Exceptional Operation: In case the system under investigation is a distributed system or invokes external services, infrastructure-related errors can happen. These errors are often related to the unavailability of components or services, such as external services, the logging server, or the network. Depending on how the infrastructure has been implemented, these errors can be observed in different ways. It should be noted that such infrastructure problems can occur and can influence the quality and consistency of the available event log. Furthermore, infrastructure problems observed during a time span influence the events of many cases. Consequently, the only option to cleanse the data is to exclude the data collected during the identified time span. Potentially more fine-grained exclusion criteria can be defined, but this depends on the actual workflows and the used infrastructure.
In general, infrastructure problems may manifest in the event log as incorrect ordering of state changes, missing state changes, or duplicate state changes. Due to network congestion, the log message of an earlier completed state change may arrive at the event log later than that of a state change completed afterwards. When the sending party gets a timeout (no reaction within a certain period), which usually means that the message is lost, the event will be sent again to the event log. However, it is possible that the original event was held in a message queue somewhere in the infrastructure and will arrive later at the event log. Thus, two events are recorded.
Infrastructure problems are hard to detect automatically. For example, repeating state changes can happen due to infrastructure problems or due to a loop in the workflow. To distinguish between these two situations it is necessary to investigate the relative occurrence of such errors per time span over the complete event log. During a time span with infrastructure problems, the relative number of errors is higher than in the rest of the log. The challenge is to choose the right time span: if it is too short or too long, the deviations due to infrastructure problems are not significant. The chosen time span also defines the granularity of the time spans to exclude.
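A possible heuristic along these lines is sketched below: count anomalies (order violations, duplicate events) per fixed time window and flag windows whose count clearly exceeds the typical level. The window length and factor are our own assumptions and must be tuned per application, as discussed above.

```python
from collections import Counter
from datetime import timedelta

def exceptional_windows(anomaly_times, start, end,
                        window=timedelta(days=1), factor=3.0):
    """Return start times of windows whose anomaly count exceeds
    `factor` times the median window count.

    anomaly_times: timestamps of detected anomalies over the whole log.
    """
    counts = Counter((t - start) // window for t in anomaly_times)
    n_windows = int((end - start) / window) + 1
    per_window = [counts.get(i, 0) for i in range(n_windows)]
    median = sorted(per_window)[n_windows // 2]
    return [start + i * window
            for i, c in enumerate(per_window) if c > factor * max(median, 1)]
```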
In the use case (see Sect II), there were network problems for a period of a few days. Analysis of the event log showed that the system experienced heavy network congestion for three days. This resulted in order violations between Completion Times and State Change IDs and in duplicate state change events. As a consequence, the data of these days is not usable for the further analysis, and we exclude it in the following steps.
B. Start Time Estimate
Estimating the start time of an activity is based on a complete order of state changes (activities), which is consistent with the partial order of the Completion Time.
First, the control flow dependencies in a workflow ensure that an activity can only start after the preceding activity has been completed. Thus, by determining the Completion Time of the preceding activity, an estimate of the start time of the activity can be inferred. With regard to the example in Table I, the activity Control Opening has the preceding activity Send Request. Thus, an estimate for the start time of the Control Opening activity is the completion time of the Send Request activity. This results in an estimated execution time of 26 minutes and 6 seconds as depicted in Fig 1.
Second, we make the assumption that a user can only perform one activity at a time. Thus, an activity performed by a user can only start after another activity performed by the same user has been completed. With regard to the example in Table I, the activity Send Request of case 1 performed by user Andy is preceded by the completion of activity Process Start of case 2. Thus, an estimate for the start time of the Send Request activity is the completion time of the Process Start activity. This results in an estimated execution time of 5 minutes and 40 seconds as depicted in Fig 1.
Thus, the estimated start time of an activity is the maximum of
- the completion time of the preceding activity of the same process, and
- the completion time of the preceding activity of the same user.
Consequently, the start time of the first activity in a process can only be estimated based on the preceding activity of the same user, since there is no preceding activity in the process. In Sect IV-D we will discuss two kinds of user behavior that conflict with this basic inference and how to deal with these conflicts. A minimal sketch of the inference itself is given below.
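The sketch reuses the record type and the repaired total order from the earlier sketches; names are ours.

```python
def estimate_start_times(log):
    """Start of an activity = max(completion of the preceding activity in
    the same case, completion of the preceding activity by the same user).
    Returns a mapping event -> estimated start; None for an event with no
    predecessor in either dimension (e.g., a user's very first activity)."""
    last_in_case, last_by_user, start = {}, {}, {}
    for e in new_total_order(log):
        candidates = [t for t in (last_in_case.get(e.case_id),
                                  last_by_user.get(e.user)) if t is not None]
        start[e] = max(candidates) if candidates else None
        last_in_case[e.case_id] = e.completion
        last_by_user[e.user] = e.completion
    return start
```

Applied to Table I, this yields the two estimates quoted above: the start of Control Opening is the completion of Send Request, and the start of Send Request is the completion of Process Start in case 2.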
### Table II: Summary of the Raw Data Cleansing Rules
<table>
<thead>
<tr>
<th>Rule</th>
<th>Issue</th>
<th>Static requirement</th>
<th>Dynamic requirement</th>
<th>Cleansing action</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>delayed logging</td>
<td>different systems or completion definitions</td>
<td>order inconsistencies in ID and time based orders</td>
<td>introduce new ID guaranteeing absolute order</td>
</tr>
<tr>
<td>2</td>
<td>network problem (exceptional operation)</td>
<td>different systems or external system calls</td>
<td>(a) duplicate state changes: 2 events representing 1 event, (b) higher probability of out of order events in the complete system, or (c) missing state changes in case of an independent logging system</td>
<td>remove data of inferred time span with exceptional operation</td>
</tr>
</tbody>
</table>
**C. Process Instance based Cleansing**
The third step investigates the event log per process instance, also called case, and marks entire cases as unsuitable for performance model mining. In particular, we consider special test cases performed on the system, as well as deadlock and livelock errors.
1) **Test cases**: Productive systems undergo an evolution over time, thus hardware and software updates are performed. To ensure the reliable operation of the software, i.e., of the implemented processes, it is necessary to perform tests. Test data should be excluded from the event log. To exclude the test cases from the event log, criteria have to be determined that identify activities in the event log as part of a test case. Frequently used criteria are specific users performing the activities of the corresponding test process instances, or specific days of the week or times of day at which test process instances are performed.
In the use case the test cases have been performed during the weekend. No specific test users have been used. Therefore, all process instances which had activities completed during the weekend have been marked as test cases and removed from the event log.
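For the weekend criterion used in this use case, the exclusion can be sketched as follows (assuming full timestamps on the records; other systems may need different criteria, such as dedicated test users):

```python
def exclude_weekend_cases(log):
    """Drop entire cases that have any activity completed on a weekend."""
    test_cases = {e.case_id for e in log
                  if e.completion.weekday() >= 5}  # 5 = Saturday, 6 = Sunday
    return [e for e in log if e.case_id not in test_cases]
```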
2) **Deadlock state changes**: Due to a bug in the code or some other error it can happen that a process case is blocked in a state (i.e., endlessly waiting for the exit criteria). In that case a user with admin rights can manually perform a state change, ignoring the exit criteria. If multiple cases are blocked, a programmer can write an automated script which puts these cases in the desired state. Ideally, the transitions which are executed ignoring the criteria should be flagged, such that they can easily be excluded in the generation of the performance model. If this is not the case, these state changes have to be filtered out based on a determined criterion. This can be done manually by asking the administrator which transitions were performed outside the normal flow. Another way is to extract the business rules and then exclude the state changes which do not conform to these rules. An automated method is to filter state changes executed by persons which are normally performed by the software system. Since these are normally automatic activities, a state change performed by a person is an indication of an exceptional state change, although it remains unclear whether this is due to a deadlock or another reason. In any case, such cases should be excluded from the event log.
3) **Livelock state changes**: A livelock is similar to a deadlock, except that the process continuously performs state changes but is unable to complete, i.e., the process execution cannot leave a loop. For example, the system repeatedly tries to invoke an external web service, but each attempt results in an error (cycling between the web service invocation state and the error state).
Livelocks can be detected by counting the repetitions of a certain transition. If the count is above a certain threshold (e.g., five repetitions), the system should raise an alert so that the error can be fixed. If the system does not have such functionality, livelocks can be treated similarly to deadlocks, since they must be resolved through the intervention of an admin user, either by resolving infrastructure problems or by manually performing a state change. Since livelocks are exceptional situations, the corresponding cases must be excluded.
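A sketch of the repetition-count heuristic, reusing the records from the earlier sketches; the threshold must be derived from the application, as stated above:

```python
from collections import Counter

def livelock_cases(log, threshold=5):
    """Cases in which some (source, target) transition repeats more than
    `threshold` times within the same case."""
    repetitions = Counter((e.case_id, e.source, e.target) for e in log)
    return {case for (case, _src, _tgt), count in repetitions.items()
            if count > threshold}
```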
<table>
<thead>
<tr>
<th>Rule</th>
<th>Issue</th>
<th>Static requirement</th>
<th>Dynamic requirement</th>
<th>Cleansing action</th>
</tr>
</thead>
<tbody>
<tr>
<td>3</td>
<td>Test cases</td>
<td>test cases performed in the system</td>
<td>specific characteristics of the data, e.g. specific user, specific time</td>
<td>exclude the complete case</td>
</tr>
<tr>
<td>4</td>
<td>deadlock</td>
<td>automatic state changes exist</td>
<td>(a) automatic state changes performed by an admin, or (b) deviation of the performer of a state change from observed behavior</td>
<td>exclude the complete case for performance mining, but not for control flow mining</td>
</tr>
<tr>
<td>5</td>
<td>Livelock</td>
<td>Loops in workflow</td>
<td>repetition of some state changes per case more than a certain threshold derived from the application</td>
<td>exclude the complete case</td>
</tr>
</tbody>
</table>
Table III
SUMMARY OF PROCESS INSTANCE CLEANSING RULES
D. Histogram based Cleansing
Based on the remaining process instances in the event log, the next step is to investigate the histograms of the durations of activities with the same label over all process instances. The duration is defined as the difference between the Completion Time of an activity and its estimated start time. Based on the histogram a threshold can be defined, i.e., a point beyond which a duration is considered too strong a deviation from expectations. For such activities, the start time is set to unknown and the activities are not considered further. In the following, two reasons for strong deviations are investigated.
1) Working Hours of Users: A challenge for the start time estimation of activities is that working hours are not precisely fixed. Suppose Jim completed the last activity on Tuesday at 17:00 and the next activity completion is on Wednesday at 9:05; this does not mean that Jim took 16 hours and five minutes to complete the task.
We assume the end time of a certain day for a person is the completion time of the last activity that day. Thus, if a person’s last activity of a day is at 16:45, we assume that this person works till 16:45. Determining the start time of a person’s working day is more difficult. We could assume a person always starts at 9:00 sharp, or we could ignore that activity.
The proposed approach is to approximate the start time of a person on a specific day by subtracting the average execution time of the first activity of that day from its Completion Time. Suppose Jim takes on average 3 minutes for state change B. On a certain day, B is the first state change of Jim, completed at 9:05. In this case, we assume Jim started at 9:02. Thus, instead of 17:00 of the previous day, we assume the start time is 9:02 of the same day.
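As a small sketch of this approximation (function and variable names are ours):

```python
from datetime import timedelta

def day_start_estimate(first_completion, avg_duration):
    """Approximate start of a user's working day: Completion Time of the
    day's first activity minus the user's average duration for that
    activity type."""
    return first_completion - avg_duration

# Jim's first state change B of the day completes at 9:05 and averages
# 3 minutes, so the estimated day start is 9:02:
#   day_start_estimate(completed_at_0905, timedelta(minutes=3))
```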
2) Non-visible Activities: In the proposed approach we assume that a user is only working in the system under investigation. However, a person also performs other tasks in addition to working in this particular system. For example, when user Jim completes the state change 'send request' at 09:48, then attends a meeting till 11:00, and then completes the state change 'control opening' at 11:05, the system will assume that it took Jim 77 minutes to execute state change 'control opening', instead of the actual five minutes of work. We call such activities (e.g., attending a meeting, having a coffee break or lunch, or working in a different system) non-visible activities, since they are activities of the user but are not documented in the event log.
However, if we take a sufficiently large data set, the ratio of non-visible to visible activities is spread out evenly. If we further assume that this ratio remains constant over time, it also holds for predictions based on a derived performance model. This line of reasoning no longer holds if, for example, the management decides that users must perform non-visible activities with higher priority than visible activities.
The threshold for extreme values can be determined either by a percentile score (e.g., the upper 10 percent of the values), by a z-score (e.g., more than two standard deviations above the average), or by domain experts. One method for determining a threshold via domain experts is to ask one or preferably multiple domain experts to approximate the execution time for the worst case scenario of a specific activity. The average or maximum of these approximations, possibly multiplied by a certainty factor (e.g., a factor of two), can be used as the threshold. For example, three experts give time estimates of 15, 20 and 30 minutes, respectively, as the worst case scenario of activity A. The maximum time of 30 minutes is multiplied by two, which gives a threshold of 60 minutes.
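The three threshold variants can be sketched as follows; the percentile computation is deliberately simple, and in practice a statistics library would be used:

```python
import statistics

def percentile_threshold(durations, upper_pct=10.0):
    """Duration above which the upper `upper_pct` percent of values lie."""
    s = sorted(durations)
    return s[max(0, int(len(s) * (1 - upper_pct / 100)) - 1)]

def zscore_threshold(durations, k=2.0):
    """Mean plus k standard deviations."""
    return statistics.mean(durations) + k * statistics.pstdev(durations)

def expert_threshold(worst_case_estimates, certainty=2.0):
    """Maximum expert worst-case estimate times a certainty factor."""
    return max(worst_case_estimates) * certainty

# expert_threshold([15, 20, 30]) -> 60 (minutes), as in the example above
```

With the numbers reported below (mean 229 sec, standard deviation 319 sec), the z-score variant yields 229 + 2 × 319 ≈ 865 sec, matching the reported threshold.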
To illustrate the effect, we consider the 'send request' activity of the use case workflow as discussed in Sect II. In total there are 4774 executions of this activity remaining in the dataset.¹ The related histogram of the durations of this activity is depicted in Fig 4. The average duration of the activity is 229 sec and the standard deviation is 319 sec, indicating that the spread of the data is quite large. Applying a 10% percentile threshold to the data means that all durations longer than 510 sec are neglected, which excludes 490 activities in the data set. Following the z-score approach, all data above 865 sec is ignored, which affects 223 activities in the dataset. Finally, following the estimation from experts, the worst case estimate was 30 minutes, which results in a threshold of 3600 sec, not affecting the dataset at all.

¹ It should be noted that these numbers can be directly mapped to the actual numbers but are not the real ones.
E. Data Independence Test
The last step is to analyze the independence assumptions of the data. In a performance model the assumption is that the duration of an activity always follows the same distribution, independent of the day of the week, the experience of the user, or the user itself. Since all the characteristics mentioned are categorical data, we propose to perform a \( \chi^2 \) test for homogeneity [2]. The aim is to determine whether the distributions of durations observed in each category can be considered the same distribution. A basic requirement of the approach is that more than 80% of the duration bins contain at least 5 observations. We illustrate the approach below for the relation between the weekday on which an activity is completed and the duration of the activity. The remaining criteria can be applied in a similar way. Alternatively, other tests like Fisher's exact test could be applied.
It should be noted that although some information like a weekday or a measure of experience could be represented as a continuous number, we consider them categorical information anyway. This is because we do not think that twice the number of a weekday has any meaning. In case of experience, e.g. measured by the number of cases performed, we do not see that twice the amount of cases means twice the experience. Therefore, we treat them as categorical data.
1) Weekday independence: The analysis for the weekday is based on the data contained in Table IV and visualized in Fig 5 for the activity 'new request', based on the cleansed data. The numbers represent the duration distribution as a percentage of the overall number of executions for a particular weekday. Percentages are used instead of absolute numbers since the variations in the absolute numbers per weekday were so high that the test would not provide reliable results. To perform the \( \chi^2 \) test these percentages are multiplied by a constant (e.g., 100) in order to obtain count-like whole numbers on which the test can be performed.
In particular, a value is calculated based on the following formula:
\[
Q = \sum_{r,c} \frac{(O_{r,c} - E_{r})^2}{E_{r}}
\]
where \( r \) indexes the duration bins, \( c \) indexes the categories, \( O_{r,c} \) is the observed (scaled) number of instances in duration bin \( r \) for category \( c \), and \( E_{r} \) is the expected number of instances in duration bin \( r \). The expected instance number \( E_{r} \) can be calculated as the average of the observed instance numbers over the categories, thus
\[
E_{r} = \frac{1}{n} \sum_{c} O_{r,c}
\]
where \( n \) is the number of categories. The null hypothesis that the same distribution applies for all categories is accepted if the calculated value \( Q \) stays below \( \chi^2_{df;\alpha} \), the \( \alpha \) quantile of the \( \chi^2 \) distribution for \( df \) degrees of freedom. The degrees of freedom are given by \( df = (\#\mathrm{columns} - 1) \times (\#\mathrm{rows} - 1) \).
Applying these formulas to the data in Table IV produces the following results: the degrees of freedom \( df \) are 60 and the 99% quantile of the \( \chi^2 \) distribution is \( \chi^2_{60;0.99} = 37.4848 \). The determined \( Q \) value is 8.2686, which is below the quantile, and therefore the distributions observed per weekday are considered to stem from the same distribution. Thus, the observed durations are independent of the particular weekday.
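A direct transcription of the \( Q \) statistic defined above (observed table indexed as `table[r][c]`); note that this is the row-average variant used here, not the textbook contingency-table test:

```python
def q_statistic(table):
    """table[r][c]: observed (scaled) value for duration bin r, category c.
    E_r is the average over the categories, as in the formulas above."""
    q = 0.0
    for row in table:
        expected = sum(row) / len(row)
        q += sum((observed - expected) ** 2 / expected for observed in row)
    return q

# The quantile to compare against can be taken from a library, e.g.
#   from scipy.stats import chi2
#   threshold = chi2.ppf(alpha, df)
```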
2) Iteration independence: A process may contain cycles/loops. It has to be checked whether the durations per iteration are equally distributed, i.e., whether a second iteration generally takes less time than the first one. We apply the same approach to the data for the 'new request' activity presented in Table V and visualized in Fig 6. In the table, the first occurrence of the activity and the later repetitions are considered. The later repetitions are not further distinguished from each other, simply because otherwise the dataset would become too small. As can be seen, we already reduced the number of durations considered compared to Table IV, since there was not sufficient data available.
---
² Estimated value taken from the evaluation section (see Sect V).
Calculating the \( Q \) value results in \( Q = 71.1563 \). Since \( \chi^2_{0.05} = 1.239 \) is significantly below the \( Q \) value, the null hypothesis is rejected, and thus the first and the subsequent iterations do not follow the same distribution.
In the example process, the activity 'New Request' can be repeated multiple times for one process case. In particular, the assigned roles remain the same, but the work performed in the activity itself differs. In the first iteration the full client data has to be obtained, while in later iterations only partial information has to be adapted. A second iteration is required in case an error has to be resolved or the back office needs additional information. These second or further iterations take significantly less time than the first one. As a consequence, the iterations of activities have to be distinguished when creating a performance model.
3) Discussion: Data independence is a critical requirement for determining a performance model. In case a data dependency is found, a possible solution is to resolve it by further distinguishing activities. In the loop scenario, a possibility would be to split the 'new request' activity as contained in the original event log into two activities: a 'first new request' and a 'repeated new request' activity. Based on this distinction, data independence can be confirmed and a performance model can be derived.
In situations where a refinement of activities is applied, the histogram based cleansing and the data independence test have to be repeated to determine a cleansed event log, which can be used to mine a performance model.
V. Evaluation
The result of the approach presented in this paper is a cleansed event log, which can be used for mining the control flow or performance models. Since the motivation in this paper was related to process performance, and since the performance model is strongly dependent on the start time estimates defined in the presented approach, the evaluation will focus on this aspect.
The aim of the evaluation is to see whether the performance model per activity, which can be directly derived from the cleansed event log, conforms to the expectations of the managers in the company. Since no performance model was available at the company, we created a questionnaire for an analyst at the bank and an analyst of the software supplier to estimate the durations of the activities of a process. The time estimates follow the Program Evaluation and Review Technique (PERT) [3]. The idea is that a domain expert gives three time estimates for each activity: an optimistic estimate, i.e., the minimum time under the most favorable conditions, a pessimistic estimate, i.e., the time under the most unfavorable conditions, and the most likely time. The expected time for each activity is a weighted average of these estimates, following the formula (optimistic time + 2 × most likely time + pessimistic time) / 4.
This assessment has been performed for several activities, not just the 'new request' activity depicted in Table VI. The conclusion is that the data in the cleansed log file is indeed in the range of the expected durations. In case of the 'new request' activity, the durations contained in the log file are on average about 4 minutes, while the optimistic estimates of the experts were 3 and 5 minutes. Adding the standard deviation observed in the log file, we get around 10 minutes, which is the estimated average. A challenge with the PERT method is that it makes an assumption about the underlying distribution, which may deviate from the actually observed distribution.
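The weighted average can be sketched as follows; this is the /4 variant stated above (classical PERT weights the most likely time by 4 and divides by 6), and it reproduces the expert averages in Table VI:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected activity duration as used in the questionnaire:
    (optimistic + 2 * most likely + pessimistic) / 4."""
    return (optimistic + 2 * most_likely + pessimistic) / 4

# pert_estimate(3, 10, 30)  -> 13.25 ~ 13.3 (software supplier analyst)
# pert_estimate(5, 10, 30)  -> 13.75 ~ 13.8 (bank analyst)
```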
### Table IV: Weekday Duration Distribution in Percent
<table>
<thead>
<tr>
<th>Weekday</th>
<th>40</th>
<th>80</th>
<th>120</th>
<th>160</th>
<th>200</th>
<th>240</th>
<th>280</th>
<th>320</th>
<th>More</th>
</tr>
</thead>
<tbody>
<tr>
<td>Monday</td>
<td>8.8</td>
<td>23.3</td>
<td>21.3</td>
<td>10.5</td>
<td>7.6</td>
<td>5.4</td>
<td>4.1</td>
<td>2.1</td>
<td>2.3</td>
</tr>
<tr>
<td>Tuesday</td>
<td>5.1</td>
<td>21.0</td>
<td>23.6</td>
<td>12.3</td>
<td>9.0</td>
<td>5.8</td>
<td>4.3</td>
<td>2.4</td>
<td>2.1</td>
</tr>
<tr>
<td>Wednesday</td>
<td>7.2</td>
<td>22.9</td>
<td>19.9</td>
<td>14.3</td>
<td>6.1</td>
<td>5.8</td>
<td>3.6</td>
<td>4.0</td>
<td>2.0</td>
</tr>
<tr>
<td>Thursday</td>
<td>7.0</td>
<td>23.0</td>
<td>21.5</td>
<td>14.7</td>
<td>8.2</td>
<td>5.8</td>
<td>3.0</td>
<td>2.4</td>
<td>1.6</td>
</tr>
<tr>
<td>Friday</td>
<td>7.8</td>
<td>23.7</td>
<td>16.4</td>
<td>12.6</td>
<td>8.5</td>
<td>5.5</td>
<td>4.3</td>
<td>4.4</td>
<td>1.8</td>
</tr>
</tbody>
</table>
**Figure 6. Visualization of the Loop Probability Distribution in Percent**
### Table V: Loop Probability Distribution in Percent
<table>
<thead>
<tr>
<th>Loop</th>
<th>40</th>
<th>80</th>
<th>120</th>
<th>160</th>
<th>200</th>
<th>240</th>
<th>280</th>
<th>320</th>
<th>More</th>
</tr>
</thead>
<tbody>
<tr>
<td>first</td>
<td>2</td>
<td>23</td>
<td>22</td>
<td>14</td>
<td>8</td>
<td>6</td>
<td>4</td>
<td>3</td>
<td>17</td>
</tr>
<tr>
<td>repetition</td>
<td>55</td>
<td>19</td>
<td>8</td>
<td>6</td>
<td>2</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>7</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Expert</th>
<th>Case</th>
<th>Duration (min)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4">analyst of software supplier</td>
<td>best case</td>
<td>3.0</td>
</tr>
<tr>
<td>average case</td>
<td>10.0</td>
</tr>
<tr>
<td>worst case</td>
<td>30.0</td>
</tr>
<tr>
<td>average</td>
<td>13.3</td>
</tr>
<tr>
<td rowspan="4">analyst bank</td>
<td>best case</td>
<td>5.0</td>
</tr>
<tr>
<td>average case</td>
<td>10.0</td>
</tr>
<tr>
<td>worst case</td>
<td>30.0</td>
</tr>
<tr>
<td>average</td>
<td>13.8</td>
</tr>
<tr>
<td rowspan="2">event log</td>
<td>average</td>
<td>3.9</td>
</tr>
<tr>
<td>standard deviation</td>
<td>5.5</td>
</tr>
</tbody>
</table>
Table VI
SUMMARY OF ESTIMATED AND GUESSED DURATIONS
Over all activities investigated it turns out that the bank analyst is more optimistic in his estimates and, as a consequence, is closer to the values contained in the cleansed event log. We presented the results of this study to the experts, and they found the discrepancy with the estimates contained in the log file explainable. Overall, they were content with the accuracy of the results. Our aim for the coming period is to involve more experts and to extend the investigation to more processes and activities and larger data sets, to obtain a better empirical basis for the evaluation.
VI. Related Work
There is a considerable amount of related work on performance model mining. Many approaches have been implemented in the context of ProM [4] and are based on event logs provided in the Mining Extensible Markup Language (MXML) [5]. Rozinat et al. [5] present an approach to mine simulation models from these MXML event logs. The idea is to automatically generate a process model represented as a Colored Petri Net (CPN). Depending on the richness of the event log, the resulting CPN may cover not only the control-flow perspective but also the resource and performance perspectives. However, all approaches around the ProM tool assume that the event log contains the start and end time of an activity, which is not the case in our scenario.
However, there is also some literature making fewer assumptions about the available event logs. For example, in [6] the authors try to derive the relation between events and process instances, assuming there is no explicit data available to make the link. In [7] the authors address noisy event logs and ways of dealing with them. However, the focus there is not on performance models.
Classical performance models, such as Queuing Networks [8] or stochastic Petri Nets [9], assume that the complete system is modeled. The models can then be used either to perform an equilibrium analysis or a transient analysis. In our situation, the event log does not capture the complete system but only a part of it. To be able to apply classical analysis, we would have to make strong assumptions about the systems that are not represented.
It should be noted that not all event logs focus on performance or control flow mining. For example, in [10] the authors base their work on change logs, i.e., logs documenting ad-hoc changes performed on process instances. These change logs are then used to mine reference models.
VII. Conclusion
In this paper we propose a systematic approach to prepare event log data from semi-structured processes for the derivation of a performance model. In particular, the main goal is to estimate the start time of an activity in the process. This is necessary since, in a semi-structured process, activities are not always performed solely in one computer system, and therefore the start time of an activity cannot be acquired automatically. The start time estimates are checked for outliers based on various error sources, and the independence of situational characteristics is checked. The resulting event log can then be further used in combination with process mining techniques to actually infer a performance model.
Future work will strengthen the evaluation of our approach and apply it to more commercial scenarios.
The RIGHT Model for Continuous Experimentation
Fagerholm, Fabian
2017-01
http://hdl.handle.net/10138/175085
https://doi.org/10.1016/j.jss.2016.03.034
Downloaded from Helda, University of Helsinki institutional repository.
This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail.
Please cite the original version.
The RIGHT model for Continuous Experimentation
Fabian Fagerholm a,*, Alejandro Sanchez Guinea b, Hanna Mäenpää a, Jürgen Münch a,c
a Department of Computer Science, University of Helsinki, P.O. Box 68, FI-00014 University of Helsinki, Finland
b University of Luxembourg, 4 rue Alphonse Weicker, L-2721, Luxembourg
c Faculty of Informatics, Reutlingen University, Alteburgstraße 150, D-72762 Reutlingen, Germany
Abstract
**Context:** Development of software-intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation.
**Objective:** This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system.
**Method:** An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed.
**Results:** Building blocks for a continuous experimentation system and infrastructure are presented.
**Conclusions:** A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
* Corresponding author
Email addresses: fabian.fagerholm@helsinki.fi (Fabian Fagerholm), alejandro.sanchezguinea@uni.lu (Alejandro Sanchez Guinea), hanna.maenpaa@cs.helsinki.fi (Hanna Mäenpää), juergen.muench@cs.helsinki.fi, juergen.muench@reutlingen-university.de (Jürgen Münch)
Keywords: Continuous experimentation, Product development, Software architecture, Software development process, Agile software development, Lean software development,
1. Introduction
The accelerating digitalisation in most industry sectors means that an increasing number of companies are or will soon be providers of software-intensive products and services. Simultaneously, new companies already enter the marketplace as software companies. Software enables increased flexibility in the types of services that can be delivered, even after an initial product has been delivered to customers. Many constraints that previously existed, particularly in terms of the behaviour of a product or service, can now be removed.
With this newfound flexibility, the challenge for companies is no longer primarily how to identify and solve technical problems, but rather how to solve problems which are relevant for customers and thereby deliver value. Finding solutions to this problem has often been haphazard and based on guesswork, but many successful companies have approached this issue in a systematic way. Recently, a family of generic approaches has been proposed. For example, the Lean Startup methodology [26] proposes a three-step cycle: build, measure, learn.
However, a detailed framework for conducting systematic, experiment-based software development has not been elaborated. Such a framework has implications for the technical product infrastructure, the software development process, the requirements regarding skills that software developers need to design, execute, analyse, and interpret experiments, and the organisational capabilities needed to operate and manage a company based on experimentation in research and development.
Methods and approaches for continuous experimentation with software product and service value should itself be based on empirical research. In this paper, we present the most important building blocks of a framework for continuous experimentation. Specifically, our research question is:
RQ How can Continuous Experimentation with software-intensive products and services be organised in a systematic way?
To further scope the question, we split it into two sub-questions:
RQ1 What is a suitable process model for Continuous Experimentation with software-intensive products and services?
RQ2 What is a suitable infrastructure architecture for Continuous Experimentation with software-intensive products and services?
We give an answer to the research questions by validating an analytically derived model against a series of case studies in which we implemented different parts of the model in cooperation with two startup companies. The result is the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing). This model focuses on developing the right software, whereas the typical focus of software engineering in the past has been on developing the software right (e.g. in terms of technical quality). The model is instantiated in the RIGHT process model and the RIGHT infrastructure architecture model. Together, these instantiations address the need to integrate the requirements, design, implementation, testing, deployment, and maintenance phases of software development in a way that uses continuous empirical feedback from users.
The rest of this paper is organised as follows. In Section 2, we review related work on integrating experimentation into the software development process. In Section 3, we describe the research approach and context of the study. In Section 4, we first present our proposed model for continuous experimentation, and then relate the findings of our case study to it in order to illustrate its possible application and show the empirical observations that it was grounded in. In Section 5, we discuss the model and consider some possible variations. Finally, we conclude the paper and present an outlook on future work in Section 6.
2. Related work
Delivering software that has value – utility for its users – can be considered a primary objective for software development projects. In this section, we describe models for systematic value delivery and approaches for using experiments as a means for value testing and creation. In addition, we discuss related work with respect to experiments at scale.
2.1. Models for systematic value delivery
Lean manufacturing and the Toyota Production System [22] have inspired the definition of Lean software development. This approach provides comprehensive guidance for the combination of design, development, and validation built as a single feedback loop focused on discovery and delivery of value [25]. The main ideas of this approach, which have been emphasised since its introduction, are summarised in seven principles: optimize the whole, eliminate waste, build quality in, learn constantly, deliver fast, engage everyone, and keep getting better [24].
Lean Startup [26] provides mechanisms to ensure that product or service development effectively addresses what customers want. The methodology is based on the Build-Measure-Learn loop that establishes learning about customers and their needs as the unit of progress. It proposes to apply scientific method and thinking to startup businesses in the form of learning experiments. As the results of experiments are analysed, the company has to decide to “persevere” on the same path or “pivot” in a different direction while considering what has been learned from customers.
Customer Development [4] emphasises the importance of not only doing product development activities but also to learn and discover who a company’s initial customers will be, and what markets they are in. Customer Development argues that a separate and distinct process is needed for those activities. Customer Development is a four-step model divided into a search and an execution phase. In the search phase, a company performs customer discovery, testing whether the business model is correct (product/market fit), and customer validation, which develops a replicable sales model. In the execution phase, customer creation focuses on creating and driving demand, and
is the transition from an organisation designed to learn and discover to one that is optimised for cost-efficient delivery of validated products or services.
In light of the benefits that a methodology such as Lean Startup can provide, where controlled experiments constitute the main activity driving development, Holmström Olsson et al. [12] propose a target stage for any company that wishes to build a development system with the ability to continuously learn from real-time customer usage of software. They describe the stages that a company has to traverse in order to achieve that target as the “stairway to heaven”. The target stage is achieved when the software organisation functions as an R&D experiment system. The stages on the way to achieving the target are: (i) traditional development, (ii) agile R&D organisation, (iii) continuous integration, and (iv) continuous deployment. The authors first describe these four stages and then analyse them through a multiple-case study that examines the barriers that exist on each step on the path towards continuous deployment. The target stage is only described; the authors do not detail any means to overcome the barriers. A main finding from the case study is that the transition towards Agile development requires shifting to small development teams and focusing on features rather than on components. Also, it is relevant to notice that the transition towards continuous integration requires an automated build and test system (continuous integration), a main version control branch to which code is continuously delivered, and modularised development. Holmström Olsson et al. found that in order to move from continuous integration to continuous deployment, organisational units such as product management must be fully involved, and close work with a very active lead customer is needed when exploring the product concept further. The authors suggest two key actions to make the transition from continuous deployment to an R&D experiment system. First, the product must be instrumented so that field data can be collected in actual use. Second, organisational capabilities must be developed in order to effectively use the collected data for testing new ideas with customers.
Other works have studied some of the stages of the “stairway to heaven” individually. Ståhl & Bosch [27] have studied the continuous integration stage, pointing out that there is no homogeneous practice of continuous integration in the industry. They propose a descriptive model that allows studying and evaluating the different ways in which
continuous integration can be viewed. Eklund & Bosch [7] present an architecture that supports continuous experimentation in embedded systems. They explore the goals of an experiment system, develop experiment scenarios, and construct an architecture that supports the goals and scenarios. The architecture combines an experiment repository, data storage, and software to be deployed on embedded devices via over-the-air data communication channels. The architecture also considers the special requirements for safety in, e.g., automotive applications. However, the main type of experiment is confined to A/B testing, and the architecture is considered mainly from the perspective of a software development team rather than a larger product development organisation.
Holmström Olsson & Bosch [13] describe the Hypothesis Experiment Data-Driven Development (HYPEX) model. The goal of this model is to shorten the feedback loop to customers. It consists of a loop where potential features are generated into a feature backlog, features are selected and a corresponding expected behaviour is defined. The expected behaviour is used to implement and deploy a minimum viable feature (MVF). Observed and expected behaviour is compared using a gap analysis, and if a sufficiently small gap is identified, the feature is finalised. On the other hand, if a significant gap is found, hypotheses are developed to explain it, and alternative MVFs are developed and deployed, after which the gap analysis is repeated. The feature may also be abandoned if the expected benefit is not achieved.
2.2. Systematic value creation through experimentation
The models outlined above all aim to make experimentation systematic in the software development organisation. One important conceptual concern is the definition of experimentation. Experimentation has been established in software engineering since the 1980s. Basili et al. [3] were among the first to codify a framework and process for experimentation. Juristo et al. [14] and Wohlin et al. [31] present more recent syntheses regarding experimentation in software engineering. Taken together, these works show that “experimentation” in software engineering can be considered in a broad sense, including both controlled experiments but also more explorative activities which aim at understanding and discovery rather than hypothesis testing. For the purposes of this article, we consider experimentation to be a range of activities that can be placed
within a spectrum including controlled experiments as well as open-ended exploration. However, we emphasise that regardless of the placement within this spectrum, all methods require rigorous study designs and have a defensible and transparent way of reasoning and drawing conclusions from empirical data. They are not the same method being applied more or less carefully. The logic of controlled experiments relies on careful manipulation of variables, observation of effects, and analysis to test for causal relationships. Quasi-controlled experiments relax some of the requirements for randomised treatment. Case studies often include qualitative elements and their logic is different from controlled experiments: they generalise analytically rather than statistically [32]. Qualitative methods may also be used alone, such as through interview-or observation-based studies.
Experimentation may also be considered in terms of goals, and goals may exist on different levels of the product development organisation. On the product level, experimentation may be used to select features from a set of proposed features. On the technical level, experimentation may be used to optimise existing features. However, the model presented in this paper links experimentation on the product and technical level to the product vision and strategy on the business level. Experimentation becomes a systemic activity that drives the entire organisation. This allows for focused testing of business hypotheses and assumptions, which can be turned into faster decision-making and reaction to customer needs. Depending on the specific method used, the results of an experiment may suggest new information which should be incorporated into the decision-making process.
2.3. Considerations for running experiments at a large scale
Previous works have presented case studies that exhibit different aspects of continuous experimentation. Steiber [28] reports on a study of the continuous experimentation model followed by Google, analysing a success story of this approach. Tang et al. [29] describe an overlapping experiment infrastructure, developed at Google, that allows web queries in a search engine to be part of multiple experiments, thus allowing more experiments to be carried out at a faster rate. Adams [1] presents a case study
on the implementation of Adobe’s Pipeline, a process that is based on the continuous experimentation approach.
Kohavi et al. [16, 17] note that running experiments at large scale requires addressing multiple challenges in three areas: cultural/organisational, engineering, and trustworthiness. The larger organisation needs to learn the reasons for running controlled experiments and the trade-offs between controlled experiments and other methods of evaluating ideas. Even negative experiments should be run, which degrade user experience in the short term, because of their learning value and long-term benefits. When the technical infrastructure supports hundreds of concurrent experiments, each with millions of users, classical testing and debugging techniques no longer apply because there are millions of live variants of the system in production. Instead of heavy up-front testing, Kohavi et al. report having used alerts and post-deployment fixing. The system has also identified many negative features that were avoided despite being advocated by key stakeholders, saving large amounts of money.
Experimentation also has an important relationship with company culture. Kohavi et al. [15] describe a platform for experimentation built and used at Microsoft, noting the cultural challenges involved in using experiment results, rather than opinions from persons in senior positions, as the basis of decisions. They suggest, for example, that one should avoid trying to build features through extensive planning without early testing of ideas, that experiments should be carried out often, that a failed experiment is a learning opportunity rather than a mistake, and that radical and controversial ideas should be tried. All these suggestions are challenging to put into practice in organisations that are not used to experimentation-based decision-making. Kohavi et al. note the challenges they faced at Microsoft, and describe efforts to raise awareness of the experimentation approach.
The final stage of the “stairway to heaven” model is detailed and analysed by Bosch [5]. The differences between traditional development and the continuous approach are analysed, showing that in the context of the new, continuous software development model, R&D is best described as an “innovation experiment system” approach where the development organisation constantly develops new hypotheses and tests them with certain groups of customers. This approach focuses on three phases:
pre-deployment, non-commercial deployment, and commercial deployment. The authors present a first systematisation of this so-called “innovation experiment system” adapted for software development for embedded systems. It is argued that aiming for an “innovation experiment system” is equally valid for embedded systems as it is in the case of cloud computing and Software-as-a-Service (SaaS), and that the process could be similar in both cases. That is, requirements should evolve in real time based on data collected from systems in actual use with customers.
Inspired by the ideas that define the last stage of the “stairway to heaven”, we develop and propose the RIGHT model for Continuous Experimentation. In this model, experiments are derived from business strategies and aim to assess assumptions derived from those strategies, potentially invalidating or supporting the strategy. Previous works have explored frameworks for linking business goals and strategies to software development activities (e.g., [2], [20]), but those works have not considered the particular traits of an experiment system such as the one presented in this paper. The proposed model also describes the platform infrastructure that is necessary to establish the whole experiment system. The Software Factory [8], a software development laboratory well suited for continuous experimentation, can serve as infrastructure for the proposed model. In a previous article, in which we presented a study on creating minimum viable products [19] in the context of collaboration between industry and academia, we showed the Software Factory laboratory in relation to the Lean Startup approach and continuous experimentation. Some of the foundational ideas behind Software Factory with respect to continuous experimentation have been studied in the past, analysing, for instance, the establishment of laboratories specifically targeted at continuous development [21] and the impact of continuous integration in teaching software engineering.
The building blocks presented in this paper, although generalisable with certain limitations, are derived from a startup environment where the continuous experimentation approach is not only well suited but possibly the only viable option for companies to grow. Our work has similarities to the “Early Stage Startup Software Development Model” (ESSSDM) of Bosch et al. [6], which extends existing Lean Startup approaches by offering more operational process support and better decision-making support for startup companies. Specifically, ESSSDM provides guidance on when to move product ideas forward, when to abandon a product idea, and which techniques to use and when, while validating product ideas. May [18] presents some of the many challenges faced when trying to establish a startup following the Lean Startup methodology, with insights that we have considered in the present work.
3. Research approach
Our general research framework can be characterised as design science research [11], in which the purpose is to derive a technological rule that can be used in practice to achieve a desired outcome in a certain field of application [30]. The continuous experimentation model presented in this paper was first constructed based on the related work presented in the previous section as well as the authors’ experience. While a framework can be derived by purely analytic means, its validation requires grounding in empirical observations. For this reason, we conducted a holistic multiple case study [32] in the Software Factory laboratory at the Department of Computer Science, University of Helsinki, in which we matched the initial model to empirical observations and made subsequent adjustments to produce the final model. The model can still be considered tentative, pending further validation in other contexts. It is important to note that this study investigates how Continuous Experimentation can be carried out in a systematic way, independently of the case projects’ goals and the experiments carried out in them. Those experiments and their outcomes are treated as qualitative findings in the context of this study. In this section, we describe the case study context and the research process.
3.1. Context
The Software Factory is an educational platform for research and industry collaboration [8]. In Software Factory projects, teams of Master’s-level students use contemporary tools and processes to deliver working software prototypes in close collaboration with industry partners. The goal of Software Factory activities is to give students the means to apply their advanced software development skills in an environment with working-life relevance, and to deliver meaningful results for their customers [19].
During the case projects used in this study, two of the authors were involved as participant observers. The first author coordinated the case projects: starting the projects, handling contractual and other administrative issues, following up on progress through direct interaction with the customer and student teams, ending the projects, handling project debriefing, and coordinating the customer interviews. The third author also participated as an observer in several meetings in which the customer and student teams collaborated. The researchers were involved in directing the experiment design activities together with the customer; students were not directly involved in these activities. However, the customer and students worked autonomously and were responsible for project management, technical decisions, and other issues related to the daily operations of the project.
3.1.1. Case Company 1
Tellybean Ltd.¹ is a small Finnish startup that develops a video calling solution for the home television set. During September 2012–December 2013, the company was a customer in three Software Factory projects that aimed to create an infrastructure to support measurement and management of the architecture of their video calling service. Tellybean aims to deliver a life-like video calling experience. Their value proposition – “the new home phone as a plug and play experience” – is targeted at late-adopter consumer customers who are separated from their families, e.g. due to migration into urban areas, global social connections, or overseas work. The company puts special emphasis on discovering and satisfying the needs of the elderly, making ease of use the most important non-functional requirement of their product. The primary means of service differentiation in the marketplace are affordability, accessibility, and ease of use. For the first commercial launch, and to establish the primary delivery channel of their product, the company aims to partner with telecom operators. The company had made an initial in-house architecture and partial implementation during a pre-development phase prior to the Software Factory projects. A first project was conducted to extend the platform functionality of this implementation. A second project was conducted to
validate concerns related to the satisfaction of operator requirements. After this project, a technical pivot was conducted, with major portions of the implementation being changed; the first two projects contributed to this decision. A third project was then conducted to extend the new implementation with features for managing the software on already delivered products, enabling continuous delivery. The launch strategy can be described as an MVP launch with post-development adaptation. The three projects conducted with this company relate to establishing a continuous experimentation process and building capabilities to deliver software variations on which experiments can be conducted. They also provided early evidence regarding the feasibility of the product for specific stakeholders, such as operator partners, developers, and release management.

¹http://www.tellybean.com/
3.1.2. Product
The Tellybean video calling service has the basic functionalities of a home phone: it allows making and receiving video calls and maintaining a contact list. The product is based on an Android OS set-top-box (STB) that can be plugged into a modern home TV. The company maintains a backend system for mediating calls to their correct respondents. While the server is responsible for routing the calls, the actual video call is performed as a peer-to-peer connection between STBs residing in the homes of Tellybean’s customers.
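As an illustration of this mediation scheme, the following minimal Python sketch shows how a call mediator of this kind might broker connections; the class, registry layout, identifiers, and addresses are hypothetical stand-ins rather than Tellybean’s actual design.

```python
# Minimal sketch of the call-mediation idea (illustrative assumptions only).
# The mediator keeps a registry of online set-top-boxes and, on a call
# request, hands out the peer's network address; the video stream itself
# then flows peer-to-peer between the two STBs.

class CallMediator:
    def __init__(self):
        self.registry = {}  # user id -> (host, port) of that user's STB

    def register(self, user_id, host, port):
        """Called by an STB when it comes online."""
        self.registry[user_id] = (host, port)

    def place_call(self, caller_id, callee_id):
        """Route a call request: return the peer address or a failure reason."""
        if callee_id not in self.registry:
            return {"status": "offline"}
        # The mediator only brokers the connection; it never carries media.
        return {"status": "ok", "peer": self.registry[callee_id]}

mediator = CallMediator()
mediator.register("grandma", "203.0.113.10", 5060)
print(mediator.place_call("alice", "grandma"))
# {'status': 'ok', 'peer': ('203.0.113.10', 5060)}
```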
The company played the role of product owner in three Software Factory projects during September 2012–December 2013. The aim of the first two projects was to create new infrastructure for measuring and analysing usage of the product in its real environment. This information was important in order to establish the product’s feasibility for operators and to support architectural decisions regarding scalability, performance, and robustness. For the present research, the first two projects were used to validate the steps required to establish a continuous experimentation process. The third project at Software Factory delivered an automated system for managing and updating the STB software remotely. This project was used to investigate factors related to the architecture needs for continuous experimentation. Table 1 summarises the goals and motivations of the projects in detail.
Table 1: Scope of each of the three Tellybean projects at Software Factory.
<table>
<thead>
<tr>
<th>Project</th>
<th>User story</th>
<th>Rationale</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project 1</td>
<td>As an operator, I want to be able to see metrics for calls made by the video call product’s customers.</td>
<td>… so that I can extract and analyse business-critical information. … so that I can identify needs for maintenance of the product’s technical architecture.</td>
</tr>
<tr>
<td>Project 2</td>
<td>As a Tellybean developer, I want to be sure that our product’s system architecture is scalable and robust. As a Tellybean developer, I want to know the technical weaknesses of the system. As a Tellybean developer, I want to receive suggestions for alternative technical architecture options.</td>
<td>… so that I know the limitations of the system. … so that I can predict needs for scalability of the platform.</td>
</tr>
<tr>
<td>Project 3</td>
<td>As a technical manager, I want to be able to push an update to the Tellybean set-top-boxes with a single press of a button.</td>
<td>… so that I can deploy upgrades to the software on one or multiple set-top-boxes.</td>
</tr>
</tbody>
</table>
Each project had a 3–7-person student team, a company representative who was accessible at all times, and an effort of between 600 and 700 person-hours.
3.1.3. Project 1
The aim of Tellybean’s first project at the Software Factory was to build means for measuring performance of their video calling product in its real environment. The goal was to develop a browser-based business analytics system. The team was also assigned to produce a back-end system for storing and managing data related to video calls, in order to satisfy operator monitoring requirements. The Software Factory project was carried out in seven weeks by a team of four Master’s-level computer science students. Competencies required in the project were database design, application programming, and user interface design.
The backend system for capturing and processing data was built on the Java Enterprise Edition platform, utilising the open-source Spring framework. The browser-based reporting system was built using the JavaScript frameworks D3 and NVD3 to produce vivid, interactive reports. A cache of historical call data was implemented to ensure the performance of the system.
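To make the caching idea concrete, here is a minimal Python sketch of memoising aggregates over immutable historical call data; the class and record format are hypothetical, and the project’s actual backend was written in Java on Spring.

```python
# Minimal sketch of the historical-call-data cache idea (hypothetical design).
# Because historical records are immutable, aggregates can be computed once
# and reused, so repeated reporting queries avoid rescanning raw call data.

from collections import defaultdict

class CallReportCache:
    def __init__(self, call_records):
        self.call_records = call_records  # iterable of (day, duration_s) pairs
        self._daily_counts = None         # lazily built aggregate

    def daily_call_volume(self):
        if self._daily_counts is None:    # compute once, serve from cache after
            counts = defaultdict(int)
            for day, _duration in self.call_records:
                counts[day] += 1
            self._daily_counts = dict(counts)
        return self._daily_counts

records = [("2013-01-01", 320), ("2013-01-01", 95), ("2013-01-02", 610)]
print(CallReportCache(records).daily_call_volume())
# {'2013-01-01': 2, '2013-01-02': 1}
```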
After the project had been completed, both the students and the customer deemed that the product had been delivered according to the customer’s requirements. Although some of the foundational requirements changed during the project due to the discovery of new technological solutions, the customer indicated satisfaction with the end product. During the project, communication between the customer and the team was frequent and flexible.
The first project constituted a first attempt at conducting continuous experimentation. The goal of the experiment was to gain information about the performance of the system architecture and its initial implementation. The experiment arose from operator needs to monitor call volumes and system load – a requirement that Tellybean’s product developers deemed necessary in order to partner with operators. It was clear that a set of needs arose from operator requirements, but it was not clear how the information should be presented and what functionality was needed to analyse it. From a research perspective, however, the exact details of the experiment were less important than the overall process of starting experimentation.
3.1.4. Project 2
The second project executed at Software Factory aimed at performing a system-wide stress test for the company’s video calling service infrastructure. The Software Factory team of four Master’s-level students produced a test tool for simulating very high call volumes. The tool was used to run several tests against Tellybean’s existing call mediator server.
The test software suite included a tool for simulating video call traffic, implemented in the Python programming language. A browser-based visual reporting interface was also implemented to aid the analysis of test results. The reporting component was created using existing JavaScript frameworks such as Highcharts.js and Underscore.js. Test data was stored in a MongoDB database for use in the analysis.
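The actual tool is not public, but the following condensed Python sketch conveys the idea of such a traffic simulator; the simulated request, field names, and summary statistics are our own assumptions, and the MongoDB write is only indicated in a comment to keep the example self-contained.

```python
# Condensed sketch of a call-traffic load generator (illustrative only).
# A pool of workers fires simulated call-setup requests and records
# per-request latency for later analysis.

import time
from concurrent.futures import ThreadPoolExecutor

def simulated_call(call_id):
    start = time.perf_counter()
    # A real tool would send a call-setup request to the mediator server
    # here; a fixed delay stands in for the network round trip.
    time.sleep(0.01)
    return {"call_id": call_id, "latency_s": time.perf_counter() - start}

def run_load_test(n_calls=1000, concurrency=50):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(simulated_call, range(n_calls)))
    # The project stored such result documents in MongoDB for the reporting
    # front end (e.g. with pymongo's insert_many); omitted here.
    latencies = sorted(r["latency_s"] for r in results)
    return {"calls": n_calls,
            "median_s": latencies[len(latencies) // 2],
            "p95_s": latencies[int(len(latencies) * 0.95)]}

print(run_load_test(n_calls=200, concurrency=20))
```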
The experiment served as a counterpart to the experiment in the first project. Whereas the first project had focused on operator needs, the second focused on their implications for developers. The initial system architecture and many of the technical decisions had been called into question. The project aimed to provide evidence for decision-making when revisiting these initial choices.
The team found significant performance bottlenecks in Tellybean’s existing proof-of-concept system and analysed their origins. Solutions for increasing operational capacity of the current live system were proposed and some of them were also implemented. Towards the end of the project, the customer suggested that a new proof-of-concept call mediating server should be proposed by the Software Factory team. The team delivered several suggestions for a new service architecture and composed a new call mediator server. For the purposes of this study, we consider the second experiment to be another round in the continuous experimentation cycle where findings from the first cycle resulted in a new set of questions to experiment on.
3.1.5. Project 3
For their third project at Software Factory, Tellybean aimed to create a centralised infrastructure for updating their video calling product’s software components. The new remote software management system would allow the company to quickly deploy software updates to already delivered STBs. The functionality was business-critical to the company and its channel partners: it allowed the software to be updated without travelling to each customer’s location. The new instrument enabled the company to establish full control of their own software and hardware assets.
The project consisted of a team of five Master’s-level computer science students. The team delivered a working prototype for rapid deployment of software updates. In this project, the need for a support system to deliver new features or software variations was addressed. We considered the architectural requirements for a continuous delivery system that would support continuous experimentation.
3.1.6. Case Company 2
Memory Trails Ltd. (Memory Trails) is a small Finnish startup that develops a well-being service which helps users define, track, and receive assistance with life goals. During May–July 2014, the company was a customer in a Software Factory project that aimed to develop a backend recommendation engine for the service, improve the front-end user experience, and to validate central assumptions in the service strategy. Memory Trails aims at delivering the service as an HTML5-based application which is optimised for tablets but also works on other devices with an HTML5-compatible browser. The service targets adults who wish to improve their quality of life and change patterns of behaviour to reach different kinds of life goals.
Whereas the projects with the first case company focused mostly on establishing a continuous experimentation process and building capabilities to deliver software variations for experimentation, the project with the second case company focused on some of the details of deriving experiments themselves. In particular, we sought to uncover how assumptions can be identified in initial product or service ideas. These assumptions are candidates for experiments of different kinds.
3.1.7. Project 4
Memory Trails provided an initial user interface and backend system prototype which demonstrated the general characteristics of the application from a user perspective. Users interact with photos which can be placed in different spatial patterns to depict emotional aspects of their goals. Users are guided by the application to arrange the photos as a map, showing the goal, potential steps towards it, and aspects that qualify the goals. For example, a life goal may be to travel around the world. Related photos could depict places to visit, moods to be experienced, items necessary for travel such as tickets, etc. The photos could be arranged, e.g., as a radial pattern with the central goal in the middle, and the related aspects around it, or as a time-line with the end goal to the right and intermediate steps preceding it.
In the project, two high-level assumptions were identified. The customer assumed that automatic, artificial intelligence-based processing in the backend could be used to automatically guide users towards their goals, providing triggers, motivation, and rewards on the way. Also, the customer assumed that the motivation for continued use of the application would come from interacting with the photo map. Since the automatic processing depended on the motivation assumption, the latter became the focus of experimentation in the project. The customer used versions of the application in user tests during which observation and interviews were used to investigate whether the assumption held. For the purposes of this study, we used the project to validate the link in our model between product vision, business model and strategy, and experiment steps.
3.2. Research process
The case study analysis was performed in order to ground the continuous experimentation model in empirical observations, not to understand or describe the projects themselves, nor to assess the business viability of the case companies. Therefore, we collected information that would help us understand the prerequisites for performing continuous experimentation, the associated constraints and challenges, and the logic of integrating experiment results into the business strategy and the development process.
We used four different sources of data in our analysis: (i) participant observation, (ii) analysis of project artefacts, (iii) group analysis sessions, and (iv) individual interviews. We subsequently discuss the details of the data collection and analysis.
During the projects, we observed the challenges the companies faced in working towards a continuous experimentation system. At the end of each project, an in-depth debriefing session was conducted to gain retrospective insight into the choices made during the project and the reasoning behind them. In addition to these sources, we interviewed three company representatives from Tellybean to understand their perception of the projects and to gather data that could be matched against our model. We also conducted a joint analysis session with the project team and two representatives from Memory Trails to further match insights on the experimentation process in their project with our model.
The debriefing sessions were conducted in a workshop-like manner, with one researcher leading the sessions and the project team, customer representatives, and any other project observer present. The sessions began with a short introduction by the leader, after which the attendees were asked to list events they considered important for the project. Attendees wrote down each event on a separate sticky note and placed them on a time-line which represented the duration of the project. As event-notes were created, clarifying discussion about their meaning and location on the time-line took place. When attendees could not think of any more events, they were asked to systematically recount the progress of the project using the time-line with events as a guide.
The interviews with customer representatives were conducted either in person on the customer’s premises, online via video conferencing, or on the University of Helsinki’s premises. The interviews were semi-structured thematic interviews with a mixture of open-ended and closed questions. This interview technique allows participants to discuss issues related to a focal theme freely. Thematic interviews have the advantage of providing opportunities to discover information that researchers cannot anticipate and that would not be covered by more narrowly defined, closed questions. While they may result in the discussion straying from the focal theme, this is not a problem in practice, since the interviewer can direct the participant back to the theme, and irrelevant information can be ignored in the analysis.
A minimum of two researchers were present in the interviews to ensure that relevant information was correctly extracted. All participating researchers took notes during the interviews, and notes were compared after the interviews to ensure consistency. In the interviews, company representatives were first asked to recount their perception of their company, its goals, and its mode of operation before the three projects. Then, they were asked to consider what each project had accomplished in terms of software outcomes, learned information, and implications for the goals and mode of operation of the company. Finally, they were asked to reflect on how the company operated at the time of the interview and how they viewed the development process, especially in terms of incorporating market feedback into decision-making.
During the analysis, the project data were examined for information relevant to the research question. We categorised the pieces of evidence according to whether they related to the Continuous Experimentation process or to the infrastructure. We sought to group the observations made and the understanding gained during the projects with evidence from the retrospective sessions and interviews, so that the evidence was triangulated and thus strengthened. Such groups of triangulated evidence were then matched with our initial model, which was similar to the sequence shown in Figure 1 and included the build-measure-learn cycle for the process, and a data repository, analysis tools, and a continuous delivery system as infrastructure components. We adjusted the model and introduced new process steps and infrastructure components that supported the needs implied by the evidence. We strove for minimal models: when more than one need could be fulfilled with a single step or component, we did not introduce additional steps or components. When all the evidence had been considered, we evaluated the result as a whole and made some adjustments and simplifications based on our understanding and judgement.
4. Results
In this section, we first describe our proposed model for continuous experimentation, and then report on the insights gained from the multiple case study and how they inform the different parts of the model.
4.1. The RIGHT model for Continuous Experimentation
By continuous experimentation, we refer to a software development approach that is based on field experiments with relevant stakeholders, typically customers or users, but potentially also with other stakeholders such as investors, third-party developers, or software ecosystem partners. The model consists of repeated Build-Measure-Learn blocks, supported by an infrastructure, as shown in Figure 1. Each Build-Measure-Learn block results in learnings which are used as input for the next block. Conceptually, the model can also be thought to apply not only to software development, but also to design and development of software-intensive products and services. In some cases, experimentation using this model may require little or no development of software.
The Build-Measure-Learn blocks structure the activity of conducting experiments, and connect product vision, business strategy, and technological product development through experimentation. In other words, the requirements, design, implementation, testing, deployment, and maintenance phases of software development are integrated and aligned by empirical information gained through experimentation. The model can be considered a vehicle for incremental innovation as defined by Henderson and Clark [10], but the model itself, as well as the transition to continuous experimentation in general, can be considered radical, architectural innovations that require significant new organisational capabilities.
4.1.1. The RIGHT process model for Continuous Experimentation
Figure 2 expands the Build-Measure-Learn blocks and describes the RIGHT process model for Continuous Experimentation. A general vision of the product or service is assumed to exist. Following the Lean Startup methodology [26], this vision is fairly stable and is based on knowledge and beliefs held by the entrepreneur. The vision is connected to the business model and strategy, which is a description of how to execute the vision. The business model and strategy are more flexible than the vision, and consist of multiple assumptions regarding the actions required to bring a product or service to market that fulfils the vision and is sustainably profitable. However, each assumption has inherent uncertainties. In order to reduce the uncertainties, we propose to conduct experiments. An experiment operationalises the assumption and states a hypothesis that can be subjected to experimental testing in order to gain knowledge regarding the assumption. The highest-priority hypotheses are selected first. Once a hypothesis is formulated, two parallel activities can occur. The hypothesis can optionally be used to implement and deploy a Minimum Viable Product (MVP) or Minimum Viable Feature (MVF), which is used in the experiment and has the necessary instrumentation. Simultaneously, an experiment is designed to test the hypothesis. The experiment is then executed and data from the MVP/MVF are collected in accordance with the experimental design. The resulting data are analysed, concluding the experimental activities.
Once the experiment has been conducted and the analysis performed, the analysis results are used on the strategy level to support decision-making. Again following Lean Startup terminology, the decision can be to either “pivot” or “persevere” [26], but a third alternative is also possible: to change assumptions in the light of new information. If the experiment has given support to the hypothesis, and thus to the assumption on the strategy level, a full product or feature is developed or optimised, and deployed. The strategic decision in this case is to persevere with the chosen strategy. If, on the other hand, the hypothesis was falsified, invalidating the assumption on the strategy level, the decision is to pivot and alter the strategy by considering the implications of the assumption being false. Alternatively, the tested assumption could be changed, but not completely rejected, depending on what the experiment was designed to test and what the results were.
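As an illustration, one pass through this cycle can be encoded as follows in Python; the data structures, the example threshold, and the simple decision rule are our own expository assumptions, not part of the model’s definition.

```python
# Expository encoding of one build-measure-learn pass (assumed formalisation).
# An assumption is operationalised as a testable hypothesis, an experiment
# yields an observed value, and the outcome maps to a strategy-level decision.

from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    PERSEVERE = "persevere"        # hypothesis supported: develop the full feature
    PIVOT = "pivot"                # hypothesis falsified: alter the strategy
    REVISE_ASSUMPTION = "revise"   # inconclusive: reformulate the assumption

@dataclass
class Hypothesis:
    assumption: str           # strategy-level assumption being tested
    metric: str               # what the experiment measures
    success_threshold: float  # level at which the assumption counts as supported

def decide(h: Hypothesis, observed: float, noise_band: float = 0.05) -> Decision:
    if observed >= h.success_threshold:
        return Decision.PERSEVERE
    if observed >= h.success_threshold - noise_band:
        return Decision.REVISE_ASSUMPTION  # too close to call; sharpen and retest
    return Decision.PIVOT

h = Hypothesis(assumption="Users return because of the photo map",
               metric="weekly return rate", success_threshold=0.30)
print(decide(h, observed=0.12))  # Decision.PIVOT
```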
4.1.2. The RIGHT infrastructure architecture for Continuous Experimentation
To support conducting such experiments, an infrastructure for continuous experimentation is needed. Figure 3 sketches the RIGHT infrastructure architecture for Continuous Experimentation, with roles and associated tasks, the technical infrastructure, and information artefacts. The roles indicated here will be instantiated in different ways depending on the type of company in question. In a small company, such as a startup, a small number of persons will handle the different roles, and one person may have more than one role. In a large company, the roles are handled by multiple teams. Seven roles are defined to handle four classes of tasks. A business analyst and a product owner, or a product management team, together handle the creation and iterative updating of the strategic roadmap. In order to do so, they consult existing experiment plans, results, and learnings, which reside in a back-end system. As plans and results accumulate and are stored, they may be reused in further development of the roadmap.

Figure 2: The RIGHT process model for Continuous Experimentation.

The business analyst and product owner work with a data scientist role, which is usually a team with diverse skills, to communicate the assumptions of the roadmap and map the areas of uncertainty that need to be tested.
The data scientist designs, executes, and analyses experiments. A variety of tools are used for this purpose, which access raw data in the back-end system. Conceptually, raw data and experiment plans are retrieved, analysis performed, and results produced in the form of learnings, which are stored back into the back-end system.
The data scientist also communicates with a developer and a quality assurance role. These roles handle the development of MVPs, MVFs, and the final product. They first work with the data scientist to build the proper instrumentation into the front-end system, which is the part of the software that is delivered or visible to the user. In the case of a persevere decision, they work to fully develop or optimise the feature and submit it for deployment into production. MVPs, MVFs, and final products are deployed to users after first going through the continuous integration and continuous delivery systems. A DevOps engineer acts as the mediator between the development team and operations, and a release engineer may oversee and manage the releases currently in production.
Importantly, the continuous delivery system provides information on software roll-out status, allowing other roles to monitor the experiment execution and, e.g., gain an understanding of the conditions under which the software was deployed to users and of the sample characteristics and response rate of the experiment. Cross-cutting concerns such as user experience may require additional roles working with several of the roles mentioned here. To simplify the figure, we have omitted the various roles that relate to operations, such as site reliability engineer. We have also omitted a full elaboration of which information artefacts should be visible to which roles. In general, we assume that it is beneficial to make the state of the continuous experimentation system visible to all roles.
The back-end system consists of an experiment database which, conceptually, stores raw data collected from the software instrumentation, experiment plans – which include programmatic features of sample selection and other logic needed to conduct the experiment – and experiment results. The back-end system and the database are accessible through an API. Here, these parts should be understood as conceptual; an actual system likely consists of multiple APIs, databases, servers, etc. The experiment database enables a product architecture where deployed software is configured for experiments at run-time. Thus it is not always required that a new version of the software or the accompanying instrumentation is shipped to users prior to an experiment; the experimental capability can be built into the shipped software as a configurable variation scheme. The shipped software fetches configuration parameters for new experiments, reconfigures itself, and sends back the resulting measurement data, eliminating the need to perform the Develop Product and Deploy Product tasks. For larger changes, a new software version may be required, and the full set of tasks performed.
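A minimal Python sketch of the client side of such a scheme is shown below; the endpoint, payload fields, and configuration format are hypothetical, since the mechanism is described here only conceptually.

```python
# Minimal client-side sketch of run-time experiment reconfiguration
# (hypothetical endpoint and payload formats). The shipped software fetches
# experiment parameters, applies them to its variation points, and reports
# measurements back for storage in the experiment database.

import json
import urllib.request

API = "https://experiments.example.com/api"  # hypothetical back-end endpoint

def fetch_experiment_config(client_id):
    """Ask the back end which experiment and parameters this client is in."""
    with urllib.request.urlopen(f"{API}/config?client={client_id}") as resp:
        return json.load(resp)  # e.g. {"experiment": "exp-42", "parameters": {...}}

def apply_config(app_settings, config):
    """Variation points are plain settings, so no redeployment is needed."""
    app_settings.update(config.get("parameters", {}))
    return app_settings

def report_measurement(client_id, experiment, event):
    """Send one measurement document back to the experiment database."""
    payload = json.dumps({"client": client_id, "experiment": experiment,
                          "event": event}).encode()
    req = urllib.request.Request(f"{API}/measurements", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```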
4.2. Model instantiations and lessons learned
In this subsection, we describe how the RIGHT models were instantiated in the four projects, and we describe the lessons learned, including illustrative examples from our interview data. We note that the model was initially quite simple, similar to the sequence described in Figure 1, with a build-measure-learn cycle, a data repository, analysis tools, and a continuous delivery system. We also note that not all parts of the models were instantiated in all projects; we assume that this will be the case in other projects as well. In the first two projects, we focused on problem validation: developing an understanding of the needs in real situations that a model for continuous experimentation should address. In the two latter projects, we already had most of the model in place and focused more on validating our solution, using detailed findings from the projects to adjust the model.
Each of the four case projects relates to a different aspect of continuous experimentation. The case findings support the need for systematic integration of all levels of software product and service development, especially in the context of rapid new product and service development. The key issue is to develop a product that customers will buy, given tight financial constraints. Startup companies operate in volatile markets and under high uncertainty. They may have to make several quick changes as they receive feedback from the market. The challenge is to reach product-market fit before running out of money.
“You have to be flexible because of money, time and technology constraints. The biggest question for us has been how to best use resources we have to achieve our vision. In a startup, you are time-constrained because you have a very limited amount of money. So you need to use that time and money very carefully.” (Tellybean founder)
When making changes in the direction of the company, it is necessary to base decisions on sound evidence rather than guesswork. However, we found that it is typically not the product or service vision that needs to change. The change should rather concern the strategy by which the vision is implemented, including the features that should be implemented, their design, and the technological platform on which the implementation is based. For example, although Tellybean has had to adapt several times, the main vision of the company has not changed.
“The vision has stayed the same: lifelike video calling on your TV. It is very simple; everyone in the company knows it. The TV part doesn’t change, but the business environment is changing. The technology – the hardware and software – is changing all the time.” (Tellybean founder)
“We had to pivot when it comes to technology and prioritising features. But the main offering is still the same: it’s the new home phone and it connects to your TV. That hasn’t changed. I see the pivots more like springboards to the next level. For example, we made a tablet version to [gain a distributor partner].” (Tellybean CTO)
Also, although an experiment design may seem self-evident when viewed in hindsight, developing one based on the information available in actual software projects – especially in new product or service development – is not an easy task. There are multiple possibilities for what to experiment on, and it is not obvious how to choose the first experiment or each subsequent experiment. Our case projects showed that initiating the continuous experimentation process is a significant task in its own right and involves much learning. This strengthens the notion that a basic and uncomplicated model is needed to guide the process in the right direction.
4.2.1. Project 1
In the first project, the new business analytics instrument allowed Tellybean to gain insights into their system’s statistics, providing the company with a means of obtaining feedback. They could gain a near-real-time view of call-related activities, yielding business-critical information for deeper analysis. The call data could be used as input for informed decisions. It also allowed learning about service quality and identifying customer call behaviour patterns. Based on the customer’s comments, such information would be crucial for decision-making regarding the scaling of the platform. Excess capacity could thus be avoided, and the system would be more profitable to operate while still maintaining a good service level for end users. The primary reason for wanting to demonstrate such capabilities was the need to satisfy operator needs. To convince operators to become channel partners, the ability to respond to fluctuations in call volumes was identified as critical. Potential investors would also be more inclined to invest in a company that could convince channel operators of the technical viability of the service.
“There were benefits in terms of learning. We were able to show things to investors and other stakeholders. We could show them examples of metric data even if it was just screenshots.” (Tellybean CTO)
The high-level goal of the first project could be considered as defining a business hypothesis to test the business model from the viewpoint of the operators. The project delivered the needed metrics as well as a tool-supported infrastructure to gather the necessary data. These results could be used to set up an experiment to test the business hypotheses.
Table 2 shows the parts of our model that were instantiated in Project 1. The project instantiated a few basic elements of the RIGHT process model. The chosen business model and strategy was to offer the video calling service through operator partnerships. In order for the strategy to be successful, the company needed to demonstrate the feasibility of the service in terms of operator needs and requirements. This demonstration was aimed at the operators themselves, but also at other stakeholders, such as investors, who assessed the business model and strategy. The hypothesis to test was not very precisely defined in the project, but can be summarised as “operators will require system performance management analysis tools in order to enter a partnership”. The experiment – which was not a controlled one, but rather conducted as part of investor and operator negotiations – used the analytics instrument developed in the project to assess whether the assumption was correct, thus instantiating an MVF and carrying out a rudimentary experiment execution and analysis. Based on this information, decisions were made to start investigating alternative architectures and product implementation strategies.
4.2.2. Project 2
In the second project, Tellybean was able to learn the limitations of the current proof-of-concept system and its architecture. An alternative call mediator server and an alternative architecture for the system were very important for the future development of the service. The lessons learned in the second project, combined with the results of the first, prompted them to pivot heavily regarding the technology, architectural solutions, and development methodology.
“The Software Factory project […] put us on the path of ‘Lego software development’, building software out of off-the-shelf, pluggable components. It got us thinking about what else we should be doing differently. […] We were thinking about making our own hardware. We had a lot of risk and high expenses. Now we have moved to existing available hardware. Instead of a client application approach, we are using a web-based platform. This expands the possible reach of our offering. We are also looking at other platforms. For example, Samsung just released a new SDK for Smart TVs.” (Tellybean founder)
Table 2: Model instantiations in Project 1.
<table>
<thead>
<tr>
<th>Process model instantiation</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Vision</td>
<td>Video calling in the home</td>
</tr>
<tr>
<td>Business model and strategy</td>
<td>Offer video calling through operator partnerships (+ assumptions about architecture and product implementation strategies)</td>
</tr>
<tr>
<td>Hypotheses</td>
<td>“Operators will require performance management analysis tools in order to enter a partnership”</td>
</tr>
<tr>
<td>Design, execute, analyse</td>
<td>Rudimentary</td>
</tr>
<tr>
<td>MVF</td>
<td>Analytics instrument</td>
</tr>
<tr>
<td>Decision making</td>
<td>Start architectural pivot (continued in Project 2); start product implementation strategy pivot (continued in Project 2); validate further assumptions (regarding architecture and product implementation)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Infrastructure model instantiation (only applicable parts)</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Roles</td>
<td>Business analyst, product owner (played by company leadership), software developer (played by Software Factory students)</td>
</tr>
<tr>
<td>Technical infrastructure</td>
<td>Analytics tools (MVF developed in project)</td>
</tr>
<tr>
<td>Information artefacts</td>
<td>Learnings (not formally documented in project)</td>
</tr>
</tbody>
</table>
“Choosing the right Android-based technology platform has really sped things up a lot. We initially tried to do the whole technology stack from hardware to application. The trick is to find your segment in the technology stack, work there, and source the rest from outside. We have explored several Android-based options, some of which were way too expensive. Now we have started to find ways of doing things that give us the least amount of problems. But one really important thing is that a year ago, there were no Android devices like this. Now there are devices that can do everything we need. So the situation has changed a lot.” (Tellybean CTO)
The high-level goals of the second project could be considered as defining and testing a solution hypothesis that addresses the feasibility of the proposed hardware-software solution. The project delivered an evaluation of the technical solution as well as improvement proposals. The analysis showed that the initial architecture and product implementation strategy were too resource-consuming to carry out fully. The results were used by the company to modify their strategy. Instead of implementing the hardware themselves, they opted for a strategy where they would build on top of generic hardware platforms and thus shorten time-to-market and development costs. Table 3 shows the model instantiations in Project 2.
4.2.3. Project 3
In the third project, the capability for continuous deployment was developed. The STBs could be updated remotely, allowing new features to be pushed to customers at very low cost and with little effort. The implications of this capability are that the company is able to react to changes in their technological solution space by updating operating system and application software, and to emerging customer needs by deploying new features and testing feature variants continuously.
Table 3: Model instantiations in Project 2.
<table>
<thead>
<tr>
<th>Process model instantiation</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Vision</td>
<td>Video calling in the home</td>
</tr>
<tr>
<td>Business model and strategy</td>
<td>Offer video calling through operator partnerships (+ assumptions about architecture and product implementation strategies)</td>
</tr>
<tr>
<td>Hypotheses</td>
<td>“Product should be developed as custom hardware-software codesign” and “Architecture should be based on Enterprise Java technology and be independent of TV set (which acts only as display)”</td>
</tr>
<tr>
<td>Design, execute, analyse</td>
<td>Prototype implementation; evaluate current solution proposal</td>
</tr>
<tr>
<td>MVF</td>
<td>Alternative call mediator server; alternative system architecture</td>
</tr>
<tr>
<td>Decision making</td>
<td>Architectural pivot (Android-based COTS hardware and OS); product implementation strategy pivot (do not develop custom hardware)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Infrastructure model instantiation (only applicable parts)</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Roles</td>
<td>Business analyst, product owner (played by company leadership), software developer (played by Software Factory students)</td>
</tr>
<tr>
<td>Technical infrastructure</td>
<td>Analytics tools (from previous project)</td>
</tr>
<tr>
<td>Information artefacts</td>
<td>Learnings (not formally documented in project)</td>
</tr>
</tbody>
</table>
The high-level goal of the third project can be considered to be developing a capability for automating the continuous deployment process. The prerequisite for this is a steady and controlled pace of development in which the focus is on managing the number of work items that are open concurrently, in order to limit complexity. At Tellybean, this is known as the concept of one-piece flow.
“The one-piece flow means productisation. In development, it means you finish one thing before moving on to the next. It’s a bit of a luxury in development, but since we have a small team, it’s possible. On the business side, the most important thing has been to use visual aids for business development and for prioritising. In the future we might try to manage multiple-piece flows.” (Tellybean founder)
The third project instantiated parts of our infrastructure architecture model, as shown in Table 4. In particular, it focused on the role of a continuous delivery system in relation to the tasks that need to be carried out for continuous experimentation, meaning that the top and rightmost parts of Figure 3 were instantiated, as detailed in the table.
4.2.4. Project 4
In the fourth project, it was initially difficult to identify what the customers considered to be the main assumptions. However, once the main assumptions became clear, it was possible to focus on validating them. This highlights the finding that although it is straightforward in theory to assume that hypotheses should be derived from the business model and strategy, it may not be straightforward in practice. In new product and service development, the business model and strategy are not finished, and, especially in the early cycles of experimentation, it may be necessary to try several alternatives and spend effort on modelling assumptions until a good set of hypotheses is obtained. We therefore found it useful to separate the identification and prioritisation of hypotheses on the strategy level from the detailed formulation of hypotheses and experiment design on the experiment level. Table 5 shows the instantiated model parts in Project 4. We note that some of these parts were introduced into the model because of our findings from Project 4.
Table 4: Model instantiations in Project 3.
<table>
<thead>
<tr>
<th>Process model instantiation</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Vision</td>
<td>Video calling in the home</td>
</tr>
<tr>
<td>Business model and strategy</td>
<td>Offer video calling through operator partnerships (+ assumptions about architecture and product implementation strategies)</td>
</tr>
<tr>
<td>Hypotheses</td>
<td>“Capability for automatic continuous deployment is needed for incremental product development and delivery”</td>
</tr>
<tr>
<td>Design, execute, analyse</td>
<td>Project focused on instantiating parts of infrastructure architecture model and did not include a product experiment</td>
</tr>
<tr>
<td>MVF</td>
<td>Prototype for rapid deployment of software updates</td>
</tr>
<tr>
<td>Decision making</td>
<td>Persevere</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Infrastructure model instantiation (only applicable parts)</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Roles</td>
<td>Business analyst, product owner (played by company leadership), software developer (played by Software Factory students), DevOps engineer, release engineer (played by company CTO and other technical representatives; also represented by user stories with tasks for these roles)</td>
</tr>
<tr>
<td>Technical infrastructure</td>
<td>Continuous integration system, continuous delivery system (MVF developed in project)</td>
</tr>
<tr>
<td>Information artefacts</td>
<td>Roll-out status</td>
</tr>
</tbody>
</table>
In this project, there were two assumptions: that interaction with the photo map would retain users, and that an automated process of guiding users towards goals was feasible. The assumption that continued use of the application would come from interacting with the photo map was shown to be incorrect. Users would initially create the map, but would not spend much time interacting with it – by, e.g., adding or changing photos, rearranging the map, or adding photo annotations. Instead, users reported a desire to connect with other users to share maps and discuss life goals. They also expressed a willingness to connect with professional or semi-professional coaches to get help with implementing their life goals. The social aspect of the service had been overlooked; whether this was due to familiarity with existing social media applications was left uninvestigated. In any case, the assumption was invalidated, and as a result, the assumption regarding automated features for guiding users towards goals was also invalidated. The investigation indicated that users were motivated by the potential for interaction with other users, and that these interactions should include the process of motivating them to reach goals. It is important to note that both hypotheses could be invalidated by a single experiment because they were dependent. Identifying and prioritising hypotheses separately from the detailed formulation of hypotheses and experiment design makes it possible to choose the order of experiments in a way that gains the maximum amount of information from the minimum number of experiments. Testing the most fundamental assumptions first – the ones on which most other assumptions rely – opens the possibility of eliminating other assumptions with no additional effort.
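This ordering idea can be illustrated with a small Python sketch; the hypothesis names mirror Project 4, but the dependency graph and elimination routine are our own illustration, not a tool used in the case projects.

```python
# Illustrative sketch: order hypotheses so that foundational assumptions are
# tested first, and propagate a negative result to everything depending on it.

from graphlib import TopologicalSorter  # Python 3.9+

# Edges point from a hypothesis to the assumptions it relies on.
depends_on = {
    "automated goal guidance is feasible": {"photo map motivates continued use"},
    "photo map motivates continued use": set(),
}

# Foundational assumptions come first in the testing order.
print(list(TopologicalSorter(depends_on).static_order()))

def eliminate(falsified, graph):
    """Return every hypothesis transitively depending on a falsified one."""
    dead, changed = {falsified}, True
    while changed:
        changed = False
        for hyp, deps in graph.items():
            if hyp not in dead and deps & dead:
                dead.add(hyp)
                changed = True
    return dead

print(eliminate("photo map motivates continued use", depends_on))
# both hypotheses are eliminated by the single experiment
```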
The fourth project also revealed challenges involved with instrumenting the application for data collection. It was difficult to separate the process of continuous experimentation from the technical prerequisites for instrumentation. In many cases, substantial investments into technical infrastructure may be needed before experiments can be carried out. These findings led to the roles, the high-level description of the technical infrastructure, and the information artefacts in the infrastructure architecture (see Figure 3).
Table 5: Model instantiations in Project 4.
<table>
<thead>
<tr>
<th>Process model instantiation</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Vision</td>
<td>A well-being service that helps users define, track, and receive assistance with life goals</td>
</tr>
<tr>
<td>Business model and strategy</td>
<td>Offer the service as an HTML5-based application (+ assumptions about user motivation and automated guidance)</td>
</tr>
<tr>
<td>Hypotheses</td>
<td>“Continued use of the application is motivated by interacting with the photo map”</td>
</tr>
<tr>
<td>Design, execute, analyse</td>
<td>User tests with observation and interviews</td>
</tr>
<tr>
<td>MVF</td>
<td>HTML5-based, tablet-optimised application</td>
</tr>
<tr>
<td>Decision making</td>
<td>Product implementation strategy pivot (focus on social interaction rather than automated recommendations)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Infrastructure model instantiation (only applicable parts)</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Roles</td>
<td>Business analyst, product owner (played by company leadership), software developer (played by Software Factory students)</td>
</tr>
<tr>
<td>Technical infrastructure</td>
<td>Instrumentation, front-end system</td>
</tr>
<tr>
<td>Information artefacts</td>
<td>Learnings</td>
</tr>
</tbody>
</table>
However, many experiments are also possible without advanced instrumentation. The fourth project indicates that experiments may typically be large, or target high-level questions, at the beginning of the product or service development cycle. They may address questions and assumptions that are central to the whole product or service concept. Later stages of experimentation may address more detailed aspects and may be considered optimisation of an existing product or service.
5. Discussion
The continuous experimentation model developed in the previous section can be seen as a general description. Many variations are possible. For instance, experiments may be deployed to selected customers in a special test environment, and several experiments may be run in parallel. A special test environment may be needed particularly in business-to-business markets, where the implications of feature changes are broad and there may be reluctance towards having new features at all. The length of the test cycle may thus have to be longer in business-to-business markets. Direct deployment could be more suitable for consumer markets, but we note that the attitude towards continuous experimentation is likely to change as both business and consumer customers become accustomed to it.
Each project could have instantiated the RIGHT models in different ways. In the first project, the experiment could have been carried out using mockup screens to validate what metric data, visualisation, and analysis tools would have been sufficient to convince the stakeholders. However, this would have been detrimental since it would not have revealed the shortcomings in the initial architecture and implementation strategy. Although the design of the experiment left much to be desired, carrying it out using a real, programmed prototype system made it possible to discover the need to reconsider some of the previous strategy choices.
In the second project, the learnings could have been better used to define a more precise set of hypotheses after a careful analysis of the shortcomings of the previous system architecture. However, this was not necessary, since the purpose was not a point-by-point comparison but rather an either-or comparison between one general approach and another. This highlights an important notion regarding continuous experimentation: it only seeks to produce enough information for a decision to be made correctly.
In the third project, only the capability for continuous delivery was instantiated. The project could also have addressed the components that are necessary to carry out actual experiments. Due to project time constraints, this was left uninvestigated in the third project, but was considered in the fourth project instead. In that project, one cycle of the full RIGHT process model was carried out, and the software was instrumented for experimentation although using ready-made services such as Google Analytics.
While our ultimate aim is for our models to cover the entire breadth of continuous experimentation, we assume that not all real-life projects will need to instantiate all parts. For instance, experiments can be conducted without an MVP, especially in an early stage of product development. It may also not be necessary in all cases to have a heavy infrastructure for the experimentation – this becomes relevant if experimentation is conducted in very large volumes or when the purpose is to maintain a set of experiments that are run continuously to collect trend information while the product is incrementally changed.
In addition to the project-specific observations, we consider some more general concerns. Having several experiments run in parallel presents a particular challenge. The difficulty of interpreting online experiments has been convincingly demonstrated by Kohavi et al. [16]. Statistical interactions between experiments should be considered in order to assess the trustworthiness of the experiments. For this reason, it is important to coordinate the design and execution of experiments so that correct inferences are drawn. More generally, the issue of validity becomes important when the entire R&D organisation is experiment-driven. Incorrectly designed or implemented experiments may lead to critical errors in decision-making. Threats to validity can also stem from a failure to consider ethical aspects of experiments. Not only may unethical experiments damage company reputation, but they may cause respondents to knowingly or unconsciously bias the experimental results, leading to errors in decision-making.
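As a concrete example of the statistical machinery involved, the following Python sketch performs a standard two-proportion z-test for a single controlled experiment; the traffic numbers are invented, and parallel overlapping experiments would additionally require interaction checks and multiple-comparison corrections that this sketch does not perform.

```python
# Standard two-proportion z-test for one controlled experiment (textbook
# statistics, shown for illustration; invented traffic numbers).

import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control converts 480/10000 users; treatment converts 560/10000.
z, p = two_proportion_z(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: significant at the 5% level
```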
Other challenges include the difficulty of prioritising where to start, i.e. which assumption should be tested first. In Project 4, we identified a dependency between the assumptions regarding the backend recommendation logic and the assumption about what motivates users to keep using the application. By invalidating the latter, we automatically invalidated the former. This highlights the importance of identifying critical assumptions, as testing them first may save several unneeded experiments. We see a need for further research in this area. Also, in hardware-software co-design, illustrated by the first three projects, setting up the experimental cycle quickly is a major challenge, due both to the longer release cycle of hardware and to potential synchronisation problems between hardware and software development schedules. Based on the findings presented in this paper, it may be beneficial to test a few strategic technical assumptions first, such as the viability of a certain hardware-software platform. As our case demonstrates, choosing the correct platform early can have a significant impact on the ability to proceed to actual service development.
A further set of challenges has to do with modelling sales and supplier networks. Essentially all companies are dependent on a network of suppliers and sales channels. It may be necessary to extend the model presented here to take into account the capabilities particularly of hardware suppliers to supply the needed components in a timely fashion and with the needed flexibility to programmatically vary behavioural parameters in these components. Also, when the company is not selling its products directly to end users, several levels of intermediaries may interfere with the possibilities to collect data directly from field use. If a sales partner cannot grant access to end users, other means of reaching the audience are needed. We envision using early-access and beta-test programs for this purpose, a practice that is commonly used in the computer gaming industry. Other models are possible, and there is an opening for further research in this area.
In some cases, an experimental approach may not be suitable at all. For example, certain kinds of life-critical software, or software that is used in environments where experimentation is prohibitively expensive, may preclude the use of experiments as a method of validation. However, it is not clear how to determine the suitability of an experimental approach in specific situations, and research on this topic could yield valuable guidelines on when to apply the model presented here.
Another question is whether continuous delivery is a strictly necessary precondition for continuous experimentation. In the beginning of the product development cycle, experimentation must occur before much software is written at all. At that stage, continuous delivery may not be necessary. Also, not all experiments require new software to be delivered to users. While a continuous delivery system may exist, the software itself may be architected for variability so that it can reconfigure itself at run-time. In such cases, no new version of the software needs to be delivered for new experiments to run. However, not all experiments are possible even with a very flexible architecture that allows for run-time reconfiguration. Continuous delivery is a good vehicle both for delivering experiments to users and for ensuring quality in the development process. The model presented here is based on iterative, evolutionary optimisation of product features and an incremental model of innovation. To carry out revolutionary innovation, the process needs to be extended with other means of discovering customer value. These may profoundly invalidate the business model or strategy, and may even have an impact on the overall vision.
Finally, experimentation may be conducted with several kinds of stakeholders. Apart from customers and end users, experiments could be directed towards investors, suppliers, sales channels, or distributors. Companies whose product is itself a development platform may want to conduct experiments with developers in their platform ecosystem to optimise the developer experience [9] of their tools, methods, and processes. These experiments may require other kinds of experimental artefacts than the MVP/MVF, including, e.g., processes, APIs, and documentation. Research on the types of experimental artefacts and associated experimental designs could lead to fruitful results for such application areas. Also, an open question is who should primarily lead or conduct the experimentation, especially when the development organisation is separate from the customer organisation. Some training may be needed for customers in order to ensure that they can interact with the continuous experimentation process running in the development organisation. Similarly, the development team may need additional training to be able to interact with the customer to derive assumptions, plan experiments, and report results for subsequent decision-making. Another possibility is to introduce a mediating role which connects the customer and development organisations. More generally, increasing the capability to perform experimentation and continuous software engineering requires consideration of human factors in software development teams [23].
Further research is needed to determine how the experimental process works across organisational borders, whether within or outside a single company.
A particular limitation of this study is the use of relatively short projects with student participants. Students carried out the technical software development and analysis tasks in the projects, while the researchers handled tasks related to identification of assumptions, generation of hypotheses, and higher-level planning tasks together with customer representatives. While it is reasonable to expect that professional software developers would have reached a different level of quality and rigour in the technical tasks, we consider it likely that the findings are applicable beyond student projects since the focus of this paper is not on the technical implementation but on the integration of experiment results in the product development cycle and the software development process. The length of the projects means that at most one experimental cycle could be carried out in a single project. Thus the first case company completed three, and the second case company one experimental cycle. In a real setting, multiple experimentation rounds would be carried out over an extended period of time, proceeding from experiments addressing the most important assumptions with the highest impact towards increasing detail and optimisation. The findings of this study should be considered to apply mostly in the early stages of experimentation.
6. Conclusions
Companies are increasingly transitioning their traditional research and product development functions towards continuous experiment systems [12]. Integrating field experiments with product development on business and technical levels is an emerging challenge. There are reports of many companies successfully conducting online experiments, but there is a lack of a systematic framework model for describing how such experiments should be carried out and used systematically in product development. Empirical studies on the topic of continuous experimentation in software product development are fruitful ground for further research. Software companies would benefit from clear guidelines on when and how to apply continuous experimentation in the design and development of software-intensive products and services.
In this paper, we match a model for Continuous Experimentation based on analysis of previous research against a multiple case study in the Software Factory laboratory at the University of Helsinki. The model describes the experimentation process, in which assumptions for product and business development are derived from the business strategy, systematically tested, and the results used to inform further development of the strategy and product. The infrastructure architecture for supporting the model takes into account the roles, tasks, technical infrastructure, and information artefacts needed to run large-scale continuous experiments.
A system for continuous experimentation requires the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. There are several critical success factors for such a system. The organisation must be able to properly and rapidly design experiments, perform advanced instrumentation of software to collect, analyse, and store relevant data, and integrate experiment results in both the product development cycle and the software development process. Feedback loops must exist through which relevant information is fed back from experiments into several parts of the organisation. A proper understanding of what to test and why must exist, and the organisation needs a workforce with the ability to collect and analyse qualitative and quantitative data. Also, it is crucial that the organisation has the ability to properly define decision criteria and act on data-driven decisions.
In future work, we expect the model to be expanded as more use cases arise in the field. Domain-specific variants of the model may also be needed. Furthermore, there are many particular questions with regard to the individual parts of the model. Some specific areas include (i) how to prioritise assumptions and select which assumptions to test first; (ii) how to assess validity and determine how far experimental results can be trusted – especially how to ensure that experiments are trustworthy when running potentially thousands of them in parallel; (iii) how to select proper experimental methods for different levels of product or service maturity; and (iv) how to build a back-end system for continuous experimentation that can scale to the needs of very large deployments, and can facilitate and even partially automate the creation of experimental plans. Particular questions regarding automation include which parts of the model could be automated or supported through automation. Another question is how quickly a Build-Measure-Learn block can be executed, and what the performance impact of the model is on the software development process.
**Acknowledgements**
This work was supported by Tekes – the Finnish Funding Agency for Technology and Innovation, as part of the N4S Program of DIGILE (Finnish Strategic Centre for Science, Technology and Innovation in the field of ICT and digital business).
**References**
PATTERN-LEVEL PROGRAMMING WITH ASTEROID
Lutz Hamel
Department of Computer Science and Statistics
University of Rhode Island
Kingston, RI 02881
USA
lutzhamel@uri.edu
ABSTRACT
John Backus identified value-level (object-level) programming languages as programming languages that combine various values to form other values until the final result values are obtained. Virtually all our classic programming languages today, including C, C++, and Java, belong to this category. Here we identify pattern-level (term-level) programming languages that combine various patterns to form other patterns until the final result patterns are obtained. New patterns are constructed from existing ones by the application of pattern-to-pattern functions exploiting pattern matching and constructors. First-order logic programming languages such as Prolog, OBJ, and Maude belong to this category. Our insight that pattern-level and value-level programming gives rise to a pattern-value duality is used as the foundation of the design of a new programming language called Asteroid. Hallmarks of this new programming language design are the developer's ability to explicitly control the interpretation or model of expression terms and the notion of 'patterns as first class citizens'. In addition to a complete implementation of pattern-level programming, Asteroid also supports an object-oriented style of programming based on prototypes that is itself subject to pattern matching.
KEYWORDS
pattern matching, semantics, programming language design
1. INTRODUCTION
Pattern matching is a very powerful and useful device in programming [1].
Abstractly, pattern matching can be defined as:
*Pattern matching is the act of checking a given sequence of tokens or structure (the subject) for the presence of the constituents of some pattern.*
– Wikipedia
In the context of this definition a pattern does three things [2]:
1. Decide whether a given subject has a certain structure;
2. Extract zero or more pieces;
3. Bind those pieces to variables in a certain context.

Listing 1: Basic pattern matching in Asteroid.
```plaintext
function postfix
    with (op, cl, cr) do       -- match binary node
        return (postfix(cl), postfix(cr), op)
    orwith (op, c) do          -- match unary node
        return (postfix(c), op)
    orwith (v,) do             -- match leaf
        return (v,)
end function
```
The Asteroid code in Listing 1 is an example of pattern matching on function arguments: if a given pattern appearing in a (or)with-clause matches the function input then the corresponding function body is executed. This particular function recursively turns a tree structure written in Lisp-like prefix notation into its corresponding postfix notation. What is implicit in this example is that we are only allowed to pattern match on constructors, that is, functions that represent a structure rather than compute a value.
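As a quick illustration, here is one way the function might be applied; the tuple encoding of the tree is our assumption, inferred from the patterns in Listing 1 (leaves are 1-tuples, unary nodes 2-tuples, binary nodes 3-tuples):

```asteroid
load "io".

-- assumed encoding of the prefix tree for (2 * 3) + 4
print (postfix(("+", ("*", (2,), (3,)), (4,)))).
-- expected result structure: (((2,), (3,), "*"), (4,), "+")
```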
Value-level programming languages are programming languages that combine various values to form other values until the final result values are obtained. In contrast, here we identify pattern-level (term-level) programming languages, which combine various patterns to form other patterns until the final result patterns are obtained. New patterns are constructed from existing ones by the application of pattern-to-pattern functions exploiting pattern matching and constructors.
Our insight that pattern-level and value-level programming gives rise to a pattern-value duality is used as the foundation of the design of a new programming language, Asteroid. Hallmarks of this new programming language design are the developer's ability to explicitly control the interpretation or model of expression terms and the notion of 'patterns as first class citizens'. In the context of the ability to manipulate the interpretation of expression terms we are able to develop an elegant semantics for pattern matching. In addition to a complete implementation of pattern-level programming, Asteroid also supports an object-oriented style of programming based on prototypes that is itself subject to pattern matching.
The remainder of the paper is organized as follows: Section 2 puts our work in the context of related work. We look at pattern-level versus value-level programming in Section 3. Our notion of pattern-value duality is defined in Section 4. An outline of the major features of the Asteroid language is given in Section 5. We make some general observations and talk about further work in Section 6. In Section 7 we make some final remarks.
2. Related Work
Pattern matching first appeared in functional programming languages such as SASL [3] and HOPE [4] in the 1970s and early 1980s as a way to make data structure analysis and decomposition more declarative. It was adopted by functional languages such as SML [5] and Haskell [6] in the 1990s for similar reasons. Today, many modern programming languages such as Python [7], Rust [8], and Swift [9] incorporate some form of pattern matching into the syntax and semantics of the language (as opposed to offering pattern matching as a module/library add-on, *e.g.* [10]). Furthermore, pattern matching has been studied in different formal computational settings such as the λ-calculus [11, 12] and first-order logic [13]. One of the most comprehensive implementations of pattern matching we are aware of is in the Thorn programming language [2].
If we look beyond functional and imperative programming languages then we find that pattern matching or unification is at the heart of logic programming languages such as Maude [14] and Prolog [15]. Pattern matching is at the core of term rewriting which is considered the operational semantics for equational logic languages like Maude. Unification in Prolog can be viewed as an extended version of pattern matching where not only the pattern is allowed to contain variables but also the subject term.
As useful and powerful as the pattern matching paradigm is, the implementation of pattern matching in most modern programming languages falls short. Here are a few examples:
- With the exception of Thorn, none of the present-day programming languages support patterns as first class citizens in the same sense that anonymous/lambda functions are now supported by virtually all modern programming languages.
- In most programming languages there is an arbitrary split between constructors that are supported in pattern matching and constructors which are not supported in pattern matching. For example, Python and Swift allow the user to pattern match on tuple and list constructors but not on constant constructors (or expression patterns in Swift terminology). To be fair, Swift does allow constant constructor patterns in a narrow context limited to its ‘switch’ statement. This arbitrary split between constructors that are supported by pattern matching and those that are not seems to violate the notion of orthogonality in programming language design [16, 17].
- Overly restrictive pattern matching semantics. Consider the following ‘let’ statement:
```
let (1, y) = (1, 2);
```
In Rust this is a syntactically correct program but fails to compile because the pattern is refutable. This is analogous to saying that 'x = y/z' is a refutable computation because the undefined value resulting from a division by zero is usually not allowed to be assigned to a variable, and that therefore the statement should not compile. No programming language implements division this way; instead we rely on exceptions being raised in such contexts. Therefore, rather than failing to compile, the 'let' example above should generate a runtime exception if the pattern match fails (see the sketch after this list). The equivalent statement in Python (no 'let' keyword and no semi-colon required) fails with a 'cannot assign to constants' error, indicating that Python treats this statement with an awkward mix of pattern matching and assignment semantics.
- Languages such as Python and Swift support object-oriented programming but do not support pattern matching on objects.
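This is exactly the behaviour Asteroid adopts for refutable 'let' patterns. A minimal sketch in Asteroid's own syntax, modelled on Listing 2 below (the particular tuple literals are our own example):

```asteroid
load "standard".
load "io".

let (1, y) = (1, 2).        -- succeeds, binds y to 2
try
    let (1, y) = (2, 2).    -- match failure throws at runtime
catch _ do
    print "pattern match failed".
end try
```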
Here we introduce Asteroid, a new experimental language that employs the insight that patterns and values are dual aspects of expression structures and thereby provides a much more integrated view of programming with patterns. This pattern-value duality is most clearly visible with constants, which in one instance can be viewed as values in an expression and in another instance as patterns during pattern matching, depending on the current interpretation of these structures.
Not only does Asteroid address the problematic areas touched upon above but it also addresses the fact that the fixed underlying interpretation of expression structures in our current generation of programming languages interferes with the full deployment of pattern matching as a programming paradigm. Consider for example the '+' operator. In virtually all modern programming languages this has a fixed, value-based meaning which can be extended via overloading but ultimately not really changed. This has consequences for pattern matching: the fixed meaning of the '+' operator is usually a function other than a constructor, and therefore operators such as '+' cannot be used in patterns, forcing the developer to forsake the most natural expression of a pattern and implement the desired pattern/structure via some sort of secondary (non-optimal) notation. In contrast, our Asteroid language avoids attaching rigid interpretations to operators such as '+', and therefore the following Asteroid 'let' statement can be interpreted as a legal pattern matching statement:
```
let 1 + 1 = 1 + 1.
```
Under Asteroid’s default model the right side of the equal sign is interpreted as a term (not a value!), the subject term, and the left side of the equal sign is interpreted as a pattern. We can paraphrase the computation by:
*Let the expression 1 + 1 on the right side be interpreted as a term in Asteroid's default model and pattern match it with the pattern 1 + 1 on the left side.*
In Asteroid's default model all expression-level symbols are term constructors that can be used to construct term expressions or can be used as patterns. However, the developer can attach a specific behavior or interpretation to individual expression symbols in order to turn expression terms into values. This is not unlike Prolog where terms have no interpretation beyond the Least Herbrand Model term model [18] but can acquire specific interpretations by mapping terms into values, e.g. using the 'is' predicate. Consider the following Prolog queries,
```
?- 1 + 1 = 1 + 1.
true

?- 2 = 1 + 1.
false

?- 2 is 1 + 1.
true

?- 1 + 1 is 1 + 1.
false
```
The first two queries demonstrate that in Prolog the '+' symbol has no meaning beyond being a term constructor and therefore the 1 + 1 term has no meaning beyond being a term structure.
The second set of queries demonstrates that the ‘is’ predicate assigns a standard algebraic interpretation to operator symbols such as ‘+’ in the right side term, evaluates that term using this interpretation, and then unifies the result value interpreted as a term with the left side term. It is entirely conceivable that one could write a new version of the ‘is’ predicate that would provide a completely different interpretation of the right side operator symbols.
The idea that a programming language can have multiple interpretations for a set of operator symbols, as in Prolog, had a fundamental impact on the design of Asteroid.
Listing 2: Pattern matching and models in Asteroid.
 1 load "io".
 2
 3 load "default".          -- load default term model
 4 let 1 + 1 = 1 + 1.
 5 try
 6     let 2 = 1 + 1.       -- throws an exception
 7 catch _ do
 8     print "pattern match failed".
 9 end try
10
11 load "standard".         -- load standard model
12 let 2 = 1 + 1.
13 try
14     let 1 + 1 = 1 + 1.   -- throws an exception
15 catch _ do
16     print "pattern match failed".
17 end try
The program in Listing 2 is Asteroid's equivalent of the above Prolog queries. As we have seen before, under the default term model (loaded on line 3) the 'let' statement on line 4 shows that the entity on the right of the equal sign is interpreted as a structure which can then be pattern matched against the pattern on the left side. The 'let' statement on line 6 throws an exception since the structure on the right cannot be pattern matched to the pattern on the left in the term model. On line 11 we load Asteroid's standard interpretation for arithmetic operators. We show on line 12 that in this standard model the expression '1 + 1' is interpreted as the value two. This value in turn is then interpreted in Asteroid's term model as the term '2' which is then pattern matched against the pattern on the left side of the assignment statement. The last 'let' statement "proves" that the result of '1 + 1' under the standard model is not a structure by throwing a 'pattern match failed' exception. Bear in mind that Asteroid is not a logic programming language; the similarities between Asteroid and Prolog end pretty much here.
By giving the developer the ability to directly manipulate the model/interpretation attached to "standard" operators in Asteroid, the confusion and limitations of patterns versus values can be brought under control; this also directly addresses the issue of expression punning raised in [2]. This ability to have a fully dynamic interpretation of its constructor symbols firmly sets Asteroid apart from Thorn and from other modern programming languages such as Python, Rust, and Swift.
3. Pattern-Level versus Value-Level Programming

John Backus identified value-level (object-level) programming languages as programming languages that combine various values to form other values until the final result values are obtained. New values are constructed from existing ones by the application of various value-to-value functions [19]. The values are objects that have a hidden internal structure that only becomes explicit during the computational steps when applying a function to a value. Virtually all our classic programming languages today, including C [20], C++ [21], and Java [22], belong to this category.
Here we identify pattern-level (term-level) programming languages that combine various patterns to form other patterns until the final result patterns are obtained. New patterns are constructed from existing ones by the application of pattern-to-pattern functions and constructors. Constructors can be viewed as a special case of pattern-to-pattern functions. Patterns (terms) have an explicit structure that can be processed directly through pattern matching during the computational steps of a program. First-order logic programming languages such as Prolog [18], OBJ [23], and Maude [14] belong to this category.
We treat patterns and terms as synonymous since in our view, when patterns are fully implemented in a programming language, any pattern can become a term and any term can become a pattern. It is clear that any term can be considered a pattern since a term has structure that can be matched against a subject term. The converse, that any pattern can be considered a term, is not so obvious because patterns can have variables. However, if the variables appearing in a pattern are bound to term structures then it is clear that a pattern can be considered a term.
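A hedged sketch of this observation, using the quoted-pattern and dereference machinery of Listing 17 below (the particular bindings are our own example):

```asteroid
load "io".

let cl = '1.                 -- bind a term to the variable used in the pattern
let pattern = '*cl + 2.      -- a pattern containing a dereferenced variable...
print (1 + 2 is *pattern).   -- ...which, once bound, denotes the term '1 + 2'
```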
Programming languages such as Python [7], Swift [9] and Rust [8] fully support the value-level programming model and some aspects of the pattern-level programming model. Most notably, the "pattern as first-class citizen" is missing from virtually all these languages. The same holds for most declarative languages such as SML [5] that use patterns extensively but lack the ability to manipulate patterns directly.
There is one exception: the scripting language Thorn [2] which of course implements value-level programming but also implements pattern-level programming in an imperative language setting.
An interesting observation is that most modern programming languages (e.g. SML, Python, Swift, Rust, Thorn, etc.) are value-level programming languages which support some degree of pattern-level programming and that Asteroid is a pattern-level programming language (by default only the term model is available in Asteroid) that also supports value-level programming (by loading the standard model – see Listing 2).
4. The Pattern-Value Duality
As the designers of Thorn recognized in their "pattern punning" comments [2], patterns and values are often only disambiguated in the context of a computation. This gives rise to our notion of the pattern-value duality:
1. An expression structure interpreted in a term model such as the Least Herbrand Model [18] or an initial algebra-like model [24] is a term or pattern.
2. An expression structure interpreted in a value-based model such as the standard mathematical interpretation for algebraic operators is a value.
As we have seen, we can use this dual view of expression structures to give an elegant semantics to the 'let' statements appearing in Listing 2. The 'let' statement on line 4 is interpreted in the default Asteroid term model: its right side is interpreted as the term '1 + 1' whose structure can be matched directly by the pattern on the left. The 'let' statement in Listing 2 on line 12 is interpreted in the standard Asteroid model. Here the right side of the 'let' statement is first evaluated to the value two in this standard model. In preparation for the pattern matching step this value, viewed as the constructor '2', is then interpreted in the Asteroid term model, and now the pattern on the left side of the 'let' can be applied to the right side for a pattern match step. Many of the other pattern matching operations available in Asteroid can be given a similar semantics based on the pattern-value duality.
A noteworthy consequence of this semantics is that the only things ever associated with variables are term structures. Consider the following Asteroid code,
```asteroid
load "standard".
let v = 1 + 1.   -- binds the term '2' to the pattern variable 'v'
```
Here the variable ‘v’ is a pattern and according to our semantics above the term ‘2’ is bound to ‘v’ during pattern matching. Since term structures are easily reinterpreted under different models there are no semantic difficulties with switching models during the execution of a program.
In Asteroid we can fully exploit pattern-level programming, making use of this duality by giving the developer explicit control over the interpretation of structures. This approach is in stark contrast to Thorn where, even though the implementation of pattern matching is fairly complete, the static interpretation of expressions such as '1 + 1' as values limits pattern-level programming in that language.
5. Asteroid the Programming Language
Asteroid [25] is an imperative style programming language under development that fully supports both value-level and pattern-level programming. It was highly influenced by the minimalistic approach to data structures and object-orientation in the programming language Lua [26]. The focus on readability and the “pythonic” view of programming in Python [7] had a major impact on the syntax of the Asteroid language [27, 28]. The programming language ML [5] had an influence on the function level pattern matching syntax in Asteroid. Many of the semantic issues around pattern matching with first class patterns worked out in Thorn [2] had a direct impact on the design of pattern matching in Asteroid. Finally, the idea of separating term structure from a more value-oriented interpretation was inspired by the Herbrand models in Prolog [18] as well as the initial term algebras in algebraic data type specification [24, 29].
In the following section we will highlight Asteroid functionality. Few if any of the features discussed here are available in languages such as Python, Swift, and Rust. Many of the pattern matching features of Asteroid, including patterns as first class citizens, are also available in Thorn [2]. However, due to the fact that Thorn has a fixed interpretation of terms, many of the model-based pattern matching operations Asteroid supports are not available in Thorn.
5.1 Manipulating the Model
We demonstrate how a developer can explicitly manipulate the interpretation of terms. A simple program that manipulates the interpretations of expressions is given in Listing 3. This program prints out the value of the term ‘4 + 3 - 2’ under three different interpretations:
1. Under the default term model (line 4);
2. Under the standard model (line 10);
3. Under the standard model with the interpretations for ‘+’ and ‘-’ swapped (line 26).
Listing 3: Swapping the interpretation of plus and minus.
```plaintext
 1 load "io".               --- load io module
 2
 3 --- print out the value using the default term model
 4 print (4+3-2).
 5
 6 --- load the standard model
 7 load "standard".
 8
 9 --- print out the value using the standard model
10 print (4+3-2).
11
12 --- save the interpretations
13 let plus_op = _plus_.
14 let minus_op = _minus_.
15
16 --- detach the current interpretations
17 detach from _plus_.
18 detach from _minus_.
19
20 --- reattach in opposite order
21 attach plus_op to _minus_.
22 attach minus_op to _plus_.
23
24 --- print the value of the term using
25 --- the modified standard model
26 print (4+3-2).
```
The Asteroid interpreter is initialized with the term model in place. The load command on line 7 loads the standard model: the model with the usual interpretations for all the standard operator symbols. It should be noted that the standard model supports overloaded symbols (e.g., '+' as addition as well as string concatenation) as well as type promotion (e.g., the expression '1 + 2.3' will evaluate to the floating point value 3.3). The code from line 12 through line 22 swaps the interpretations of the '+' and '-' operator symbols. Here the symbols '_plus_' and '_minus_' are the internal names of the corresponding operators. The program generates the following output:
```
_minus_((_plus_([4,3]),2))
5
3
```
Here the first line is the output under the term model (line 4) and shows a dump of the internal term structure of the expression ‘4 + 3 - 2’ in prefix format. The second line is the output under the standard model (line 10). Given the usual interpretation of ‘+’ and ‘-’, the expression ‘4 + 3 - 2’ evaluates to the value 5. The third line shows the output under the modified standard model with the interpretation of ‘+’ and ‘-’ swapped (line 26). In this case the expression ‘4 + 3 - 2’ evaluates to the value 3.
5.2 Basic Pattern Matching
The ability to manipulate the interpretation of expression terms allows the developer to pattern match on operator symbols usually reserved for value computations. We saw some of this already in Listing 2 where the '+' operator symbol can be used for pattern matching under the default term model. Listing 4 shows another version of this program where we take advantage of quoted expressions. Quoted expressions allow the programmer to treat expressions as constructor terms in the presence of a model other than the term model.
Listing 4: Pattern matching, models, and quoted expressions in Asteroid.
 1 load "standard".
 2 load "io".
 3 load "util".
 4
 5 let 1 + 1 = '1 + 1.       -- quoted expression
 6 let 2 = eval('1 + 1).
 7 let 2 = 1 + 1.
 8 try
 9     let 1 + 1 = 1 + 1.    -- throws an exception
10 catch _ do
11     print "pattern match failed".
12 end try
On line 5 a quoted expression lets us construct the term '1 + 1' even though the standard model is loaded, and pattern match against that structure. Quoted expressions can be interpreted in the current model using the 'eval' function as shown on line 6. The remaining program is almost identical to the code in Listing 2.

Listing 5: The Quicksort in Asteroid.
 1 load "standard".
 2 load "io".
 3
 4 function qsort
 5     with [] do
 6         return [].
 7     orwith [a] do
 8         return [a].
 9     orwith [pivot | rest] do
10         let less = [].
11         let more = [].
12
13         for e in rest do
14             if e < pivot do
15                 let less = less + [e].
16             else
17                 let more = more + [e].
18             end if
19         end for
20
21         return qsort less + [pivot] + qsort more.
22 end function
23
24 print (qsort [3, 2, 1, 0]).
As we saw in Listing 1, Asteroid supports pattern matching on function arguments in the style of ML and many other functional programming languages. Listing 5 shows the quicksort implemented in Asteroid as another example of this classic style of pattern matching. What is perhaps new is the 'head-tail' operator on line 9. Here the variable 'pivot' matches the first element of the list and the variable 'rest' matches the remaining list, which is the original list with its first element removed. On lines 15 and 17 we can see that the '+' operator symbol has been overloaded in the standard model to act as a list concatenation operator, as mentioned above. As expected, the output of this program is,
[0, 1, 2, 3]
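For illustration, the head-tail operator can also be used directly in a 'let' statement; this snippet is our own example, based on the operator as used in Listing 5:

```asteroid
let [pivot | rest] = [3, 2, 1, 0].
-- pivot is bound to 3, rest to [2, 1, 0]
```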
We can also introduce our own custom constructors and use them in pattern matching.
Listing 6: Asteroid implementation of Peano addition.
```plaintext
 1 load "io".
 2
 3 constructor S with arity 1.
 4
 5 function reduce
 6     with x + 0 do
 7         return reduce(x).
 8     orwith x + S(y) do
 9         return S(reduce(x + y)).
10     orwith term do
11         return term.
12 end function
13
14 print (reduce(S(S(0)) + S(S(S(0))))).
```
The program in Listing 6 implements Peano addition on terms (en.wikipedia.org/wiki/Peano_axioms#Addition) using the two Peano axioms,
\[
\begin{aligned}
x + 0 &= x \\
x + S(y) &= S(x + y)
\end{aligned}
\]
Here ‘x’ and ‘y’ are variables, ‘0’ represents the natural number with value zero, and ‘S’ is the successor function. In Peano arithmetic any natural number can be represented by the appropriate number of applications of the successor function to the natural number ‘0’. On line 3 our program defines the constructor ‘S’ to represent the successor function. Next, starting with line 5, it defines a function that uses pattern matching to identify the left sides of the two axioms. If either one pattern matches the input to the ‘reduce’ function it will activate the corresponding function body and rewrite the term recursively in an appropriate manner. We have one additional pattern which matches if neither one of the Peano axiom patterns matches and terminates the recursion. Finally, on line 14 we use our ‘reduce’ function to compute the Peano term for the addition of 2 + 3. As expected, the output of this program is,
```
S(S(S(S(S(0)))))
```
Observe that because we operate here only in Asteroid's default term model, the '+' operator symbol was available to us as a constructor, which allowed us to write Peano addition in a very natural style.
5.3 Pattern Matching in Control Structures
Control structure implementation in Asteroid is along the lines of modern programming languages such as Python, Swift, or Rust. For example, the 'for' loop allows you to iterate over lists without having to explicitly define a loop index counter. In this discussion we solely focus on the pattern matching aspects of control structures. We look at pattern matching in 'if' statements, 'while' and 'for' loops, and 'try-catch' statements.
Before we begin the discussion we need to introduce the 'is' predicate, a built-in operator that takes the pattern on its right side and applies it to the subject term on its left side (not to be confused with the Prolog 'is' predicate). If there is a match the predicate returns 'true'; if not, it returns 'false'. Here is a snippet that illustrates the predicate,
```
let true = 1 + 2 is x + y.
```
The subject term ‘1 + 2’ is matched to the pattern ‘x + y’ which of course will succeed with the variable bindings x → 1 and y → 2.
5.3.1 Pattern Matching in ‘if’ Statements
In Asteroid an ‘if’ statement consists of an ‘if’ clause followed by zero or more ‘elif’ clauses followed by an optional ‘else’ clause. The semantics of the ‘if’ statement is fairly standard. The ‘if’ and ‘elif’ clauses test the value of their corresponding expressions for the term ‘true’ and execute their corresponding set of statements if it does evaluate to ‘true’. If none of the expressions evaluate to ‘true’ then the ‘else’ clause is executed if present.
In order to enable pattern matching in ‘if’ statements we use the ‘is’ predicate. We can rewrite the ‘reduce’ function from Listing 6 using pattern matching in ‘if’ statements as an illustration,
```plaintext
function reduce
    with term do
        if term is x + 0 do
            return reduce(x).
        elif term is x + S(y) do
            return S(reduce(x + y)).
        else do
            return term.
        end if
end function
```
One thing to note is that the variable bindings of a successful pattern match are immediately available in the corresponding statements of the ‘if’ or ‘elif’ clause.
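A small sketch of this scoping behaviour (our own example, using the 'is' predicate introduced above):

```asteroid
if (1, 2) is (x, y) do
    print x.   -- x and y are bound here by the successful match
    print y.
end if
```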
5.3.2 Pattern Matching in ‘while’ Loops
Pattern matching in ‘while’ loops follows a similar approach to pattern matching in ‘if’ statements. The ‘while’ statement tests the evaluation of the loop expression and if it evaluates to the term ‘true’ then the loop body is executed. Again we use the ‘is’ predicate to enable pattern matching in ‘while’ loops.
Listing 7 shows a program that employs pattern matching using the head-tail operator in the ‘while’ expression in order to iterate over a list and print the list elements. Note that the ‘if’ statement on line 8 is necessary because applying the head-tail operator to an empty list throws an exception. As one would expect, the output of this program is,
```
1
2
3
```
Listing 7: Pattern matching in ‘while’ loop.
Listing 8: Pattern matching in ‘for’ loop selecting substructures.
1 load "standard".
2 load "io".
3 load "util".
4
5 constructor Person with arity 2.
6
7 let people = [
8 Person("George", 32),
9 Person("Sophie", 46),
10 Person("Oliver", 21)
11 ].
12
13 let n = length people.
14 let sum = 0.
15
16 for Person(_, age) in people do
17 let sum = sum + age.
18 end for
19
20 print("Average Age: " + (sum/n)).
5.3.3 Pattern Matching in ‘for’ Loops
Of course Asteroid supports ‘for’ loops indexed over integers,
for x in 1 to 3 do
print x.
end for
or loops that iterate over lists,
for bird in ["turkey","duck","chicken"] do
print bird.
end for
Actually, in the integer example above the loop also iterates over a list because the operator ‘1 to 3’ returns the list ‘[1,2,3]’.
In addition to these canonical examples we can expand the loop variable into a pattern and do pattern matching while we are iterating. This allows us to access substructures of the items being iterated over in a direct and succinct way. Listing 8 shows such a program. The program constructs a list of 'Person' structures that consist of a name and an age (line 7). The 'for' loop on line 16 iterates over this list while pattern matching the 'Person' constructor at each iteration, binding the 'age' variable to the appropriate value in the structure. In the loop body it keeps a running sum of the age values, which it then uses to compute the average age of the persons on the list (line 20). The output of this program is,
Average Age: 33
We can also use pattern matching on the index variable of a ‘for’ loop to select certain items from a list. Suppose we extend the ‘Person’ structure of the program in Listing 8 with an additional field capturing the sex of a person. The program in Listing 9 does just that. That additional field is then used by the ‘for’ loop on line 11 to select only male members on the list and print out their names. As expected, the output of this program is,
George
Oliver
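A minimal sketch of what Listing 9 might look like; the positional field order and the string encoding of sex are both assumptions, while the line numbers are laid out to match the reference to line 11 above:

```asteroid
 1 load "standard".
 2 load "io".
 3
 4 constructor Person with arity 3.
 5
 6 let people = [
 7     Person("George", 32, "M"),
 8     Person("Sophie", 46, "F"),
 9     Person("Oliver", 21, "M")
10 ].
11 for Person(name, _, "M") in people do
12     print name.
13 end for
```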
5.3.4 Pattern Matching in 'try-catch' Statements
Exception handling in Asteroid is very similar to exception handling in many other modern programming languages available today. Listing 10 shows an Asteroid program that performs basic exception handling. On line 5 it attempts a division by zero which will throw an exception. The exception is caught by the 'catch' clause on line 7 and its value printed on line 8. The output of the program is the value of the exception,
Exception, integer division or modulo by zero
By default, exceptions in Asteroid are pairs where the first component is an exception specifier and the second component is the value of the exception. In Asteroid we can pattern match on the structure of exceptions in the 'catch' clause. Listing 11 shows the same program from above where the 'catch' clause on line 7 has been modified to match the structure of the exception explicitly. Here we pattern match on the exception specifier and print out the value of the exception. As expected, the output of the program is,
integer division or modulo by zero
The structure of the exceptions as shown in the previous examples is by convention only, and all internally generated exceptions in Asteroid follow that convention. However, there is nothing to prevent users from creating their own exception structures and objects and pattern matching on them in 'catch' clauses. Listing 12 shows a program that throws an exception using the 'MyException' constructor on line 6. That exception structure is pattern matched in the 'catch' clause on line 7 and its value is printed on line 8. The output of this program is,
Hello There!
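A minimal sketch of the basic exception handling program described above as Listing 10; the catch variable name 'e' is an assumption, and the line numbers are laid out to match the references in the text:

```asteroid
 1 load "io".
 2 load "standard".
 3
 4 try
 5     let i = 10/0.
 6     print i.
 7 catch e do
 8     print e.
 9 end try
```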
Listing 11: Basic exception handling in Asteroid with pattern matching.
 1 load "io".
 2 load "standard".
 3
 4 try
 5     let i = 10/0.
 6     print i.
 7 catch ("Exception", v) do
 8     print v.
 9 end try
Listing 12: Exception handling in Asteroid with custom structures.
 1 load "io".
 2
 3 constructor MyException with arity 1.
 4
 5 try
 6     throw MyException("Hello There!").
 7 catch MyException(v) do
 8     print v.
 9 end try
5.4 Pattern Matching on Objects
We introduce Asteroid's objects using the dog example from the Python documentation (docs.python.org/3/tutorial/classes.html). Listing 13 shows that Python example translated into Asteroid. Asteroid's object system is prototype based. In Asteroid it is the convention that object members are given as name-value pairs; that includes function members in addition to data members. On line 8 of our example we define our prototype object with three members: two data members (lines 9 and 10) and one function member starting on line 11. Object members are accessed in a Python dictionary style syntax. What makes this truly object-oriented is the fact that when an object function is accessed in the context of a function call, as on line 21, Asteroid generates an implicit object reference as the first argument to the called function. Notice that at the call site (line 21) we only provide a single argument whereas the function definition (line 11) has two arguments, the first one capturing the object reference. The output of this program is,
Fido: [roll over, play dead]
Buddy: [roll over, sit stay]
In order to demonstrate pattern matching with objects we added a list of dogs to our program. The resulting program in Listing 14 shows this; starting with line 6 we also added code that iterates over the list of dogs and prints out the names of the dogs whose first trick is 'roll over'. The filtering of the objects on the list is done via pattern matching on the loop variable on line 6.
Pattern matching on objects is straightforward due to the fact that objects, like other structures, consist of nested constructors. This also includes function constructors. In Asteroid function constructors are purely syntactic in nature. Asteroid does not compute any function closures and therefore only supports dynamic scoping. This makes sense in an environment where patterns as first class citizens are also dynamically scoped objects. We are currently experimenting with the idea of being able to pattern match on function constructors.
Listing 13: Object-oriented programming in Asteroid.
```plaintext
 1 load "standard".
 2 load "io".
 3 load "util".
 4
 5 constructor Dog with arity 3.
 6
 7 -- assemble the prototype object
 8 let dog_proto = Dog(
 9     ("name", ""),
10     ("tricks", []),
11     ("add_trick",
12         lambda
13             with (self, new_trick) do
14                 let self@{"tricks"} =
15                     self@{"tricks"} + [new_trick])).
16
17 -- Fido the dog
18 let fido = copy dog_proto.
19 let fido@{"name"} = "Fido".
20
21 fido@{"add_trick"}("roll over").
22 fido@{"add_trick"}("play dead").
23
24 -- Buddy the dog
25 let buddy = copy dog_proto.
26 let buddy@{"name"} = "Buddy".
27
28 buddy@{"add_trick"}("roll over").
29 buddy@{"add_trick"}("sit stay").
30
31 -- print out the tricks
32 print ("Fido: " + fido@{"tricks"}).
33 print ("Buddy: " + buddy@{"tricks"}).
```
There is an elegant way of rewriting the last part of the code of the example in Listing 14, starting with line 4, using the fact that in Asteroid patterns are first class citizens. In Listing 15 we associate our pattern with the variable 'dog' on line 4. The quote at the beginning of the pattern is necessary; otherwise Asteroid would try to dereference the variable 'name' as well as the anonymous variables '_'. We use the pattern associated with 'dog' in the 'for' loop on line 9 to filter the objects on the list. The '*' operator is necessary in order to tell Asteroid to use the pattern associated with the variable 'dog' rather than the variable itself as a pattern.
5.5 Patterns as First Class Citizens
We have shown in Listing 15 that patterns can be associated with and dereferenced from variables. Listing 16 illustrates that we can also pass patterns to functions where they can be used for pattern matching. Here we define a function 'match' on line 3 that expects a subject term and a pattern. It proceeds to pattern match the subject term to the pattern using the 'is' predicate and returns whatever the predicate returns. Observe the '*' operator in front of the 'pattern' variable, stating that we want to use the pattern associated with that variable. On line 8 we call the function 'match' with the subject term '1+1' and a quoted '+' pattern. The output of this program is the term 'true'.
We can also construct patterns on-the-fly as shown in Listing 17.
Listing 14: Pattern matching and object-oriented programming in Asteroid.
```
 1 ...
 2
 3 -- print out all the names of dogs whose first trick is 'roll over'.
 4 let dogs = [fido, buddy].
 5
 6 for Dog(("name", name),
 7         ("tricks", ["roll over", _]),
 8         _) in dogs do
 9     print (name + " does roll over").
10 end for
```
Listing 15: Storing Asteroid patterns in variables.
```
 1 ...
 2
 3 let dogs = [fido, buddy].
 4 let dog = 'Dog(
 5     ("name", name),
 6     ("tricks", ["roll over", _]),
 7     _).
 8
 9 for *dog in dogs do
10     print (name + " does roll over").
11 end for
```
Here we construct two subpatterns on lines 3 and 4. These two subpatterns are used to construct the full pattern on line 5 when the pattern is evaluated during a pattern match. Finally, we check whether our pattern is assembled correctly on line 7. The output of the program is 'true', meaning our pattern has the same structure as the subject term '1+2+3' on line 7.
A couple of observations:
1. The quotes on lines 3 and 4 are not strictly necessary because we are working in the default term model.
2. The quote on line 5 is necessary because we don’t want to evaluate the dereference operators at this point.
3. From this example it is obvious that patterns with dereference operators are dynamically scoped structures. The variables ‘cl’ and ‘cr’ on line 5 will capture their closest associations when the pattern is evaluated during a pattern match as on line 7.
With Asteroid's ability to manipulate patterns we can rewrite the program implementing Peano addition from Listing 6. In the rewritten version the pertinent Peano axioms are stored as rules in a rule table which the program will access during execution. Listing 18 shows the rewritten program. Our two Peano axioms appear as rules in the rule table on lines 9 and 10. Note that each rule is written as a pair where the first component is the left side of the corresponding rule and the second component is the right side of the corresponding rule. The left sides of the rules represent the patterns that need to match the subject term and therefore it is not surprising that they are written as quoted expressions. We also need to write the right sides of the rules as quoted expressions because we want to delay their evaluation until their corresponding patterns have matched an appropriate subject term (see line 18).
Listing 16: Passing Asteroid patterns to functions.
 1 load "io".
 2
 3 function match
 4     with subject, pattern do
 5         return subject is *pattern.
 6 end function
 7
 8 print (match(1+1, '+')).
Listing 17: Assembling Asteroid patterns on-the-fly.
1 load "io".
2
3 let cl = '1 + 2.
4 let cr = '3.
5 let pattern = '*cl + *cr.
6
7 print (1+2+3 is *pattern).
The function ‘reduce’ searches through the rule table for a match to the current subject term ‘term’. If a match is found the corresponding right side of the rule is evaluated. If no match is found then the term is returned unmodified. The output of the program is of course the Peano term ‘S(S(S(S(S(0)))))’.
Observe that the variables of the right sides of the rules in the rule table do not need to be preceded by a '*' dereference operator because we are not in a pattern matching context. There is no ambiguity here on how a variable should be interpreted – it is always to be dereferenced.
This example demonstrates that Asteroid’s ability to manipulate both its model (line 5) and patterns (line 8) allows pattern-level programming (e.g. the rule table and ‘for’ loop body) to coexist seamlessly with value-level programming (e.g. the ‘for’ loop expression).
5.6 Advanced Model Manipulation
Here we look at a couple of examples involving interesting aspects of model manipulation in Asteroid. The first program, in Listing 19, shows how straightforward it is to switch between pattern- and value-level programming in Asteroid. We define a constructor ‘S’ and an increment function ‘inc’ on lines 3 and 4, respectively. We then print the value of the term ‘S(S(S(0)))’ on line 10, which appears on the output unchanged because ‘S’ is a constructor. Next, on line 11, we attach the ‘inc’ function as an interpretation to the constructor ‘S’. We then print the value of the same term ‘S(S(S(0)))’ on line 12. However, now ‘S’ has an interpretation as an increment function, so the value printed to the output is ‘3’. Next, on line 13, we detach the ‘inc’ function from the constructor and then print the same term again on line 14. Since at this point ‘S’ is again just a constructor, the output generated is ‘S(S(S(0)))’.
The example in Listing 20 shows that models do not always have to be value-oriented. Instead, we can interpret one structure with another. Observe that in this example we do not load the standard model and only work in the default term model. We define our
Listing 18: Peano addition implementation using a lookup table for the rewrite rules.
```
1 load "standard".
2 load "util".
3 load "io".
4
5 detach from __plus__.  -- '+' is a constructor
6 constructor S with arity 1.
7
8 let rule_table = [
9 ('x + 0, 'reduce(x)),
10 ('x + S(y), 'S(reduce(x + y)))
11 ].
12
13 function reduce
14 with term do
15 for i in 0 to length(rule_table) - 1 do
16 let (lhs, rhs) = rule_table[i].
17 if term is *lhs do
18 return eval(rhs).
19 end if
20 end for
21 return term.
22 end function
23
24 print (reduce('S(S(0)) + S(S(0)))).
```
by-now-familiar constructor ‘S’ on line 2 and an increment function ‘inc’ on line 3. Because we did not load the standard model, the ‘inc’ function returns a structure rather than a value (‘+’ is treated as a constructor). On line 8 we print out the interpretation of the term structure ‘S(S(S(0)))’, which under the default term model is just the structure ‘S(S(S(0)))’. Next we attach the function ‘inc’ as an interpretation to the constructor ‘S’ on line 9. On line 10 we again print out the interpretation of the term ‘S(S(S(0)))’. In this case, because ‘S’ now has an interpretation, the value is the structure
__plus__(1, __plus__(1, __plus__(1, 0)))
Here we can see that we interpreted one structure with another.
6. Remarks and Further Work
As we have seen in the previous section, there is an intricate interplay between the ability
to pattern match structures and the kind of model that is used for the structures. If
we are using a value-based model (like the Asteroid standard model) then only limited
pattern matching and construction is possible because here many of the expression-level
constructors and operators tend to represent functions that compute values and therefore
are not available for pattern matching and construction. On the other hand, if we choose
a term-based model (like the Asteroid default model) then virtually any expression-level
constructor or operator is available for pattern matching or construction. The strength
of Asteroid is that the developer has complete control over which model to deploy (or
create) and therefore has complete control over how much pattern- versus value-level
programming is available for a particular problem domain.
The problem with our current generation of “general purpose” programming languages
like Python, Swift, and Rust is that they have a fixed interpretation of their expression-
level structures, which limits pattern matching and in general inhibits the full deployment
Listing 19: Switching back and forth between pattern- and value-level programming in Asteroid.
```
1 load "standard".
2 load "io".
3 constructor S with arity 1.
4 function inc
5 with n do
6 return 1 + n.
7 end function
8
9 -- switch between pattern- and value-level programming
10 print (S(S(S(0)))).
11 attach inc to S.
12 print (S(S(S(0)))).
13 detach from S.
14 print (S(S(S(0)))).
```
Listing 20: Interpreting structure with structure.
```
1 load "io".
2 constructor S with arity 1.
3 function inc
4 with n do
5 return 1 + n.
6 end function
7
8 print (S(S(S(0)))).
9 attach inc to S.
10 print (S(S(S(0)))).
```
of pattern-level programming.
In terms of further work, semantic details such as the scope of a particular model and the scope of a particular attach/detach operation need to be further investigated.
Another issue we would like to explore is to extend models or interpretations to non-arithmetic constructors such as lists. Currently the list constructor `[ ]` has a fixed, term-based interpretation. It would be interesting to be able to attach a semantics other than the term-based model to lists.
As mentioned before, in Asteroid function constructors are purely syntactic objects and therefore it would be interesting to explore the ability to pattern match on them. The non-trivial part here is that, unless we restrict ourselves to very simple functions that only compute on expression-level structures, we might be forced to pattern match on arbitrary control-flow structures such as ‘for’ loops and ‘if’ statements.
We need a more powerful expression parser. Even though in Asteroid the models for expression-level structures are under the developer’s control, the precedence and associativity of the respective operators are fixed in the current parser. We would like to develop a parser that brings all of that under the control of the developer, in a fashion similar to ISO-compatible Prolog implementations, which use the ‘op’ predicate to extend the parser.
7. Conclusions
Here we identified pattern-level (term-level) programming languages as languages that combine various patterns to form other patterns until the final result patterns are obtained. New patterns are constructed from existing ones by the application of pattern-to-pattern functions exploiting pattern matching and constructors. Our insight is that pattern-level and value-level programming give rise to a pattern-value duality, which we used as the foundation of the design of our new programming language called Asteroid. Hallmarks of this new programming language design are the developer’s ability to explicitly control the interpretation or model of expression terms and the notion of ‘patterns as first-class citizens’. We have shown that Asteroid supports many pattern-level programming techniques not available in our current generation of programming languages such as Python, Swift, and Rust. We have also shown that Asteroid seamlessly integrates pattern- and value-level programming.
A MEDICAL DATA CLEANER
by
Jahnavi Yetukuri
A report submitted in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE
in
Computer Science
UTAH STATE UNIVERSITY
Logan, Utah
2013
ABSTRACT
A Medical Data Cleaner
by
Jahnavi Yetukuri
Utah State University, 2013
Major Professor: Dr. Stephen Clyde
Department: Computer Science
This report describes a medical-data cleaning tool, called MedDataCleaner, that can detect outliers in medical data and assist database administrators in resolving data-related problems. Specifically, MedDataCleaner enables users to define cleaning rules and offers the ability to choose classification methods that help determine if the data is good or bad. MedDataCleaner uses Vitruvian DB objects for object-relational mapping (ORM) support and Vitruvian alignment links for designing the GUI.
My contribution towards this work includes designing the user interfaces using Vitruvian alignment links and designing and implementing the mean, standard deviation, and neural classification methods using Vitruvian DB objects.
ACKNOWLEDGMENTS
I extend my deepest gratitude to my advisor Dr. Stephen Clyde for his support, guidance, and valuable suggestions. Dr. Clyde is a person whom everyone would love to work with. I am glad that I learned the basics of object-oriented design under a person who has mastered it. Dr. Clyde is and remains my best role model for a teacher and mentor. I am grateful for his insightful discussions and encouragement, which helped me resolve several design issues. Without his guidance and persistent help this dissertation would not have been possible.
I am grateful to my committee members, Dr. Curtis Dyreson and Dr. Nicholas Flann, for their interest in this project. I would like to extend my appreciation to Abhinav Nahar, a fellow colleague, for his insightful thoughts on design and his cooperation, and especially to Brian Smith (Vitruvian framework developer) for resolving Vitruvian issues quickly and for a wonderful framework like Vitruvian, which made time-consuming tasks like designing user interfaces easier.
I especially thank my family, Haranathbabu Yetukuri, Subbalakshmi Yetukuri, Karthik Yetukuri, Saikiran Panchakarla, Harisha Yetukuri and friends Udara Weerakoon, Prabhanjali and Shreeramkumar for their unconditional love, support and care. I would not have made it this far without them.
Jahnavi Yetukuri
CONTENTS
ABSTRACT
ACKNOWLEDGMENTS
CHAPTER
1 INTRODUCTION
2 SYSTEM ANALYSIS
2.1 User Goals
2.2 Structural Analysis
2.3 Functional Requirements
3 ARCHITECTURAL DESIGN
3.1 Front End Layer
3.1.1 User Interfaces
3.1.2 Background Technology
3.1.3 User Interface Design
3.1.4 My Experience with Alignment Links
3.2 Application Layer
3.3 Neural Network Toolkit Background and Design
4 UNDERLYING TECHNOLOGY AND IMPLEMENTATION DETAILS
5 SOFTWARE TESTING
6 CONCLUSION AND FUTURE WORK
LIST OF FIGURES
Figure 1: Actors in MedDataCleaner
Figure 2: Use-case diagram describing goals of QA user
Figure 3: Use-case diagram depicting the types of classification methods a user can define
Figure 4: Use-case diagram describing the goals of DB user and interactive system
Figure 5: Class diagram showing cleaning rules and other important classes
Figure 6: Class diagram depicting the classification method classes
Figure 7: Class diagram showing database related classes
Figure 8: Architectural diagram of MedDataCleaner tool
Figure 9: Six different align-link glyphs
Figure 10: Position align-link glyph
Figure 11: Maintain same size on two button controls
Figure 12: Resize box with the form
Figure 13: Eye, lock, and trash graphics
Figure 14: User interface navigation
Figure 15: Class diagram describing Database Def
Figure 16: Class diagram showing database design of cleaning rule
Figure 17: Relationship between DB, data entity and UI
CHAPTER 1
INTRODUCTION
Quality data is an important asset to modern organizations, particularly as they become more dependent on inter-organizational or multi-sourced data. Unfortunately, many organizations suffer from “dirty data,” which includes incomplete records, missing field values or inter-record inconsistencies. Cutter Consortium, an IT advisory firm, identified in a report the following common sources of dirty data [5]:
1. poor data entry, including misspellings, typos and transpositions, and variations in spelling or naming.
2. missing data from database fields.
3. lack of companywide standards in data coding.
4. mismatched syntax, formats and structure, e.g. variation in the number or type of name fields and different phone number formats.
According to the Data Warehousing Institute (DWI), the cost of bad or dirty data exceeds $600 billion annually [1].
In the medical field in particular, dirty data can cause increased costs, inefficiencies, liability risks and degraded quality of care [6]. Further, such dirty data causes significant and immediate need for a solution because it can lead to medical errors. The Institute of Medicine estimated in its report that around 44,000 to 98,000 lives are lost every year as a result of medical errors in hospitals [15]. Therefore, dirty data is motivating people all around the world to develop data-cleaning methods to handle bad data in medical health data. Data cleaning or data scrubbing is the act of detecting and
correcting (or removing) corrupt or inaccurate records from a record set, table, or
database [7]. The data-cleaning process involves classifying data, detecting missed
values, detecting and removing redundant records, checking if the data within the
databases is accurate, and identifying the data that is outside the expected range.
In this project, I developed a medical data cleaning tool, called MedDataCleaner,
using Vitruvian DB objects for object-relation mapping and Vitruvian alignment links for
organizing and constraining the user-interface layout. Vitruvian DB objects and
alignment links are discussed more in detail in Chapter 3 and Chapter 4.
This tool provides an interface for the Quality Assurance (QA) and Database
(DB) users to connect any health database (local or remote) that they want to clean. After
connecting to the database, the users can load the basic schema information pertaining to
the database such as database name, table names, column names, and their data types.
These schemas help users analyze the health-care data in a database and classify it
into categories, such as good, bad, expected, uncommon, normal, and optimal. Users can
define cleaning rules for specific data types, based on the need, by doing the following
three tasks:
1. Define a domain, which specifies the unit and the data type of the data to be cleaned.
2. Select the data-value classification method that fits the domain and will allow the
user to provide the most accurate means of characterizing the data as good, bad,
expected, uncommon, normal, and optimal. These classification methods are as
follows: 1) range, 2) format, 3) mean, 4) standard deviation, 5) discrete value and
6) neural classification method. See Chapter 2 for more details.
3. Determine unit conversions that map data in a single domain but with multiple units of measure to data with a single unit. For example, the domain for birth-weight values may include data measured either in ounces or grams; the conversion rule could map all birth-weight data measured in ounces to birth weights measured in grams.
The data-cleaning services available in MedDataCleaner include:
1. Identification of missing data: Identifies the null values existing within the medical databases.
2. Identification of outliers
3. Identification of unusual (yet syntactically correct) values (e.g., 999 or 9999).
Chapter 2 discusses user goals for a MedDataCleaner tool, summarizes a structural analysis, and lists the system’s functional requirements. Chapter 3 explains the design of MedDataCleaner in terms of user interfaces, application-layer components, and database structures. Chapter 4 provides a brief background on the technologies used and some of the implementation challenges. Chapter 5 discusses software testing to verify the correctness of the implementation of MedDataCleaner. Finally, Chapter 6 gives some ideas about the possible extensions for the project and future enhancements.
CHAPTER 2
SYSTEM ANALYSIS
This chapter documents the functional requirements for a medical data cleaning tool using Unified Modeling Language (UML) diagrams\(^1\)—e.g., UML use-case and class diagrams. Use-case diagrams visualize, specify, and document the behavior of a system. In a nutshell, the use-case diagrams in Section 2.1 provide developers with a high-level overview of what a MedDataCleaner should do. The class diagrams in Section 2.2 specify the structural makeup of the system [12]. Class diagrams describe the key objects and their relationships in a system. Class diagrams define three perspectives that help developers solidify the design of a system: conceptual, specification, and implementation. Section 2.3 includes the functional requirements. Functional requirements specify the functionality or specific behavior of the system.
2.1. User Goals
User goals are captured using use-case diagrams. Use-case diagrams consist of use cases and actors. An actor represents a set of roles that users play when interacting with the system. Actors can be human or can be automated systems [14].
Key actors identified in the MedDataCleaner include quality assurance users and database (DB) owners; their interactions are shown in Figure 1. Quality assurance users
---
\(^1\) UML or the *Unified Modeling Language* helps specify, visualize, and document models of software systems, including their structure and design. It includes use-case diagrams, class diagrams, interaction diagrams, state charts, activity charts, and more. UML can also be used for business modeling and modeling of other non-software systems. Readers who are unfamiliar with UML can refer to any of the many textbooks on the subject, or the official specification published by the *Object Management Group* (OMG).
play the role of defining cleaning rules for columns in the database tables. DB owners can be human or electronic systems, and are further classified into DB users and interactive systems. A DB user is responsible for maintaining a database and therefore performs various cleaning activities on the data. An interactive system also cleans the data in the databases it interacts with. The only difference between the two is that DB users are human while interactive systems are electronic.
Figure 1: Actors in MedDataCleaner
The use-case diagram in Figure 2 captures the goals of Quality Assurance users. QA users define cleaning rules and the cut-off values that the classification methods use to classify the data. To enable cleaning, each cleaning rule has to be provided with additional information such as subject group, domain, classification methods, and category hierarchy.
Figure 3: Use case diagram depicting the types of classification methods a user can define
Figure 3 shows the sub goals of the general goals of defining classification methods. A user can choose to define any of the following classification methods: discrete value, range subset, standard deviation, mean, format, and neural classification methods. These classification methods classify the data into categories based on cutoffs specified by the user.
In Figure 4, the use-case diagram captures the goals of a database user or an information system. To define cleaning definitions for the database, a user has to give a database definition and define a unit converter. Giving a database definition includes specifying the connection string. The unit converter is used when the unit defined for the cleaning rule differs from the unit defined for a column. After performing classification of the data, the MedDataCleaner provides statistics to the user. These statistics enable the user to analyze the data. The use case “Clean bad data in a database” is filled in gray because this project emphasizes data classification, with data cleaning as future work.
2.2 Structural Analysis
Figures 5 and 6 are UML class diagrams that describe key object classes for detecting the data problems and helping user correct those problems.
From a preliminary analysis of sample sets of medical data, I was able to observe the following general characteristics:
- Medical data values collected from laboratories, hospitals, and personal health systems are supposed to fall within some range of expected values or come from a set of possible discrete values.
- The data ranges may vary by gender. For example, the threshold for low high-density lipoprotein (HDL) cholesterol is < 40 mg/dL in men and < 50 mg/dL in women.
- Data sets often include outliers. Analyzing the data using the mean and standard deviation enables users to track values that are erroneous and also provides users with statistics on the overall data.
I addressed the above observations by modeling different kinds of data-value classification methods, see Figure 6.
2.2.1 Classification Methods:
The base class Classification Method represents the set of all possible data-value evaluation schemes. This class represents objects that classify data values into categories. Medical data is not always numerical; it also includes personal information, such as dates of birth and phone numbers, so a single classification method is not sufficient. Therefore, the Classification Method class is partitioned into four specializations:
i) Statistical Classification Method:
Statistical methods help identify outliers in a given set of data. Statistical Classification Method is further divided into Mean Classification Method and StandardDeviation Classification Method. The Mean Classification Method uses the lower, middle, and upper quartiles to classify data. The StandardDeviation Classification Method uses a combination of the mean and standard deviation to classify data.
ii) Training Classification Method:
This method uses nprtool, the neural network pattern recognition tool, to detect outliers.
iii) Subset Classification Method:
This classification method is further divided into Range Subset Classification Method and Discrete Subset Classification Method. The Range Subset Classification Method classifies data against a specified range. The Discrete Subset Classification Method handles medical data with special values like 999 and 777.
iv) Format Classification Method:
This is a classification method based solely on data-value syntax, for example, SSN or DOB formats.
The classification methods classify the data into categories, namely special meaning, good, and bad. Good values are subdivided into common and uncommon. Common values are again classified into normal and optimal.
2.2.2 Column Cleaning Rules:
The column-cleaning rules consist of a hierarchy of categories; see Figure 5. Defining a cleaning rule for a column involves selecting a classification method and assigning the classification method to a category. The classification method then classifies the data into the specified category. This structure enables classification methods to interact with each other. For example, to define a cleaning rule for a cholesterol column, the StandardDeviation Classification Method is used. This method classifies data into one of the categories by calculating mean - 3*standard deviation and mean + 3*standard deviation. The output gathered using this method can be fed to another classification method to get a finer classification; for example, it can be fed to the Range Subset Classification Method, which further classifies the data into another category.
Domain contains the unit and data type. Each cleaning rule has a domain associated with it. For defining the cleaning rule for a column, the domain defined in the cleaning rule has to be compatible with unit and data type of the column.
2.2.3 Unit Converter:
Unit conversion is important as medical data is represented in different units across the world. Also, the unit defined for the cleaning rule may be different from the column unit (e.g., the unit of lab values), but these units can still be compatible. For example, suppose the column to be cleaned is serum cholesterol and the unit for the column is reported as milligrams per deciliter (mg/dL), but the cleaning rule uses the unit millimole per liter (mmol/L). Though these units seem different, they are compatible, as mmol/L is the SI unit for serum cholesterol while mg/dL is the conventional unit. So, instead of creating a new cleaning rule, the existing cleaning rule can be used by converting the unit using the Unit Converter class. See Fig. 7.
2.3 Functional Requirements
The functional requirements capture the core functionality of the application. This section includes functional requirements for MedDataCleaner:
2.3.1 Usability
MedDataCleaner must provide user-friendly interfaces that enable the user to add a database, analyze the data within the database, and display statistics for the analyzed data.
2.3.2 Accessibility
Only users with a legitimate username and password will be able to access the MedDataCleaner.
2.3.3 Navigation
Users should be able to navigate easily among the cleaning rule, domain, and classification method forms.
2.3.4 Functionality
This section describes the functionality in detail for *MedDataCleaner*:
2.3.4.1 Add a database
Users should be able to add a new local or remote database by specifying the data source name (DSN) and the connection string.
2.3.4.2 Testing the connection and loading the database
Users should be able to test if the database connection is established successfully or not. If successfully established, it loads the tables and columns of the database.
2.3.4.3 Defining cleaning rule
This tool should enable users to define, save, edit, and delete a cleaning rule. The program should store the following information for a cleaning rule:
- **Name**: Name of the cleaning rule cannot be null or empty
- **Subject Group**: Subject group is optional. Some sample subject groups are age, gender, etc.
2.3.4.4 Defining classification method
This tool shall enable users to define a classification method. Classification methods should classify data values into categories. A classification method consists of the following information:
- **Name**: Classification method name cannot be null or empty
- **Type of Classification method**: This tool allows the user to select a classification method from among the range subset, discrete, standard deviation, mean, format, and neural classification methods.
2.3.4.4.1 Mean classification method
When used for classification, this classification method calculates the mean for the data and enables the user to classify the data based on the mean.
2.3.4.4.2 Standard deviation classification method
When used for classification, this classification method classifies the input data based on both mean and standard deviation (i.e., the standard deviation factor).
2.3.4.4.3 Neural classification method
This classification method classifies a column value into a category using the nprtool (neural network pattern recognition tool).
2.3.4.4.4 Range subset classification method
When used for classification, the range subset classification method should store the maximum and minimum values. At least one of the minimum or maximum values should not be null or empty.
2.3.4.4.5 Discrete subset classification method
When used for classification, the discrete subset classification method should store all discrete unique values.
2.3.4.4.6 Format classification method
When used for classification, the format classification method should store the format. Format cannot be null or empty.
2.3.5 Defining domain
This tool will enable users to save, edit, and delete domains. Users should be able to define a domain, which consists of the following information:
- Name: Domain name cannot be empty or null.
- Data-type: Data type cannot be null and should be among the following: numeric, string, Boolean, or timestamp.
- Unit category: Unit category is optional. Unit category represents the measuring criteria, for example, length, volume, speed, etc.
- Unit: For a given unit category, there exists a set of units. For example, unit category length has meter, centimeter, and millimeter as units. Unit is optional.
2.3.6 Performing unit conversion
Unit converter should enable the user to convert the column unit to the cleaning rule unit. It stores the following information:
- From unit: From unit cannot be null or empty.
- To unit: To unit cannot be null or empty.
- Conversion factor: cannot be null or empty.
2.3.7 Generate statistics of data
This tool shall enable the user to generate overall statistics of the data analyzed, i.e., counts of the bad, good, expected, normal, and optimal values within the input data.
CHAPTER 3
ARCHITECTURAL DESIGN
This chapter explains the architectural design for a software tool, called MedDataCleaner, that satisfies the requirements summarized in Chapter 2. I use UML package and class diagrams to communicate the architectural design because they focus on module organization and data structures [21]. The architecture of the MedDataCleaner can be organized in three layers: the user interface layer, the application layer, and the database layer. Fig. 8 depicts the package diagram that captures all layers of the MedDataCleaner. The user interface package contains graphical user interface classes and Windows Forms that allow users to view program features. The application package includes classes that contain the functional logic for adding a database and for defining, viewing, and editing cleaning rules, classification methods, and domains. The database layer, represented as the metadata layer in Figure 8, contains the DB-objects and DB-lists for all the tables in the database. The DB-objects and DB-lists are generated by Vitruvian DB-objects.
3.1 Front End Layer
3.1.1 User interface
Most of the software developed today is interactive software. Having a good interface makes things simpler for the user who is interacting with the software. Developing good interfaces increases loyalty and reduces support costs. The user interface for an application can make it or break it [16]. The usability of software depends on the user interface of the software.
Designing and implementing user interfaces for software requires a lot of time and effort. Moreover, the code written for implementing a user interface often makes up over half the total application code. The process of designing interfaces could be made easier and simpler for a programmer if there are some additional features in the development environment which could help the programmer maintain right–left alignment, visual closure, etc.
The .Net GUI development environment supports placing and positioning of controls with respect to edges on forms using anchors, but it does not support positioning or resizing of controls with respect to each other. Therefore, if the position of a control in a form is changed, all the controls on the form have to be rearranged accordingly. Also, an extra amount of code must be written to attain liquid layout. All this accounts for additional development time and effort, which could be reduced significantly by linking controls graphically.
To overcome the above stated hurdles, I used the Vitruvian framework to design the GUI in this project. Section 3.2 discusses this more in detail.
### 3.1.2 Background on the Vitruvian Framework
The Vitruvian framework supports several techniques for assisting the programmers with graphic user interface (GUI) development, including alignment links, which enable developers to constrain the size and position of GUI components; layouts, which allow them to construct GUIs programmatically; and templates, which capture common layout patterns for users.
The alignment links feature enables developers to constrain the size and position of GUI components directly in Microsoft’s Visual Studio Form Designer without needing to write any code by hand. Alignment links provide the following six ways to link controls together:
1. **Position**: when the control is repositioned, maintains the relative distance to the parent.
2. **Resize**: on resize, maintains the relative distance to the parent.
3. **Percent**: when the control is moved, maintains the relative distance to the parent as a percentage.
4. **Center**: moves the control so that it is centered on the parent.
5. **Same Size**: resizes the control so that it is the same size as the parent.
6. **Percent Size**: resizes the control so that it maintains its relative size as a percentage of the parent.
A developer must first add *AlignLinkDesigner* to Visual Studio to enable the alignment link feature. Now, when a control is selected, the *AlignLinkDesigner* displays a number of glyphs on the edges and vertices of the control. See Figure 9. To create an alignment link, select the child control, click one of the align-link glyphs and then finally click on the target areas that indicate probable link points. To remove an alignment link, click on the align-link glyph.
**Figure 9: Six different align-link glyphs**
Some common uses and examples of alignment links are shown below:
In example 1, the set of alignment links shown in Figure 10 causes the label to maintain the spatial relationship with the text box. This relationship is maintained even if the label text changes, the label font changes, or if the text box is moved or resized.
In example 2, the alignment links shown in Figure 11 make the *Delete* button maintain the same size as the *Add* button, and they also maintain the spacing between the two buttons.
In example 3, the resizing alignment links in Figure 12 attach the box to the edge of the form. When the form is resized the box will also be resized to maintain the distance relationship with the edge of the form.
The alignment link designer adds the following graphics in Fig. 13 to the bottom edge of the control. The eye graphic toggles the visibility of the alignment links for the selected control. The lock graphic can be used to deactivate and activate alignment links for the
selected control. The trash graphic can be used to remove all alignment links from the selected control.
**Figure 13: Eye, lock, and trash graphics**
### 3.1.3 MedDataCleaner’s User interface Design
The user interface package contains classes and windows forms. These forms enable the user to perform the following tasks:
- Log in securely
- Add new database
- Select a database for data cleaning
- Assign unit for a column
- Perform unit conversion
- Define/edit domain
- Define/edit classification method
- Define/edit cleaning rule
Figure 14: User interface navigation
Figure 14 shows the forms that comprise MedDataCleaner’s user interface and the possible user navigations between the forms. Below is a description of the Windows Forms that I designed using alignment links.
The Login form enables users with legitimate usernames and passwords to log in and gain access to services offered by MedDataCleaner. In this form, I used the alignment links described in Examples 1, 2, and 3 to position the text boxes and labels (e.g., username and password), to make the login button the same size as the cancel button, and to resize the picture box with the form. Also, I resized the text boxes corresponding to username and password with the form. Similarly, I used the alignment links for position, resize, center, and same size for all the forms described below.
The **MyDatabases** form displays a list of databases added along with some additional details such as when it was accessed last and connection string. Also, it enables users to navigate to SpecifyDbConnection. The **SpecifyDbConnection** form enables users to add a new database. Users can add a new database by specifying data source name, driver, and the connection-string parameters. Users can even test if the connection is established successfully or not.
The **DbCleaningForm** is the most important form in the entire application and plays a significant role. Users can perform multiple tasks from this form, such as defining and editing cleaning rules, domain, and classification method.
The **SelectUnit** form enables the user to assign the unit for a column. If the user is not aware of unit then the user can navigate to the **UnitConversionTable** form by clicking on the link provided.
The **UnitConversionTable** form shows user a list of medical components along with their SI, conventional units, and unit conversion factor. This form even enables the user to search for the unit of the medical component by entering the name of the medical component.
The **DomainSpecification** form enables the user to define and edit the domain.
The **EditClassificationMethod** specification form enables the user to define and edit the classification method.
The **EditCleaningRuleSpecification** form enables the user to define and edit the cleaning rule. Also, enables the user to define a new classification method and domain.
The **UnitConverter** does as its name implies. If the unit categories of the column and cleaning rule are the same but the units are not compatible, then this form enables the
user to perform unit conversion by specifying the unit conversion factor and the unit conversion rule.
The ResultForm form presents final statistics of the data to the user after analysis and classification.
3.1.4 My experience with alignment links
Alignment links are easy to learn and use. These made designing user interfaces simpler for me. In this project, alignment links provided remarkable benefits in the following areas:
a) Maintainability
Unlike in Visual Designer, I could easily make changes to the screens and controls after the initial development of the screens.
b) Usability
Alignment links are as usable as Visual Studio’s Visual Designer and equally easy to use and learn. Initially, I would get confused among the glyphs, but in no time I became very comfortable using them.
c) Productivity
When a UI control was moved, I didn’t have to make any explicit adjustments to the other controls on the form; the controls rearranged automatically because they were linked using alignment links.
d) Lines of code
Using alignment links, I have created forms that support liquid layout at runtime, thus reducing the lines of code.
### 3.2 Application layer
Robustness, flexibility, reusability, scalability, and maintainability of software mainly depend on the application service layer design.
This layer consists of implementation classes that contain all the code behind the forms. These classes map the logic in the implementation classes, captured in the analysis, to the view. The class diagram in Figure 15 shows DatabaseDef and related classes.
**Figure 15: Class diagram describing Database Def**
The user (QA user or DB owner) should first choose a database to clean. The tool enables the user to connect to the database using a data source name or connection-string parameters, and it stores the names of databases, tables, columns, data types, units, and connection strings. Since each user can have access to multiple databases, a user holds a list of Database Def objects. Each database consists of multiple tables, which in turn consist of multiple columns. So, a Database Def object contains a list of Table Def objects, and a Table Def object consists of a list of Column Def objects. A Column Def object doesn’t contain the values within the column; rather, it simply defines that column. Data values for a column are fetched as needed during the cleaning process. Medical data values have different data types, such as integer, float, timestamp, Boolean, and string. Also, medical laboratories and practitioners in the US use conventional units for reporting the results of clinical laboratory measurements, while those in other countries use International System (SI) units.
To address this problem of different data types and units, the domain concept was designed. A domain is defined for a cleaning rule and specifies the data type and unit of a column. A cleaning rule can be associated with a column only if the data type and unit of the cleaning rule are compatible with those of the column. The MedDataCleaner uses numeric, string, date and time, and Boolean data types. If the unit categories of the column and cleaning rule are the same but the units are incompatible, then unit conversion can be performed using the unit converter instead of creating redundant cleaning rules.
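To make the compatibility check concrete, here is a minimal sketch in C#; the class and member names are illustrative stand-ins, not MedDataCleaner's actual API:

```
using System;

// Illustrative sketch of a domain: a data type plus an optional unit category
// and unit. A cleaning rule can be bound to a column only when compatible.
public sealed class DomainSketch
{
    public string DataType { get; }       // numeric, string, Boolean, timestamp
    public string UnitCategory { get; }   // e.g. "concentration"; may be null
    public string Unit { get; }           // e.g. "mg/dL"; may be null

    public DomainSketch(string dataType, string unitCategory = null, string unit = null)
    {
        DataType = dataType; UnitCategory = unitCategory; Unit = unit;
    }

    // Compatible when the data types match and the unit categories match;
    // differing units within one category are handled by the unit converter.
    public bool IsCompatibleWith(DomainSketch other) =>
        DataType == other.DataType &&
        string.Equals(UnitCategory, other.UnitCategory, StringComparison.OrdinalIgnoreCase);
}
```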
Unit conversion is another key component of the tool. Data reported in different units falls into different numerical ranges. Therefore, if the ranges defined for SI units are used for data in conventional units, then the resultant analysis becomes faulty. In other words, unit conversion prevents wrong analysis of data. The unit converter is discussed in more detail in the next chapter.
Cleaning Rule in Figure 16 is the most important class in the system. Most of the tool’s functionality is embedded within the cleaning rule. Each column to be cleaned has to have a cleaning rule associated with it. It is the cleaning rule that allows MedDataCleaner to classify actual data values into categories. Each cleaning rule has a domain and a classification-category hierarchy associated with it; the category hierarchy has categories, each of which in turn has a classification method associated with it. The classification method is responsible for classifying the data within a column into the category it is associated with.
In a category hierarchy, it is not always necessary to have a root category. To address the cases without a root category, my teammate designed a subtree structure without a root. This structure enables the classification methods to be accessed hierarchically, i.e., the result of the parent category is passed as an input to a child category.
All classification methods (standard deviation classification method, mean classification method, format classification method, range-subset classification method) classify the data values into categories and update the classification. This is done using classify() and update() methods.
An example of a category hierarchy: special values - if a value is classified under special values, it cannot be considered good or bad data. Examples of special values are 999 and 9999, which are often seen in medical databases. Good values - values classified as good can further be classified as common and uncommon. Similarly, values classified as common can further be classified as normal and optimal. A category
at each level is associated with a classification method. For example, a user can use the statistical classification method to classify the values into the “good” category and use the range subset classification method to classify the values in “good” into “normal.” This hierarchy enables the data to be classified more efficiently.
The MedDataCleaner consists of the following classification methods (a minimal interface sketch follows the list):
1. **Standard deviation classification method**: This classification method classifies the values into categories based on the mean and standard deviation. The implementation of this classification method is discussed in detail in the next chapter.
2. **Mean classification method**: This classification method classifies a column value into a category based on its deviation from the mean, which is discussed in more detail in the next chapter.
3. **Neural classification method**: This classification method classifies a column value into a category using the nprtool (neural network pattern recognition tool).
4. **Range subset classification method**: Medical values can be classified as optimal or normal based on the range. So, this classification method checks if the column value is within the maximum and minimum range.
5. **Discrete subset classification method**: Medical values can be represented as single value instead of a range. So, this classification method checks if the column value is one of the discrete values.
6. **Format classification method**: This classification method checks if a column value is in a specified format or not.
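The following is a minimal C# sketch of this design: a common classification contract with two of the six methods filled in. The names mirror the design discussion above and are not the tool's actual code:

```
using System;

public interface IClassificationMethod
{
    // Returns true when the value belongs to the method's category.
    bool Classify(string value);
}

public sealed class RangeSubsetMethod : IClassificationMethod
{
    private readonly double _min, _max;
    public RangeSubsetMethod(double min, double max) { _min = min; _max = max; }

    // A value is in the category when it parses and falls within [min, max].
    public bool Classify(string value) =>
        double.TryParse(value, out var v) && v >= _min && v <= _max;
}

public sealed class FormatMethod : IClassificationMethod
{
    private readonly System.Text.RegularExpressions.Regex _format;
    public FormatMethod(string pattern) =>
        _format = new System.Text.RegularExpressions.Regex(pattern);

    // Purely syntactic check, e.g. an SSN pattern.
    public bool Classify(string value) => _format.IsMatch(value);
}

public static class ClassificationDemo
{
    public static void Main()
    {
        IClassificationMethod normalHdl = new RangeSubsetMethod(40, 100);
        IClassificationMethod ssn = new FormatMethod(@"^\d{3}-\d{2}-\d{4}$");
        Console.WriteLine(normalHdl.Classify("55"));      // True
        Console.WriteLine(ssn.Classify("123-45-6789"));   // True
    }
}
```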
**Figure 16**: Class diagram showing database design of cleaning rule
### 3.3 Neural-network toolkit background and design
Neural Network Toolbox provides tools such as the pattern-recognition tool (nprtool), fitting tool (nftool), clustering tool (nctool) and time-series tool (ntstool) for designing, implementing, visualizing, and simulating neural networks. Neural networks are used for applications where data sets to be examined are large and formal analysis would be difficult, such as pattern recognition [18]. The toolbox supports implementations for feed-forward networks, radial-basis networks, dynamic networks, and self-organizing maps.
The MedDataCleaner uses the neural network pattern recognition tool for classifying data into categories. This is discussed more in detail in the next chapter.
To connect to and access the Neural Network Toolbox from C#, the following steps have to be followed (a small sketch follows the list):
1. In C#, navigate to Project -> Add Reference, then select the COM tab.
2. Under COM tab, select Matlab application.
3. Use private MLApp.MLAppClass matlab to create a C# Matlab object.
4. Now, code can be executed in MATLAB via C# using matlab.Execute("Matlab code").
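Putting the steps together, a minimal sketch might look as follows; it assumes the MLApp COM reference has already been added, and the MATLAB commands shown are illustrative:

```
using System;

public static class MatlabBridgeSketch
{
    public static void Main()
    {
        var matlab = new MLApp.MLAppClass();   // step 3: create the COM object

        // step 4: execute MATLAB code as strings; Execute returns MATLAB's
        // console output, which helps when debugging opaque COM exceptions.
        Console.WriteLine(matlab.Execute("x = [3 4 5 7 21];"));
        Console.WriteLine(matlab.Execute("m = mean(x)"));
    }
}
```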
CHAPTER 4
UNDERLYING TECHNOLOGY AND IMPLEMENTATION DETAILS
To implement MedDataCleaner, I used C#.NET 2005, PostgreSQL 8.3, Vitruvian DB objects for ORM support, and Vitruvian alignment links for designing the GUI. Since Vitruvian is not fully compatible with C#.NET 2008, we used C#.NET 2005. Assessing the usability and improvement of Vitruvian-DbObject and alignment links is a secondary goal of the project. Alignment links are discussed in Chapter 3.
4.1 Introduction to Vitruvian DB-Objects
Object-relation mapping (ORM) maps objects in an object-oriented system to the data stored in a relational database. DB-Objects are similar to ORM.
DB-Objects are used to represent a relational model in an object model. A table in a relational model maps to a class in an object model, and a column in a relational model maps to properties of class in an object model.
Data within the databases passes through a data entity before being displayed on a user interface. See Fig. 17. After editing, the data is transferred back through the data entity and stored in the database.
Figure 17: Relationship between DB, data entity and UI
DB-Objects provide the following features:
1. Load(): Load the data into the DB-Object.
2. Reload(): Load the new set of data from database.
3. Save(): Save the DB-Object to the database.
4. Delete(): Delete the DB-Object from the database.
5. ResetValues(): Reset the properties of a DB-Object.
6. RelationalSave(): Save the DB-Object and the children tables of the current DB-Objects.
7. RelationalDelete(): Delete the DB-Object and the children tables of the current DB-Objects.
DB-Object also keeps track of its current state. A DB-Object can be in one of the following states:
1. New: DB-Object is just created and not saved into the database.
2. Synced: DB-Object is in synchronization with the database.
3. Modified: one of the DB-Object’s properties has been modified but not yet saved to the database.
4. Deleted: DB-Object is deleted.
5. Detached: DB-Object exists in the database but is marked for deletion.
The DB-Objects framework includes a data wizard that automatically converts tables into classes and columns into properties. It is even capable of distinguishing between one-to-one and one-to-many relationships. The user can customize class names, properties, and relationships when generating DB-Objects.
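As a hypothetical illustration of this lifecycle, the sketch below uses a hand-written stand-in for a wizard-generated class; the stub bodies only echo the state changes and do not touch a database:

```
using System;

// Stand-in for a class the DB-Objects data wizard would generate.
// The real generated class would issue SQL; these stubs only track state.
public class CleaningRuleStub
{
    public string Name { get; set; }
    public string State { get; private set; } = "New";

    public void Load()   { State = "Synced";  Console.WriteLine("loaded"); }
    public void Save()   { State = "Synced";  Console.WriteLine("saved"); }
    public void Delete() { State = "Deleted"; Console.WriteLine("deleted"); }
}

public static class DbObjectUsageSketch
{
    public static void Main()
    {
        var rule = new CleaningRuleStub();
        rule.Load();                        // New -> Synced
        rule.Name = "Cholesterol range";    // would flip State to Modified
        rule.Save();                        // Modified -> Synced
        Console.WriteLine(rule.State);
    }
}
```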
4.2 Implementation details
This section discusses the implementation details of the MedDataCleaner. It also covers the challenges faced and solutions.
My contribution and the features I implemented in this project are:
1) **Graphical user interface:** I designed the user interfaces for the MedDataCleaner using alignment links, which is discussed in detail in Chapter 4.
2) **Connection string:** Adding a database can be done either by using DSN or by building a connection string. To build the connection string, connection-string parameters are needed. To build the connection string, I referred to the connection strings of various database servers including PostgreSQL, Oracle, SQL Server 2008, etc. at [www.connectionstrings.com](http://www.connectionstrings.com). Each of the database servers had several connection strings. For example, PostgreSQL contains the following connection strings:
i) **Standard**
```
Server=127.0.0.1;Port=5432;Database=myDataBase;User
Id=myUsername;Password=myPassword;
```
ii) **Command timeout setting**
```
Server=127.0.0.1;Port=5432;Database=myDataBase;User
Id=myUsername;Password=myPassword;CommandTimeout=20;
```
The CommandTimeout parameter is measured in seconds and controls how long to wait for a command to finish before giving an error.
iii) **Connection timeout setting**
```
Server=127.0.0.1;Port=5432;Database=myDataBase;User
Id=myUsername;Password=myPassword;Timeout=15;
```
To add a PostgreSQL database, the user can select the driver and select a connection type, which could be either of the above, and the connection string parameters corresponding to a connection type are displayed automatically.
3) **Unit converter:** After adding a database, the user can select a medical data column to clean. If the column doesn’t have a unit, the user can assign one. To assign a unit for a column, the user has to select the unit category the unit belongs to; all the units corresponding to that unit category are then displayed as a list. For example, for the unit category height, the possible units are inch, foot, and centimeter. The unit field can also be left empty. If the user is not aware of the unit for the column, I provided a link to search for the unit of that column.
If the unit for the column is not empty, then a cleaning rule can be defined for the column only if the units are compatible. For example, suppose the user is cleaning the data column for creatine, the unit for the column is milligram per deciliter (the conventional unit for creatine), and the unit defined in the cleaning rule is micromole per liter (the SI unit for creatine). In such cases, unit conversion avoids creating multiple cleaning rules for the same column with different units. The unit of the column can be converted to the cleaning-rule unit by multiplying the data values within the column by a conversion factor of 76.26, as sketched below. However, the unit conversion can only be performed if the unit categories of both the column and the cleaning rule are the same.
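A minimal sketch of such a conversion rule, with illustrative names, is shown below; the from-unit, to-unit, and factor mirror the creatine example above:

```
using System;

// Illustrative unit-conversion rule: a from-unit, a to-unit, and a factor.
public sealed class UnitConversionSketch
{
    public string FromUnit { get; }   // e.g. "mg/dL" (column unit)
    public string ToUnit { get; }     // e.g. "umol/L" (cleaning-rule unit)
    public double Factor { get; }     // e.g. 76.26 for creatine

    public UnitConversionSketch(string from, string to, double factor)
    {
        FromUnit = from; ToUnit = to; Factor = factor;
    }

    // Convert a column value into the cleaning rule's unit.
    public double Convert(double value) => value * Factor;

    public static void Main()
    {
        var creatine = new UnitConversionSketch("mg/dL", "umol/L", 76.26);
        Console.WriteLine(creatine.Convert(1.1));   // 83.886 umol/L
    }
}
```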
4) **Visual representation view:** After adding a database, if the user wishes to view a visual representation of the data column to be cleaned, this feature presents a scatter plot of the data to the user.
5) **Classification methods:** The design and implementation of the mean, standard-deviation, and neural classification methods are described below.
**Mean classification method:** In the previous version, I classified the data within a column as good or bad based on the mean value. However, the mean is not a robust basis for classification, because outliers distort it. For example, consider the following set of data:
3 4 5 7 21 199 1000 9999
The mean of this data is 1404.75, so classifying values above the mean as bad and values below it as good gives a faulty analysis: only a single data point lies above the mean. After reviewing several mathematical methods, I arrived at the following approach. In this new mean method, the data is first sorted, and then the lower, middle, and upper quartiles are located using the following position formulas:

Lower quartile position = \( \frac{1}{4}(n+1) \), where \( n \) is the number of elements in the data;

Middle quartile = the median;

Upper quartile position = \( \frac{3}{4}(n+1) \).
Only the data between the lower and upper quartiles is then used, since this portion is not affected by outliers. To classify the data at a finer granularity, the same procedure is repeated for the data below the lower quartile and above the upper quartile.
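The following Python sketch applies this quartile-based mean to the example data above; it is an illustration of the method, not the tool's actual code:

```python
import math

def interquartile_mean(data):
    """Mean of the values between the lower and upper quartile positions."""
    values = sorted(data)
    n = len(values)
    lo = (n + 1) / 4        # 1-based lower quartile position
    hi = 3 * (n + 1) / 4    # 1-based upper quartile position
    # Round the (possibly fractional) positions inward to whole indices.
    kept = values[math.ceil(lo) - 1 : math.floor(hi)]
    return sum(kept) / len(kept)

data = [3, 4, 5, 7, 21, 199, 1000, 9999]
print(sum(data) / len(data))      # 1404.75 -- distorted by the outliers
print(interquartile_mean(data))   # 58.0 -- mean of [5, 7, 21, 199] only
```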
**Standard-deviation classification method**
Similarly, for the standard-deviation classification method, computing the standard deviation over the entire data set can give a faulty analysis, since the data might include outliers. So the mean is calculated using the quartile-based method described above, and the standard deviation is computed only over the data between the lower and upper quartiles. To classify the data, the bounds mean − 3 × standard deviation and mean + 3 × standard deviation are calculated; here, 3 is the multiplication factor for the standard deviation. To classify the data further, the same steps are repeated with a varied multiplication factor.
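A corresponding classification step can be sketched as follows; the quartile selection mirrors the previous snippet, and the code is again illustrative only:

```python
import statistics

def classify_by_std(data, k=3):
    """Label values 'good' if they lie within mean +/- k*sigma.

    Mean and sigma are computed from the interquartile values only,
    so outliers cannot inflate them; k is the multiplication factor.
    """
    values = sorted(data)
    n = len(values)
    lo, hi = (n + 1) // 4, 3 * (n + 1) // 4   # integer quartile positions
    middle = values[lo:hi]
    mean = statistics.mean(middle)
    sigma = statistics.pstdev(middle)
    lower, upper = mean - k * sigma, mean + k * sigma
    return [(v, "good" if lower <= v <= upper else "bad") for v in data]

print(classify_by_std([3, 4, 5, 7, 21, 199, 1000, 9999]))
# 3..199 fall within the bounds; 1000 and 9999 are labeled 'bad'
```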
**Neural classification method**
The input data is arranged as columns of a matrix, and target vectors are arranged so that they represent the classes to which the input data is assigned; the target vectors are calculated using statistical methods. The nprtool (neural network pattern recognition tool) in MATLAB is then invoked from C#, the validation and testing fractions and the number of hidden neurons are adjusted accordingly, and the number of epochs is set to 250. The resulting confusion matrix shows which data was classified correctly and which incorrectly. The main challenge I faced while implementing this classification method was debugging: if an exception occurred in MATLAB, the error reported in C# was only a generic “COM exception,” with no other details about the exception, so I had to debug the program by first placing breakpoints in both C# and MATLAB and then integrating them.
**Dealing with null values and data such as 999 or 9999**
In all the classification methods discussed above, the mean plays an important role. But the data contains null values: taking the mean with null values treated as zeros would result in loss of information, while replacing the null values with the average of the other values can produce faulty data, because outliers affect an average. So I replace the null values with the mean of the data that falls between the lower and upper quartiles, which outliers do not affect.
This project addresses data analysis with data cleaning as an extension; the method discussed above can be used as one way of replacing null values or sentinel values such as 999 or 9999.
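A sketch of such a replacement step, reusing `interquartile_mean` from the earlier snippet (the sentinel list is a hypothetical example):

```python
def fill_missing(column, sentinels=(999, 9999)):
    """Replace nulls (None) and sentinel codes with the interquartile mean."""
    observed = [v for v in column if v is not None and v not in sentinels]
    fill = interquartile_mean(observed)  # from the earlier sketch
    return [fill if v is None or v in sentinels else v for v in column]

# None and 9999 are both replaced by the outlier-free mean (58.0 here).
print(fill_missing([3, None, 5, 7, 9999, 21, 199, 1000]))
```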
CHAPTER 5
SOFTWARE TESTING
Software testing is a process of validating and verifying the quality of a product to provide stakeholders with information about the benefits and risks of implementing the software product [19]. To test the quality and usability of the MedDataCleaner, we performed unit testing, integration testing, and user-acceptance testing. Sections 5.1, 5.2, and 5.3 contain details about unit, integration, and user-acceptance testing, respectively.
5.1 Unit testing
Unit testing takes the smallest piece of testable software in the application, isolates it, and determines if it behaves as expected. Each unit is tested separately before being integrated into modules. A large percentage of defects are identified during unit testing [20].
Test cases are the basic building blocks of unit testing. Test cases are written by developers to determine the results that an implemented method produces on a wide set of inputs provided by the user.
In the MedDataCleaner, unit testing was performed for the select-unit, DB-cleaning, unit-conversion-table, unit-converter, domain creation and editing, and cleaning-rule units, as well as for the mean, standard-deviation, and neural classification methods.
5.2 Integration testing
Integration testing is a logical extension of unit testing. In integration testing, two individual units already tested are combined into a component and tested. The idea is to test combinations of pieces and eventually expand the process to test all the modules with
those of other groups. Eventually all the modules making up a process are tested together [20]. Integration testing is performed in three ways: the top-down, bottom-up, and umbrella approaches.
For the MedDataCleaner, we followed a bottom-up approach, i.e., the lowest-level units were tested and integrated first. The units were integrated and tested in the following order:
1) Domain and cleaning rule
2) Classification method and cleaning rule
3) Column definition and cleaning rule
4) Column definition and unit converter
5) Column definition, unit converter, and cleaning rule
6) Column definition, cleaning rule, and classification method
7) Column definition, unit converter, cleaning rule, and classification method
8) Testing integration of neural network toolkit with C#
Testing was performed on real medical data. All the bugs encountered were resolved during testing.
5.3 User-acceptance testing
User-acceptance testing is a phase of software development in which the software is tested in the “real world” by the intended audience [20], usually before delivery of the product. The MedDataCleaner was fully tested against the requirements defined in the analysis and design stages on real medical data, and the tool worked efficiently in classifying the data into good, bad, null, common, and optimal values.
CHAPTER 6
CONCLUSION AND FUTURE WORK
The MedDataCleaner works efficiently in classifying data into various categories using cleaning rules, domains, classification methods, and the unit converter. Unlike other tools, the MedDataCleaner lets the user choose the classification method used to classify the data. Cleaning rules can be applied during the cleaning process, the domain and unit converter help in handling different units and data types, and the classification method assigns the values to categories.
The data-cleaning tool discussed in this project is currently a GUI-based tool; as a next step in its development, it could be implemented as a web application. Currently, the MedDataCleaner cleans a column using a cleaning rule and saves the cleaning rule corresponding to the column. As future work, when the user wishes to clean another column, the tool could suggest an existing cleaning rule that could be used to clean the data within that column.
REFERENCES
[4] J. I. Maletic (Kent State University) and A. Marcus (Wayne State University), “Data Cleaning: A Prelude to Knowledge Discovery.”
[9] S. Abiteboul, S. Cluet, T. Milo, P. Mogilevsky, J. Siméon, and S. Zohar, “Tools for Data Translation and Integration.”
A Framework for Verifying Depth-First Search Algorithms
Peter Lammich René Neumann
Technische Universität München
{lammich, neumannr}@in.tum.de
Abstract
Many graph algorithms are based on depth-first search (DFS). The formalizations of such algorithms typically share many common ideas. In this paper, we summarize these ideas into a framework in Isabelle/HOL.
Building on the Isabelle Refinement Framework, we provide support for a refinement based development of DFS based algorithms, from phrasing and proving correct the abstract algorithm, over choosing an adequate implementation style (e.g., recursive, tail-recursive), to creating an executable algorithm that uses efficient data structures.
As a case study, we verify DFS based algorithms of different complexity, from a simple cyclicity checker, over a safety property model checker, to complex algorithms like nested DFS and Tarjan’s SCC algorithm.
Categories and Subject Descriptors F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; D.2.4 [Software Engineering]: Software/Program Verification—Correctness Proofs
Keywords Graph algorithms; interactive theorem proving; Isabelle/HOL; refinement proof
1. Motivation
Algorithms based on depth-first search (DFS) are widespread. They range from simple ones, like cyclicity checking and safety property model checking, to more complicated ones such as nested DFS [3, 6, 16], and Tarjan’s algorithm for computing the set of strongly connected components (SCCs) [17]. In our verified LTL-model checker CAVA [4] we find multiple DFS-algorithms side-by-side: Nested DFS for counter example search, SCC-algorithms for counter example search and optimization of Büchi-automata, and graph search for counter example reconstruction.
Despite their common base, a lot of duplicated effort is involved in formalizing and verifying them, due to their ad hoc formalizations of DFS. The goal of this paper is to provide a framework that supports the algorithm developer in all phases of the (refinement based) development process, from the correctness proof of the abstract algorithm to generation of verified, efficiently executable code. In summary, we want to make the verification of simple DFS-based algorithms almost trivial, and greatly reduce the effort for complex algorithms.
2. Introduction
Depth-first search is one of the basic algorithms of graph theory. It traverses the graph as long as possible (i.e., until there are no more non-visited successors left) along a branch before tracking back. As mentioned in the previous section, it is the base of a multitude of graph and automata algorithms. In this paper, we present a framework in Isabelle/HOL [13] for modeling and verification of DFS based algorithms, including the generation of efficiently executable code.
The framework follows a parametrization approach: We model a general DFS algorithm with extension points. An actual algorithm is defined by specifying functions to hook into those extension points. These hook functions are invoked whenever the control flow reaches the corresponding extension point. The hook functions work on an opaque extension state, which is independent of the state of the base DFS algorithm.
Properties of the algorithm are stated by invariants of the search state. To establish new invariants, one only has to show that they are preserved by the hook functions. Moreover, our framework supports an incremental approach, i.e., upon establishing a new invariant, already established invariants may be assumed. This modularizes the proofs, as it is not necessary to specify one large invariant.
Our framework features a refinement based approach, exploiting the general concepts provided by the Isabelle Refinement Framework [11]. First, an abstract algorithm is specified and proven correct. Next, the abstract algorithm is refined towards an efficient implementation, possibly in many steps. Refinement is done in a correctness preserving way, such that one eventually gets correctness of the implementation. The refinement based approach introduces a separation of concerns: The abstract algorithm may focus on the algorithmic idea, while the refinements focus on how this idea is efficiently implemented. This greatly simplifies the proofs, and makes verification of more complex algorithms manageable in the first place.
On the abstract level, we provide a very detailed base state, containing the search stack, timing information of the nodes, and sets of visited back, cross, and tree edges. On this detailed state, we provide a large library of standard invariants, which are independent of the extension state, and thus can be re-used for all correctness proofs.
For refinement, we distinguish two orthogonal issues: Structural refinement concerns the overall structure of the algorithm. To this end, our framework currently supports recursive and tail-recursive implementations. Data refinement concerns the representation of the state. It allows to refine to a concrete state with its content tailored towards the specific requirements of the parametrization. This includes projecting away parts of the state that are not needed by the actual algorithm, as well as representing the state by efficient
data structures. Here, our framework supports some commonly used properties of the base state, and an integration with the Autoref-Tool [7]. This tool synthesizes a refined algorithm that uses efficient and widely applicable data structures provided by the Isabelle Collections Framework [10], and can be exported to executable code by the code generator of Isabelle/HOL [5].
The framework comes bundled with multiple instantiations, which mostly stem from requirements of the CAVA model checker [4]. We provide implementations for a cyclicity checker, a safety property model checker, the nested DFS variant of [16] and Tarjan’s SCC algorithm.
The whole formalization is available online at http://cava.in.tum.de/CPP15.
Structure The structure of the paper follows roughly the layout from above. We start with an overview about related work in Section 3. In Section 4, we describe the generic framework of the parametrized DFS. After a primer on refinement in Section 5, the proof architecture and library invariants are covered in Section 6. Finally, in Section 7, we attend to the different refinement phases, before concluding the paper in Section 8.
3. Related Work
While depth-first search is a well-known and widespread algorithm, not much work has been done on its formal verification. A very basic stand-alone formalization was done by Nishihara and Minamide [13], where two variants of a basic DFS are given (one with explicit stack, one without) and their equality is shown. Furthermore, a couple of basic invariants are proved and code export is possible. But there is neither parametrization (it can solely compute the set of reachable nodes) nor flexible representation of the graph: It is fixed as a list of pairs.
Another basic approach is given by Pottier [15], where DFS is formalized in Coq to prove correct Kosaraju’s algorithm for computing the strongly connected components. This formalization also allows for program extraction, but does not allow easy extension for use in other algorithms.
We described a first approach to a DFS framework in [12], on which this paper is based. The availability of a more advanced refinement framework (cf. Section 5) allows for a cleaner, more general, and more elegant framework. One notable improvement is that the hook functions are now specified in the nondeterminism monad of the refinement framework. This way the refinement based approach can also be used to develop the hook functions, which was not possible in [12]. Another improvement is the introduction of the most specific invariant (cf. Section 6), opposed to the notion of DFS-constructable in [12], which allows for an easier process of proving invariants.
In contrast to our development in [9], where we provide a collection of abstract lemmas that help in proving correct similar algorithms, the framework described in this paper is parameterized with hook functions that are invoked on well-defined extension points. This approach is less general, but greatly reduces the effort of instantiating it for new algorithms, as only the functions for the extension points have to be specified, while in [9] the whole algorithm has to be re-written.
4. Generic Framework
In its most well-known formulation, depth-first search is a very simple algorithm: For each node \( v_0 \) from a given set \( V_0 \) of start nodes, we invoke the function DFS. This function, if it has not seen the node yet, recursively invokes itself for each successor of the node.
discovered = {}
foreach v0 ∈ V0 do DFS v0

DFS w:
  if w ∉ discovered then
    discovered = discovered ∪ {w}
    foreach v ∈ E``{w} do DFS v

Note that we use \( E \) for the (fixed) set of edges of the graph and \( R``S \) for the image of the set \( S \) under the relation \( R \) (in particular, \( E``\{w\} \) denotes the set of successors of the node \( w \)).
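Transcribed directly into Python, this textbook formulation reads as follows (an illustrative sketch; representing \( E \) as an adjacency map is an assumption of the sketch):

```python
def reachable(succs, start_nodes):
    """Return the set of nodes reachable from start_nodes.

    succs maps a node w to its successors, i.e., the set E``{w}.
    """
    discovered = set()

    def dfs(w):
        if w not in discovered:
            discovered.add(w)
            for v in succs.get(w, ()):
                dfs(v)

    for v0 in start_nodes:
        dfs(v0)
    return discovered

print(reachable({1: [2, 3], 2: [1], 3: [4]}, [1]))  # {1, 2, 3, 4}
```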
In this simple form, the algorithm can only be used to create the set of reachable nodes, i.e., discovered. However, our aim, as laid out before, is to cover DFS based algorithms in general. Therefore we need to develop another view of the algorithm:
1. The algorithm above was given in a recursive form. For a correctness proof, we need to establish invariants for the two foreach-loops, and a pair of pre- and postcondition for the recursive call. This quite complex proof structure impedes the design of our framework. Thus, we use an iterative formulation of DFS that only consists of a single loop. Correctness proofs are done via a single loop invariant.
2. The simple algorithm above only computes a set of discovered nodes. However, in general, one wants to build up a DFS forest with cross and back edges and discovered and finished times.
3. To generalize over different DFS-based algorithms, we provide a skeleton DFS algorithm, which is parameterized by hook functions that are called from well-defined extension points, and modify an opaque extension state. Moreover, we add an additional break condition, which allows to interrupt the search prematurely, before all reachable nodes have been explored.
The skeleton algorithm is defined as follows:
DFS_step:
  if stack = [] then
    choose v0 from V0 − discovered
    new_root v0; on_new_root v0
  else
    (u, V) = get_pending
    case V of
      None ⇒ finish u; on_finish u
    | Some v ⇒
        if v ∉ discovered then
          discover u v; on_discover u v
        else if v ∈ set stack then
          back_edge u v; on_back_edge u v
        else
          cross_edge u v; on_cross_edge u v

cond s:
  ¬is_break ∧ (V0 ⊆ discovered ⟹ stack ≠ [])

DFS:
  init; on_init
  while cond do DFS_step
The step-function has five cases. In each case, we first perform a transformation on the base part of the state (e.g., finish), and then call the associated hook function (e.g., on_finish). Note that hook functions only modify the extension state. We now describe the cases in more detail: If the stack is empty, we choose a start node that has not yet been discovered (the condition guarantees that there is one). The new_root-function pushes this node on the stack and
marks it as discovered. Moreover, it declares all outgoing edges as pending.
If the stack is non-empty, the get_pending-function tries to select a pending edge starting at the node \( u \) on top of the stack. If there are no such edges left, the finish-function pops \( u \) off the stack. Otherwise, we have selected a pending edge \((u,v)\). If the node \( v \) has not yet been discovered, the discover-function marks it as discovered, pushes it on the stack, and declares all its outgoing edges as pending. Otherwise, we distinguish whether \( v \) is on the stack, in which case we have encountered a back edge, or not, in which case we have encountered a cross edge. The corresponding basic functions back_edge and cross_edge have no effect on the stack or the set of discovered nodes.
Note that we have not given an explicit definition of any basic function (e.g., finish, get_pending), but only stated behavioral requirements. Similarly, we have not described the exact content of the state, but merely expected it to contain a stack, a set of discovered nodes, and a set of pending edges. We will first initialize this generic algorithm with a very detailed state (cf. Section 7.1) and corresponding operations, and then refine it to more suitable states and operations, based on the requirements of the parameterization (cf. Section 7.1).
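To make the control flow concrete, the following Python sketch mimics the skeleton: hook functions are supplied in a dictionary, omitted hooks default to no-ops, and the extension state `ext` is an opaque value that only the hooks touch. This is an executable illustration of the pseudocode above, not the Isabelle formalization:

```python
def skeleton_dfs(succs, V0, hooks, ext):
    """Parameterized DFS skeleton; succs maps a node to its successors."""
    noop = lambda *args: None
    h = lambda name: hooks.get(name, noop)
    stack, discovered = [], set()   # stack entries: [node, pending successors]
    h("on_init")(ext)
    while not h("is_break")(ext) and (stack or not V0 <= discovered):
        if not stack:                                      # new root
            v0 = next(v for v in V0 if v not in discovered)
            discovered.add(v0)
            stack.append([v0, list(succs.get(v0, ()))])
            h("on_new_root")(v0, ext)
        else:
            u, pending = stack[-1]
            if not pending:                                # finish
                stack.pop()
                h("on_finish")(u, ext)
            else:
                v = pending.pop()
                if v not in discovered:                    # discover
                    discovered.add(v)
                    stack.append([v, list(succs.get(v, ()))])
                    h("on_discover")(u, v, ext)
                elif any(v == node for node, _ in stack):  # back edge
                    h("on_back_edge")(u, v, ext)
                else:                                      # cross edge
                    h("on_cross_edge")(u, v, ext)
    return ext
```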
We now describe two different show-cases on how to instantiate our framework to useful algorithms:
**Example 4.1.** A simple application of DFS is a cyclicity check, based on the fact that there is a back edge if and only if there is a reachable cycle. The state extension consists of a single flag \( cyc \), which signals that a back edge has been encountered, and causes the algorithm to terminate prematurely. The hooks are implemented as follows, where omitted ones default to skip:
on_init: cyc = False    (* initially, no cycle has been found *)
on_back_edge u v: cyc = True    (* back edge: cycle found! *)
is_break: cyc    (* stop the search once a cycle has been found *)
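On the Python sketch of the skeleton given above, this instantiation amounts to three small hook functions, with a one-entry dictionary as extension state (again purely illustrative):

```python
cyc_hooks = {
    "on_init":      lambda ext: ext.update(cyc=False),
    "is_break":     lambda ext: ext["cyc"],
    "on_back_edge": lambda u, v, ext: ext.update(cyc=True),
}

graph = {1: [2], 2: [3], 3: [1]}                          # a 3-cycle
print(skeleton_dfs(graph, {1}, cyc_hooks, {})["cyc"])     # True
print(skeleton_dfs({1: [2]}, {1}, cyc_hooks, {})["cyc"])  # False
```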
**Example 4.2.** Another important family of DFS based algorithms is nested depth-first search [3, 6, 16], which is used in model checkers to find acceptance cycles in Büchi-automata. A nested DFS algorithm consists of two phases, blue and red. The blue phase walks the graph to find accepting nodes. On backtracking from such a node it starts the red phase. This phase tries to find a cycle containing this accepting node; depending on the specific algorithm, it searches for a path to a node on the stack, or to the accepting node. In any case, the red phase does not enter nodes which were already discovered by another red search.
The idea behind red search is not a concept specific to nested DFS, but is of a more general nature: Find a non-empty path to a node with a certain property, possibly excluding a set of nodes. The latter set has to be closed (i.e., there must be no edges leaving it) and must not contain any node with the property in question. Using our DFS framework, we formalize this algorithm as find_path1_excl \( V_0 \) \( P \) \( X \) for a set of start nodes \( V_0 \), some property \( P \), and a set of nodes to exclude \( X \). It returns either a path to a node with property \( P \), or a new exclusion set \( X' \supseteq X \) that is also closed and does not contain a node with property \( P \). Note that we use \( E^+ \) for the transitive closure of \( E \).
For the following description of the nested DFS formalization, we assume find_path1_excl to be given. The extension to the state needed for nested DFS consists of two parts: The lasso (i.e., an accepting cycle plus a reaching path from a start node) and all the nodes visited by red searches. Therefore the obvious hooks are
on_init: lasso = None; red = {}
is_break: lasso ≠ None
The next hook to implement is on_finish, where the red phase (that is find_path1_excl) has to be run. We define the auxiliary function run_red_dfs as follows:
run_red_dfs u:
  case find_path1_excl {u} (λx. x ∈ set stack) red of
    Inl X' ⇒ (* no path, but a new exclusion set *)
      red = X'
  | Inr p ⇒ (* path found *)
      lasso = make_lasso p
The hook is then defined as on_finish \( u \) : if accepting \( u \) then run_red_dfs \( u \).
For more recent optimizations of nested DFS, like cycle detection on back edges [16], some other hooks have to be instantiated, too.
5. The Isabelle Refinement Framework
In order to formalize algorithms such as depth-first search, it is advantageous to start with an abstract description of the algorithmic idea, on which the correctness proof can be done in a concise way. The abstract description usually includes nondeterminism and is not executable.
For example, the get_pending-function in our skeleton algorithm (cf. Section 4) does not specify an order in which pending edges are selected, i.e., any pending edge may be chosen nondeterministically. Moreover, the set type used for the successors of a node has no counterpart in common programming languages, e.g., there is no set datatype in Standard ML.
Once the abstract algorithm is proved correct, it is refined towards a fully deterministic, executable version, possibly via multiple refinement steps. Each refinement step is done in a systematic way that guarantees preservation of correctness. For example, one can implement the graph by adjacency lists, and process the pending edges in list order.
The refinement approach simplifies the formalization by separating the correctness proof of the abstract algorithmic ideas from the correctness proof of the concrete implementation. Moreover, it allows to re-use the same abstract correctness proof with different implementations.
In Isabelle, this approach is supported by the Isabelle Refinement and Collections Frameworks [10, 11], and the Autoref tool [7]. Using ideas of refinement calculus [1], the Isabelle Refinement Framework provides a set of tools to concisely express nondeterministic programs, reason about their correctness, and refine them (in possibly many steps) towards efficient implementations. The Isabelle Collections Framework provides a library of verified efficient data structures for standard types such as sets and maps. Finally, the Autoref tool automates the refinement to efficient implementations, based on user-adjustable heuristics for selecting suitable data structures to implement the abstract types.
In the following, we describe the basics of the Isabelle Refinement Framework. The result of a (possibly nondeterministic) algorithm is described as a set of possible values, plus a special result \( \text{FAIL} \) that characterizes a failing assertion.
\[ \text{datatype 'a nres} = \text{RES 'a set} \mid \text{FAIL} \]
On results, we define an ordering by lifting the subset ordering, \( \text{FAIL} \) being the greatest element.
\[
\text{RES } X \le \text{RES } Y \iff X \subseteq Y \qquad\qquad m \le \text{FAIL} \qquad\qquad \neg(\text{FAIL} \le \text{RES } X)
\]
Note that this ordering forms a complete lattice, where \( \text{RES } \{\} \) is the bottom and \( \text{FAIL} \) is the top element. The intuitive meaning of \( m \leq m' \) is that all possible values of \( m \) are also possible for \( m' \); we say that \( m \) refines \( m' \). In order to describe that all values in \( m \) satisfy a condition \( \Phi \), we write \( m \leq \text{spec } x.\ \Phi\ x \) (or shorter: \( m \leq \text{spec } \Phi \)), where \( \text{spec } x.\ \Phi\ x \equiv \text{RES } \{x.\ \Phi\ x\} \).
**Example 5.1.** Let \( \text{cyc} \_\text{checker} \ E \ V0 \) be an algorithm that checks a graph over edges \( E \) and start nodes \( V0 \) for cyclicity. Its correctness is described by the following formula, that it should return \( \text{true} \) if and only if the graph contains a cycle reachable from \( V0 \), which is expressed by the predicate cyclic:
\[
\text{cyc}\_\text{checker} \ E \ V0 \leq \text{spec } r. \ r = \text{cyclic} \ E \ V0
\]
Now let \( \text{cyc} \_\text{checker_impl} \) be an efficient implementation\(^1\) of \( \text{cyc} \_\text{checker} \). For refinement, we have to prove:
\[
\text{cyc} \_\text{checker_impl} \ E \ V0 \leq \text{cyc} \_\text{checker} \ E \ V0.
\]
Note that, by transitivity, we also get that the implementation is correct:
\[
\text{cyc} \_\text{checker_impl} \ E \ V0 \leq \text{spec } r. \ r = \text{cyclic} \ E \ V0
\]
To express nondeterministic algorithms, the Isabelle Refinement Framework uses a monad over nondeterministic results. It is defined by
\[
\begin{align*}
\text{return } x &\equiv \text{RES } \{x\} \\
\text{bind FAIL } f &\equiv \text{FAIL} \\
\text{bind (RES } X)\ f &\equiv \text{Sup } \{f\ x \mid x \in X\}
\end{align*}
\]
Intuitively, \( \text{return } x \) returns the deterministic outcome \( x \), and \( \text{bind } m\ f \) is sequential composition, which describes the result of nondeterministically choosing a value from \( m \) and applying \( f \) to it. In this paper, we write \( x \leftarrow m;\ f\ x \) instead of \( \text{bind } m\ f \) to make program text more readable.
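The following Python sketch models this result type and its monad operations, with a frozenset standing in for RES and a distinguished token for FAIL; it illustrates the semantics only and is not Isabelle code:

```python
FAIL = "FAIL"                    # top element of the result ordering

def RES(values):
    return frozenset(values)

def ret(x):                      # return x = RES {x}
    return RES([x])

def bind(m, f):                  # sequential composition
    if m == FAIL:
        return FAIL
    results = [f(x) for x in m]
    if FAIL in results:          # the Sup of any set containing FAIL is FAIL
        return FAIL
    return frozenset().union(*results) if results else RES([])

def refines(m, m2):              # models m <= m2
    return m2 == FAIL or (m != FAIL and m <= m2)

# nondeterministically choose a value from {1, 2}, then double it
print(bind(RES([1, 2]), lambda x: ret(2 * x)))   # frozenset({2, 4})
print(refines(ret(2), RES([2, 4])))              # True
```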
Recursion is described by a least fixed point, i.e., a function \( F \) with recursion equation \( F\ x = B\ F\ x \) is described by \( \text{lfp } (\lambda F\ x.\ B\ F\ x) \). To increase readability, we write a recursive function definition simply as \( F\ x = B\ F\ x \). Based on recursion, the Isabelle Refinement Framework provides while and foreach loops. Note that we agree on a partial correctness semantics in this paper\(^2\), i.e., infinite executions do not contribute to the result of a recursion.
Another useful construct are assertions:
\[
\text{assert } \Phi \equiv \text{if } \Phi \text{ then return () else FAIL}
\]
An assertion generates an additional proof obligation when proving a program correct. However, when refining the program, the condition of the assertion can be assumed.
**Example 5.2.** The following program removes an arbitrary element from a non-empty set. It returns the element and the new set.
select s:
  assert s ≠ {};
  x ← spec x. x ∈ s;
  return (x, s − {x})
The assertion in the first line expresses the precondition that the set is not empty. If the set is empty, the result of the program is \( \text{FAIL} \). The second line nondeterministically selects an element from the set, and the last line assembles the result: A pair of the element and the new set.
Using the verification condition generator of the Isabelle Refinement Framework, it is straightforward to prove the following lemma, which states that the program refines the specification of the correct result:
\[
s \neq \{\} \implies \text{select } s \le \text{spec } (x, s').\ x \in s \land s' = s - \{x\}
\]
unfolding select_def by (refine_vcg) auto
Typically, a refinement also changes the representation of data, e.g., a set of successor nodes may be implemented by a list. Such a data refinement is described by a relation \( R \) between concrete and abstract values. We define a concretization function \( \Downarrow R \) that maps an abstract result to a concrete result:

\[
\Downarrow R\ \text{FAIL} \equiv \text{FAIL} \qquad\qquad \Downarrow R\ (\text{RES } X) \equiv \text{RES } \{c.\ \exists a \in X.\ (c, a) \in R\}
\]

Intuitively, \( \Downarrow R\ m \) contains all concrete values with an abstract counterpart in \( m \).
**Example 5.3.** A finite set can be implemented by a duplicate-free list of its elements. This is described by the following relation:
\[ \text{ls\_rel} \equiv \{(l, s).\ \text{set } l = s \land \text{distinct } l\} \]
The select-function from Example 5.2 can be implemented on lists as follows:
select' l:
  assert l ≠ [];
  x = hd l;
  return (x, tl l)
Again, it is straightforward to show that \( \text{select'} \) refines \( \text{select} \):
\[
(l, s) \in \text{ls\_rel} \implies \text{select' } l \le \Downarrow(\text{Id} \times \text{ls\_rel})\ (\text{select } s)
\]
unfolding select'_def select_def
by (refine_vcg) (auto simp: ls_rel_def neq_Nil_conv)
Intuitively, this lemma states that, given the list \( l \) is an implementation of set \( s \), the results of \( \text{select} \) and \( \text{select'} \) are related by \( \text{Id } \times \text{ls}_\text{rel} \), i.e., the first elements are equal, and the second elements are related by \( \text{ls}_\text{rel} \).
Note that the assertion in the abstract select-function is crucial for this proof to work: For the empty set, we have \( \text{select } \{ \} = \text{FAIL} \) and the statement holds trivially. Thus, we may assume that \( s \neq \{\} \), which implies \( l \neq [] \), which, in turn, is required to reason about \( \text{hd } l \) and \( tl \ l \). The handling of assertions is automated by the verification condition generator, which inserts the assumptions and discharges the trivial goals.
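In the Python model from the earlier sketch, this data refinement can be mimicked as follows; the helper names `select_set` and `select_list` are hypothetical, and the final membership check plays the role of the refinement statement:

```python
def select_set(s):               # abstract: any element may be chosen
    if not s:
        return FAIL              # failing assertion: precondition s != {}
    return RES([(x, frozenset(s - {x})) for x in s])

def select_list(l):              # concrete: deterministic, on a distinct list
    if not l:
        return FAIL
    return ret((l[0], tuple(l[1:])))

# (l, s) in ls_rel: l is duplicate-free and lists exactly the elements of s
l, s = [3, 1, 2], {1, 2, 3}
(x, rest), = select_list(l)      # the single concrete result
# its abstraction must be one of the possible abstract results
print((x, frozenset(rest)) in select_set(s))   # True: the list version
                                               # refines the set version
```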
6. Proof Architecture
Recall that we have phrased the DFS algorithm as a single loop of the form:
\[ \text{init; \ while \ cond do step} \]
Using the monad of the refinement framework, this is implemented by explicitly threading through the state, i.e.,

\[ s \leftarrow \text{init};\ \text{while } (\lambda s.\ \text{cond } s)\ (\lambda s.\ \text{step } s)\ s \]
For better readability, we introduce the convention to omit the state parameter whenever it is clear which state to use.
Properties of the DFS algorithm are shown by establishing invariants, i.e., predicates that hold for all reachable states of the DFS algorithm.
The standard way to establish an invariant is to generalize it to an inductive invariant, and show that it holds for the initial state, and is preserved by steps of the algorithm. When using this approach naively, we face two problems:
1. The invariant needed to prove an algorithm correct is typically quite complicated. Proving it in one go results in big proofs that tend to get unreadable and hard to maintain. Moreover, there are many basic invariants that are used for almost all DFS-based algorithms, and re-proving them for each new algorithm would duplicate effort.
2. Our refinement framework allows for failing assertions. If the algorithm may reach a failing assertion, we cannot establish any invariants. Thus we can only establish invariants of the base algorithm under the assumption that the hook functions do not fail. However, we would like to use invariants of the base algorithm to show that the whole algorithm is correct, in particular that the hook functions do not fail. Thus, we need a solution that allows us to establish invariants for the non-failing reachable states only, and a mechanism that later transfers these invariants to the actual algorithm.
In the following, we describe the proof architecture of our DFS framework, which solves the above problems. First, we define the operator \( \leq_s \) by \( m \leq_s m' \equiv m \neq \text{FAIL} \rightarrow m \leq m' \).
Thus, \( m \leq_s \text{spec} \Phi \) means, that \( m \) either fails or all its possible values satisfy \( \Phi \). With this, an inductive invariant of the non-failing part of the algorithm can be conveniently expressed as
\[
\text{is\_ind\_invar } P \equiv \text{init} \le_s \text{spec } P \;\land\; (\forall s.\ P\ s \land \text{cond } s \implies \text{step } s \le_s \text{spec } P)
\]
It is straightforward to show that there exists a most specific invariant \( I \), i.e., an inductive invariant that implies all other inductive invariants:
\[
\text{is\_ind\_invar } I \qquad\text{and}\qquad \llbracket \text{is\_ind\_invar } P;\ I\ s \rrbracket \implies P\ s
\]
In order to establish invariants of the algorithm, we show that they are inductive invariants when combined with \( I \). This leads to the following rule, which shows consequences of the most specific invariant:
**lemma establish_invar:**
assumes \( \text{init} \le_s \text{spec } P \)
assumes \( \bigwedge s.\ \llbracket \text{cond } s;\ I\ s;\ P\ s \rrbracket \implies \text{step } s \le_s \text{spec } P \)
shows \( I\ s \implies P\ s \)
When discharging the proof-obligation for a step, introduced by the second premise of this rule, we may assume \( I s \), and thus re-use invariants that we have already proved.
In order to use invariants to show properties of the algorithm, we use the fact that at the end of a loop, the invariant holds and the condition does not:
\[ (\text{init};\ \text{while cond do step}) \le_s \text{spec } s.\ I\ s \land \neg\text{cond } s \]
Finally, we use the following rule to show that the algorithm does not fail:
**lemma establish_nofail:**
assumes \( \text{init} \neq \text{FAIL} \)
assumes \( \bigwedge s.\ \llbracket \text{cond } s;\ I\ s \rrbracket \implies \text{step } s \neq \text{FAIL} \)
shows \( (\text{init};\ \text{while cond do step}) \neq \text{FAIL} \)
To simplify re-using and combining of already established invariants, we define a locale \( \text{DFS}_{\text{invar}} \), which fixes a state \( s \) and assumes that the most specific invariant holds on \( s \). Whenever we have established an invariant \( P \), we also prove \( P s \) inside this locale.
In a proof to establish an invariant, we may interpret the locale, to have the already established invariants available.
**Example 6.1.** In our parameterized DFS framework, we provide a version of establish_invar that splits over the different cases of step, and is focused on the hook functions:
**lemma establish_invar:**
assumes init: init \( \le_s \) spec x. P (empty_state x)
assumes new_root: ⋀v0 s s'. pre_on_new_root v0 s s' ⟹ on_new_root v0 s' \( \le_s \) spec x. P (s'[more := x])
assumes finish: ⋀u s s'. pre_on_finish u s s' ⟹ on_finish u s' \( \le_s \) spec x. P (s'[more := x])
assumes cross_edge: ⋀u v s s'. pre_on_cross_edge u v s s' ⟹ on_cross_edge u v s' \( \le_s \) spec x. P (s'[more := x])
assumes back_edge: ⋀u v s s'. pre_on_back_edge u v s s' ⟹ on_back_edge u v s' \( \le_s \) spec x. P (s'[more := x])
assumes discover: ⋀u v s s'. pre_on_discover u v s s' ⟹ on_discover u v s' \( \le_s \) spec x. P (s'[more := x])
shows is_invar P
Here, is_invar P states that \( P \) is an invariant, s'[more := x] is the state \( s' \) with the extension part updated to \( x \), and the pre_ predicates define the preconditions for the calls to the hook functions. For example, we have
\[
\begin{align*}
\text{pre\_on\_finish } u\ s\ s' \equiv\ & \text{DFS\_invar } s \land \text{cond } s \\
& \land\ \text{stack } s \neq [] \land u = \text{hd (stack } s) \\
& \land\ \text{pending } s\ ``\{u\} = \{\} \land s' = \text{finish } u\ s
\end{align*}
\]
That is, the invariant holds on state \( s \) and \( s \) has no more pending edges from the topmost node on the stack. The state \( s' \) emerged from \( s \) by executing the \text{finish}-operation on the base DFS state.
A typical proof of an invariant \( P \) has the following structure:
**lemma P_invar:** is_invar P
**proof** (induction rule: establish_invar)
  **case** (discover u v s s')
  then **interpret** DFS_invar s by simp
  **show** on_discover u v s' \( \le_s \) spec x. P (s'[more := x]) ⟨proof⟩
**next**
  ... (* the remaining cases, analogously *)
**qed**

**lemmas** (in DFS_invar) P = P_invar[THEN xfer_invar]
The proof of the first lemma illustrates how the proof language Isar is used to write down a human-readable proof. The different cases that we have to handle correspond to the assumptions of the lemma establish_invar. The **interpret** command makes available all definitions and facts from the locale DFS_{\text{invar}}, which can then be used to show the statement. The second lemma just transfers the invariant to the DFS_{\text{invar}} locale, in which the fact \( P \) is now available by the name \( P \).
Note that this Isar proof scheme is only suited for invariants with complex proofs. Simpler invariant proofs can often be stated on a single line. For example, finiteness of the discovered edges is proved as follows:
**lemma is_invar (\lambda s. finite (edges s))**
by (induction rule: establish_invar) auto
6.1 Library of Invariants
In the previous section we have described the proof architecture, which enables us to establish invariants of the depth-first search algorithm. In this section, we show how this architecture is put to use.
We define an abstract DFS algorithm, which is an instance of the generic algorithm presented in Section 4. Its state contains discovery and finished times of nodes, and a search forest with additional back and cross edges. In detail, the state consists of:
stack: the search stack
pending: the set of pending edges, i.e., the workset
\( \delta \): partial function mapping each node to its discovery time; \( \text{dom } \delta \) denotes the set of discovered nodes
\( \varphi \): partial function mapping each node to its finishing time; \( \text{dom } \varphi \) denotes the set of finished nodes
tree: the search tree, i.e., the edges leading to the discovery of new nodes
back_edges: the set of edges going back onto the stack
cross_edges: the set of edges going to an already finished node
time: the current time
The abstract basic operations (e.g., finish, get_pending) are defined accordingly, fulfilling the requirements of the generic framework.
Based on this, we provide a variety of invariants, which use the information in the state at different levels of detail. Note that these invariants do not depend on the extension part of the state, and thus can be proven independently of the hook functions, which only update the extension part. Further note that we present them as they occur in the locale DFS_invar, which fixes the state and assumes that the most specific invariant holds (cf. Section 6).
For the sets \( \text{dom } \delta \) of discovered and \( \text{dom } \varphi \) of finished nodes, we prove, among others, the following properties:
**Lemma** stack_set_def: \( \text{set stack} = \text{dom } \delta - \text{dom } \varphi \)
**Lemma** finished_closed: \( E``\text{dom } \varphi \subseteq \text{dom } \delta \)
**Lemma** nc_finished_eq_reachable: \( \llbracket \neg\text{cond};\ \neg\text{is\_break} \rrbracket \implies \text{dom } \varphi = E^*``V_0 \)
The first lemma states that the nodes on the stack are exactly those that have already been discovered, but not yet finished. The second lemma states that edges from finished nodes lead to discovered nodes, and the third lemma states that the finished nodes are exactly the nodes reachable from \( \text{V0} \) when the algorithm terminates without being interrupted.
We also prove more sophisticated properties found in standard textbooks (e.g., [2] pp. 606–608), like the Parenthesis Theorem (the discovered/finished intervals of two nodes are either disjoint or the one is contained in the other, but there is no overlap) or the White-Path Theorem (a node \( w \) is reachable in the search tree from a node \( v \) iff there is a white path from \( v \) to \( w \), i.e., a path on which no node has been discovered yet when \( v \) is discovered).
**Lemma** parenthesis:
assumes \( v \in \text{dom } \varphi \) and \( w \in \text{dom } \varphi \)
and \( \delta\ v < \delta\ w \)
shows \( \varphi\ v < \delta\ w \lor \varphi\ w < \varphi\ v \)
(i.e., the discovery/finish intervals of \( v \) and \( w \) are either disjoint or nested)
**Lemma** white_path:
assumes \( v \in E^*``V_0 \) and \( w \in E^*``V_0 \)
and \( \neg\text{cond} \) and \( \neg\text{is\_break} \)
shows \( \text{white\_path } v\ w \iff (v, w) \in \text{tree}^* \)
The Parenthesis Theorem is important to reason about paths in the search tree, as it allows us to gain insights just by looking at the timestamps:
**Lemma** tree_path_iff_parenthesis:
assumes \( v \in \text{dom } \varphi \) and \( w \in \text{dom } \varphi \)
shows \( (v, w) \in \text{tree}^+ \iff \delta\ v < \delta\ w \land \varphi\ w < \varphi\ v \)
From the location of two nodes in the search tree, we can deduce several properties of those nodes (e.g., the \( \to \) direction of \( \text{tree_path_iff_parenthesis} \)). This can be used, for example, to show properties of back edges, as
**Lemma** back_edge_impl_tree_path:
\( \llbracket (v, w) \in \text{back\_edges};\ v \neq w \rrbracket \implies (w, v) \in \text{tree}^+ \)
It can also be used to establish invariants about the root of a strongly connected component, i.e., the node of an SCC with the highest position in the tree, because
**Lemma** scc_root_scc_tree_trancl:
\( \llbracket \text{scc\_root } v\ scc;\ x \in scc;\ x \in \text{dom } \delta;\ x \neq v \rrbracket \implies (v, x) \in \text{tree}^+ \)
Utilizing the knowledge about the search tree, we can then show that a node of an SCC is its root iff it has the minimum discovery time of the SCC. This is an important fact, for example in the proof of Tarjan’s SCC Algorithm.
**Example 6.2.** The idea of cycles in the set of reachable edges is independent of any DFS instantiation. Therefore we can provide invariants about the (a)cyclicity of those edges in the general library, the most important one linking acyclicity to the existence of back edges:
**Lemma** cycle_iff_back_edges:
\( \text{acyclic edges } \iff \text{back_edges } = \{ \} \)
Here, edges is the union of all tree, cross, and back edges.
The \( \to \) direction follows as an obvious corollary of the lemma \text{back_edge_impl_tree_path} shown above. The \( \leftarrow \) direction follows from the fact that \( \text{acyclic } (\text{tree } \cup \text{cross_edges}) \), the proof of which uses the Parenthesis Theorem.
Moreover, we need the fact that at the end of the search, edges is the set of all reachable edges:
**Lemma** nc_edges_covered:
assumes \( \neg\text{cond } s \) and \( \neg\text{is\_break } s \)
shows \( E \cap (E^*``V_0) \times \text{UNIV} = \text{edges } s \)
With those facts from the library, we recall the definition of the cyclicity checker in our framework as presented in Example 4.1. Let cycc be that instantiation.
As the \text{cyc} flag is set when a back edge is encountered, the following invariant is easily proved:
**Lemma** i_cyc_eq_back:
\( \text{is\_invar } (\lambda s.\ \text{cyc } s \iff \text{back\_edges } s \neq \{\}) \)
apply (induction rule: establish_invar)
apply (simp_all add: cond_def cong: cyc_more_cong)
apply (simp_all add: empty_state_def)
done
This happens to be the only invariant that needs to be shown for the correctness proof. Using the invariants mentioned above, we easily get the following lemma inside the locale DFS_invar, i.e., under the assumption \( I\ s \):
**Lemma** (in DFS_invar) cyc_correct_aux:
assumes \( \neg\text{cond } s \)
shows \( \text{cyc } s \iff \neg\text{acyclic } (E \cap (E^*``V_0) \times \text{UNIV}) \)
Intuitively, this lemma states that the \text{cyc} flag is equivalent to the existence of a reachable cycle upon termination of the algorithm. Finally, we gain the correctness lemma of the cyclicity checker as an easy consequence:
\[ \text{cycc } E\ V_0 \le \text{spec } s.\ \text{cyc } s \iff \neg\text{acyclic } (E \cap (E^*``V_0) \times \text{UNIV}) \]
7. Refinement
In Section 6 we have described the abstract DFS framework. We phrased the algorithm as a step-function on a state that contained detailed information. In order to implement an actual DFS-algorithm, most of this information is typically not required, and the required information should be represented by efficient data structures. Moreover, we want to choose between different implementation styles, like recursive or tail-recursive.
For this, our framework splits the refinement process into three phases: In the projection phase, we get rid of unnecessary information in the state. In the structural refinement phase, we choose the implementation style. Finally, in the code generation phase, we choose efficient data structures to represent the state, and extract executable code from the formalization.
Although the refinements are applied sequentially, our design keeps the phases as separated as possible, to avoid a cubic blowup of the formalization in the number of different states, implementation styles and efficient data structures.
7.1 Projection
To get a version of the algorithm over a state that only contains the necessary information, we use data refinement: We define a relation between the original abstract state and the reduced concrete state, as well as the basic operations on the concrete state. Then, we show that the operations on the concrete state refine their abstract counterparts. Using the refinement calculus provided by the Isabelle Refinement Framework, we easily lift this result to show that the concrete algorithm refines the abstract one.
In order to be modular w.r.t. the hook operations, we provide a set of standard implementations together with their refinement proofs, assuming that we have a valid refinement for the hooks. As for the abstract state, we also use extensible records for the concrete state. Thus, we obtain templates for concrete implementations, which are instantiated with a concrete data structure for the extension part of the state, a set of concrete hook operations, and refinement theorems for them.
Example 7.1. For many applications, such as the cyclicity checker from Example 6.1 it suffices to keep track of the stack, the pending edges, and the set of discovered nodes. We define a state type
```
record 'v simple_state =
  stack    :: "('v × 'v set) list"
  on_stack :: "'v set"
  visited  :: "'v set"
```
and a corresponding refinement relation
\[ \begin{align*}
(s, s') \in R_X \equiv{} & \text{stack } s' = \text{map } (\lambda u.\ (u, \text{pending } s \mathbin{``} \{u\}))\ (\text{stack } s) \;\wedge \\
& \text{on\_stack } s' = \text{set } (\text{stack } s) \;\wedge\; \text{visited } s' = \text{dom } (\delta\ s) \;\wedge \\
& (\text{more } s', \text{more } s) \in X
\end{align*} \]
For the cyclicity checker, we define the concrete state by extending the simple state record:
```
record 'v cycc_state_impl = 'v simple_state +
  cyc :: bool
```
The extension state will be refined by identity, i.e., the refinement relation for the concrete state is \( R_{\mathrm{Id}} \). We also define a set of concrete hook operations (which look exactly like their abstract counterparts):
```
on_init_impl:           cyc := False
is_break_impl:          cyc
on_back_edge_impl u v:  cyc := True
```
It is trivial to show that these refine their abstract counterparts w.r.t. \( R_{\mathrm{Id}} \). Once this is done, the DFS framework gives us a cyclicity checker over the concrete state, and a refinement theorem:
```
cycc_impl ≤ ⇓R_Id cycc
```
7.2 Structural Refinement
Another aspect of refinement is the structure of the algorithm. Up to now, we have represented the algorithm as a while-loop over a step-function. This representation greatly simplifies the proof architecture, however, it is not how one would implement a concrete DFS algorithm. We provide two standard implementations: A tail-recursive one and a recursive one. The tail-recursive implementation uses only while and foreach loops, maintaining the stack explicitly, while the recursive implementation uses a recursive function and requires no explicit stack.
We are interested in making the structural refinement of the algorithm independent of the projection, such that we can combine different structural refinements with different projections, without doing a quadratic number of refinement proofs. For this purpose we formalize the structural refinements in the generic setting (cf. Section 4) first. Depending on the desired structure, we have to add some minimal assumptions on the state and generic operations, as will be detailed below. The resulting generic algorithms are then instantiated by the concrete state and operations from the projection phase, thereby discharging the additional assumptions.
For each basic operation on the concrete state, we prove a refinement lemma, for example:
```
lemma discover_refine:
  assumes (s, s') ∈ R_X
  shows discover' u v s' ≤ ⇓R_X (discover u v s)
```
Assuming refinement of all hook operations, we get refinement of the abstract algorithm:
```
lemma refine:
  assumes on_init' ≤ ⇓R_X on_init
  and ⋀v0 s s'. ⟦ pre_on_new_root v0 s; (s, s') ∈ R_X ⟧
        ⟹ on_new_root' v0 s' ≤ ⇓R_X (on_new_root v0 s)
  and ...
  shows DFS' ≤ ⇓R_X DFS
```
where DFS' is the DFS-algorithm over the concrete operations. We also provide further implementations, which both require the hooks for back and cross edges to have no effect on the state. Thus the corresponding cases can be collapsed and there is no need to implement the on_stack set. As an additional optimization we pre-initialize the set of visited nodes to simulate a search with some nodes excluded. As an example, this is used in the inner DFS of the nested DFS algorithm (cf. Example 4.2).
The following listing depicts the pseudocode for the tail-recursive implementation, using the basic DFS operations:
```
tailrec_DFS:
  init; on_init
  foreach v0 in V0 do
    if is_break then break
    if not discovered v0 then
      new_root v0; on_new_root v0
      while (stack ≠ [] ∧ ¬is_break) do
        (u, V) = get_pending
        case V of
          None ⇒ finish u; on_finish u
          Some v ⇒ {
            if discovered v then
              if finished v then
                cross_edge u v; on_cross_edge u v
              else
                back_edge u v; on_back_edge u v
            else
              discover u v; on_discover u v
          }
      end
  end
```
This implementation iterates over all root nodes. For each root node, it calls new_root and then executes steps of the original algorithm until the stack is empty again. Note that we effectively replace the arbitrary choice of the next root node by the outer foreach-loop. In order for this implementation to be a refinement of the original generic algorithm, we have to assume that 1) the stack is initially empty, such that we can start with choosing a root node, and 2) the same root node cannot be chosen twice, so that we are actually finished when we have iterated over all root nodes. In order to ensure 2), we assume that new_root sets the node to discovered, and no operation can decrease the set of discovered nodes.
With these assumptions, we can use the infrastructure of the Isabelle Refinement Framework to show that the algorithm tailrec_DFS refines the original DFS.
The next listing depicts the pseudocode for the recursive implementation:
```
recursive_DFS:
  init; on_init
  foreach v0 in V0 do
    if is_break then break
    if not discovered v0 then
      new_root v0; on_new_root v0
      inner_dfs v0
  end

inner_dfs u:
  foreach v in E``{u} do {
    if is_break then break
    choose_pending u (Some v)
    if discovered v then
      if finished v then
        cross_edge u v; on_cross_edge u v
      else
        back_edge u v; on_back_edge u v
    else
      discover u v; on_discover u v
      if ¬is_break then inner_dfs v
  }
  choose_pending u None;
  finish u; on_finish u
```
As in the tail-recursive implementation, we iterate over all root nodes. For each root node, we call the recursive function inner_dfs. Intuitively, this function handles a newly discovered node: It iterates over its successors, and for each successor, it decides whether it induces a cross or back edge, or leads to a newly discovered node. In the latter case, inner_dfs is called recursively on this newly discovered node. Finally, if all successor nodes have been processed, the node is finished.
Intuitively, this implementation replaces the explicit stack of the original algorithm by recursion.
Apart from assumptions 1) and 2) from tailrec_DFS, we need some additional assumptions to show that this implementation refines the original algorithm: 3) The operation new_root initializes the stack to contain only v0, and the pending edges to all outgoing edges of v0; the operation discover u v pushes v on the stack and adds its outgoing edges to the set of pending edges; the finish operation pops the topmost node from the stack. 4) The get_pending operation of the original algorithm must have the form of selecting a pending edge from the top of the stack, if any, and then calling the operation choose_pending for this edge, where choose_pending removes the edge from the set of pending edges.
With these assumptions we show that recursive_DFS refines the original DFS algorithm. Note that the refinement proof requires the state to contain a stack, which is however not used by the recursive algorithm. Provided that the parameterization does not require a stack either, we can add an additional data refinement step to remove the stack. For convenience, we combine this step with the automatic refinement to efficient data structures, which is described below.
Note that these assumptions are natural for any set of operations on a DFS state. The advantage of this formulation is its independence from the actual operations. Thus, the same formalization can be used to derive implementations for all states and corresponding operations, which reduces redundancies, and even makes proofs more tractable, as it abstracts from the details of a concrete data structure to its essential properties.
Example 7.2. Recall the simple state from Example 7.1. The simple implementation satisfies all assumptions required for the tail-recursive and the recursive implementation, independent of the parameterization. Thus, upon refining an algorithm to simple_state, we automatically get a tail-recursive and a recursive implementation, together with their refinement theorems. In the case of the cyclicity checker, we get:
```
cycc_tr_impl ≤ ⇓R_Id cycc
```
and
```
cycc_rec_impl ≤ ⇓R_Id cycc
```
7.3 Code Generation
After projection and structural refinement have been done, the algorithm is still described in terms of quite abstract data structures like sets and lists. In a last refinement step, these are refined to efficiently executable data structures, like hash-tables and array-lists. To this end, the Isabelle Collections Framework [10] provides a large library of efficient data structures and generic algorithms, and the Autoref-tool [7] provides a mechanism to automatically synthesize an efficient implementation and a refinement theorem, guided by user-configurable heuristics.
Note that we do this last refinement step only after we have fully instantiated the DFS-scheme. This has the advantage that we can choose the most adequate data structures for the actual algorithm. The fact that the refinements for the basic DFS operations are performed redundantly for each actual algorithm does not result in larger formalizations, as it is done automatically.
Example 7.3. In order to generate an executable cyclicity checker, we start with the constant cycc_tr_impl, which is the tail-recursive version of the cyclicity checker using the simple state (cf. Example 7.2). The state consists of a stack, an on-stack set, a visited set, and the cyc flag. Based on this, we define the cyclicity checker by:
```
cyc_checker E V0:
  s = cycc_tr_impl E V0;
  return (cyc s)
```
To generate executable code, we first have to write a few lines of canonical boilerplate to set up Autoref to work with the extension state of the cyclicity checker. The executable version of the algorithm is then synthesized by the following Isabelle commands:
```isar
schematic_lemma cycc_impl:
  fixes V0 :: "('v::hashable) set" and E :: "('v × 'v) set"
  defines "V ≡ Id :: ('v × 'v) set"
  assumes [unfolded V_def, autoref_rules]:
    "(succi, E) ∈ ⟨V⟩slg_rel"
    "(V0i, V0) ∈ ⟨V⟩list_set_rel"
  notes [unfolded V_def, autoref_tyrel] =
    TYRELI[where R = "⟨V⟩dflt_ahs_rel"]
    TYRELI[where R = "⟨V ×ᵣ ⟨V⟩list_set_rel⟩ras_rel"]
  shows "nres_of (?c::?'c dres) ≤ ⇓?R (cyc_checker E V0)"
  unfolding cycc_tr_impl_def[abs_def] cyc_checker_def
  by autoref_monadic

(* define a constant for the synthesized algorithm *)
concrete_definition cyc_checker_impl uses cycc_impl
```
The first command uses the Autoref-tool to synthesize a refinement. The `fixes` line declares the types of the abstract parameters, restricting the node type to the `hashable` typeclass. The next line defines a shortcut for the implementation relation for nodes, which is fixed to identity here. The assumptions declare the refinement of the abstract to the concrete parameters: The edges are implemented by a successor function, using the relator `slg_rel`, which is provided by the CAVA automata library. The set of start nodes is implemented by a duplication-free list, using the relator `list_set_rel` from the Isabelle Collections Framework, which is roughly the same as `ls_rel` from Example 5.3.
Finally, the `notes`-part gives some hints to the heuristics: The first hint causes sets of nodes to be implemented by hash-tables. This hint matches the on-stack and visited fields of the state. The second hint matches the stack field, and causes it to be implemented by an array-list, where the sets of pending nodes are implemented by duplication-free lists of their elements. Again, the required data types and their relators `dflt_ahs_rel` and `ras_rel` are provided by the Isabelle Collections Framework.
Ultimately, the `autoref_monadic` method generates a refinement theorem of the shape indicated by the `shows`-part, where `?c` is replaced by the concrete algorithm, and `?R` is replaced by the refinement relation for the result. The second command defines a new constant for the synthesized algorithm, and also provides a refinement theorem with the constant folded. As the generated algorithm only uses executable data structures, the code generator of Isabelle/HOL can be used to generate efficient Standard ML code.
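For instance, the final export step is a single command (a sketch; the constant name cyc_checker_impl is the one assumed in the listing above):

```isar
export_code cyc_checker_impl in SML module_name Cycc
```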
8. Conclusion and Future Work
In this paper, we have presented a framework that supports a stepwise refinement development approach of DFS-based algorithms. On the abstract level, we have a generic formalization of DFS, which is parameterized by hook functions that operate on an opaque extension state. Properties of the algorithm are proved via invariants.
To establish new invariants, only their preservation by the hook functions has to be shown. Moreover, invariants can be established incrementally, i.e., already proven invariants can be used when establishing new ones. To this end, our framework provides a large library of parameterization-independent standard invariants, which greatly simplify the correctness proofs of actual instantiations of the framework. For example, the cyclicity checker (cf. Example 4.1) required only one additional invariant with a straightforward 3-line proof.
Furthermore, the framework allows us to refine both data structures and algorithm structure, where the latter is (in general) independent of the actual instantiation. The data refinement, as shown in this paper, is the prerequisite for the aforementioned library of invariants, as it allows us to project a detailed abstract state to a small concrete state. This way, it is possible to have proof-supporting information without the necessity to actually gather it at runtime.
The framework supports various default concrete states. Using them only requires a refinement proof of the hook-functions.
To show the usability of the presented framework, we have formalized several examples from easy (Cyclicity Checker) to more advanced (Tarjan’s SCC algorithm). In this paper, we presented the development of the Cyclicity Checker and the approach for formalizing a nested DFS algorithm.
The main contribution of this paper is the DFS-framework itself, and its design approach, which is not limited to DFS algorithms. The first general design principle is the technique of incrementally establishing invariants, which allows us to provide a standard library of invariants, which are independent of the actual instantiation. In Isabelle/HOL, this technique is elegantly implemented via locales.
The second general principle is to provide an algorithm over a detailed state at the abstract level, and then use refinement to project away the unused parts of the state for the implementation. This allows us to have a common abstract base for all instantiations.
Finally, we provide different implementation styles for the same algorithm, in a way that is independent of the concrete data structures, only making some basic assumptions. This allows us to decouple the data refinement and the structural refinement.
8.1 Future Work
An interesting direction of future work is to extend the framework to more general classes of algorithms. For example, when dropping the restriction that pending edges need to come from the top of the stack, one gets a general class of search algorithms, also including breadth-first search, and best-first search.
Currently, our framework only supports an invariant-based proof style. However, in many textbooks, proofs about DFS algorithms are presented by arguing over the already completed search forest. This proof style can be integrated in our framework by (conceptually) splitting the DFS algorithm into two phases: The first phase creates the DFS forest, only using the base state, while the second phase recurses over the created forest and executes the hook functions. It remains future work to elaborate this approach and explore whether it results in more elegant proofs.
Analyzing Permission Transfer Channels for Dynamically Typed Languages
Théo Rogliano
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
theo.rogliano@inria.fr
Guillermo Polito
Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
guillermo.polito@univ-lille.fr
Luc Fabresse
IMT Lille Douai, Institut Mines-Télécom, Univ. Lille, Centre for Digital Systems, F-59000 Lille, France
luc.fabresse@imt-lille-douai.fr
Stéphane Ducasse
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
stephane.ducasse@inria.fr
Abstract
Communicating Sequential Process (CSP) is nowadays a popular concurrency model in which threads/processes communicate by exchanging data through channels. Channels help in orchestrating concurrent processes but do not solve per-se data races. To prevent data races in the channel model, many programming languages rely on type systems to express ownership and behavioural restrictions such as immutability. However, dynamically-typed languages require run-time mechanisms because of the lack of type information at compile-time.
In this paper, we propose to augment channels with four different permission transfer semantics. We explore two mechanisms to implement such permission transfers at run-time: write barriers and partial-read barriers. To validate our approach we implemented a channel framework in Pharo, and we extended it with different permission transfer semantics. We report on performance measurements of both (a) the transfer overhead on a single object and on a graph of objects, and (b) the per-object access overhead incurred by ownership checks. This work stands as a cornerstone of future work on adaptive optimizations for permission transfer channels.
Keywords: Concurrency, Channels, Ownership, Permission Transfer, Dynamic Language.
1 Introduction
Communicating Sequential Process (CSP) is nowadays a popular concurrency model in which threads/processes communicate by exchanging data through channels [14]. Channels help in orchestrating concurrent processes but do not solve per-se data races [10]. A data race is a non-deterministic access by at least two processes¹ to the same memory location or data, where at least one process is modifying the content of this data. Those situations lead to incorrect values being processed. To avoid data races, the data needs to be accessible by a unique process during a write operation (See Section 2).
To prevent data races in the channel model, several programming languages implement an ownership transfer model where an object has a unique owner at any point in time. In this model, the owner ensures that operations are synchronised to avoid concurrent accesses and ownership-transfer happens when an object is sent through a channel. Most of the existing channel implementations rely on object copies to express ownership on separated memory inspired by the Go language [16, 23, 26, 29, 32]. Channel implementations on shared memory involve type systems to express ownership and behavioural restrictions such as immutability [20, 27, 28, 30], hence they are not suitable for dynamically-typed languages. Instead, dynamically-typed languages require run-time mechanisms because there is not much information available at compile time.
In this paper, we analyze channel-based permission transfers for dynamically-typed languages. Based on our analysis of existing work, we identify and refine four different permission transfer semantics: copy value, full-permissions transfer, exclusive-write permission transfer and read-only permission transfer. We argue that our classification in only four different semantics captures all mechanisms that we encounter in analyzed languages and related work. We leave outside the scope of this paper how such semantics combine.
We evaluate these semantics by implementing a channel framework in Pharo (See Section 3). We extended this framework with our different permission transfer semantics (See Section 4). Then, we report on our experiments using different mechanisms to implement such permission transfers at run-time (See Section 5).
¹ In this paper, we use the term process to designate a concurrent execution, be it a full process or a lighter one (a.k.a. a thread).
The contributions of this paper are:
- An analysis of existing permission transfer semantics in concurrent scenarios.
- A categorization of permission transfer semantics into four families, eliminating redundant and inconsistent semantics.
- A framework to experiment with permission transfer semantics.
- An evaluation of the implementation of each of the identified families.
2 Problem: Efficient and Correct Object Graph Transfer in Dynamically-Typed Languages
2.1 Context: Pharo’s Concurrency Model
The Pharo programming language implements concurrency with so-called processes: lightweight green threads scheduled by the virtual machine. The process scheduler schedules processes according to their priority. Processes are cooperative within the same priority and preemptive across different priorities. That is, a process can *yield* to give way to another process of the same priority, and a process is suspended as soon as a higher-priority process is ready [8]. Process switches happen on a timely basis, but only at safe execution points: message sends and backward jumps.
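As a minimal illustration of this model (forkAt:, yield and the priority constants are standard Pharo APIs; the block bodies are placeholders):

``` Smalltalk
| background |
"Fork a process at a lower priority; it runs only when
higher-priority processes are suspended or waiting."
background := [ Transcript show: 'background work'; cr ]
	forkAt: Processor userBackgroundPriority.
"Cooperatively give way to ready processes of the same priority."
Processor yield
```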
2.2 Object Graph Transfer by Example
To introduce the problems of object graph transfer, let’s consider the example illustrated in Figure 1. The example presents two processes and many objects shared between them. In this example, one process has a reference to the Alice object, and the other process a reference to the Bob object. Alice has a car, which contains a disc, a key and some gas, and Bob has no reference to it: Bob cannot read, write or send messages to any of these objects.
If at some point during execution Bob needs to use the car we need to send a reference to the car from Alice to Bob, for example, by executing `bob car: alice car`, leading to the situation in Figure 2. As soon as Bob has a reference to the car, he obtains complete access to it *i.e.*, reading, writing, and sending messages to it and all objects reachable from the car.
Handling how objects are shared in a concurrent environment needs special attention. Such a model, in which object sharing relies on just sharing references, *i.e.*, an unrestricted sharing policy, introduces potential data-races. Indeed, if both Alice and Bob have regular references to the car, both may access and modify the entire object graph at the same time producing conflicting side effects.
Even if we take care of revoking Alice’s reference to the car (*e.g.*, nilling it), there is still a possibility of data-races when there are shared objects, as it happens in the example with the key object which is directly referenced by Alice and also reachable by Bob from the car. Likewise, if the key has a reference to the car, the car is still reachable by Alice through the key.
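The race can be made concrete with a small sketch; Car and its gas: accessor are hypothetical stand-ins for the objects of Figure 1:

``` Smalltalk
| car |
car := Car new gas: 10.
"Alice's process and Bob's process both update the shared car."
[ car gas: car gas - 5 ] fork.
[ car gas: car gas - 3 ] fork.
"Depending on the interleaving of the reads and writes,
the final value of 'car gas' may be 2, 5 or 7."
```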
2.3 Challenges of Object Graph Transfer
From the example above, we observe that sharing objects in a concurrent environment presents the following challenges:
**Permission Transfer.** Unrestricted object reference transfers provide full permissions to the referee on the referred object. To solve this problem we need to control the permissions on shared objects and how those permissions are granted and revoked. As shown in the example above, references give different types of permissions such as read, write, and execution (in the form of message sends). In addition, we need to define a permission model that allows a proper scoping of the sharing.
**Object Graph Delimitation.** Objects do not exist in isolation but in complex object graphs. When sharing an object, implicit access to its reachable object graph is granted too. We need to control how objects shared between the different graphs behave and how permissions are granted and revoked on an entire object graph. In our example, it would be desirable to grant Bob access permissions to the car without access permissions to the key.
In other words, we need a sharing model preventing shared objects by construction or a model in which we can delimit within an object graph how access is transferred. These models may be left as pure developer responsibility or provide (semi-)automatic ways to do such a delimitation.
**Permission Check.** Transferring object (and graph) permissions may incur serious performance overheads either when the permissions are transferred or when the objects are accessed. For example, solutions that copy the object graph pay the cost of allocating and copy memory at the moment of the transfer. Solutions using instrumentation to check object access will have an impact on overall performance. An optimal solution will minimize both data transfer overhead and data access overheads.
3 Canal: An Extensible Channel Framework
In this section, we present an overview of our channel framework to experiment with different permission transfer semantics. We decided to use channels as a permission transfer mechanism because they allow a clear delimitation between the sender and the receiver processes while making object sharing explicit. Processes that receive objects from a channel gain some permissions on those objects and processes that sent them may lose some permissions on them. We also describe our per-process ownership model to control write permission on shared objects and thus prevent data races.
3.1 Extensible Channels and Hooks
Figure 3 depicts the general view of using a channel to transfer an object (the car object on the figure) between two processes. We distinguish two kind of roles a process can take regarding a channel: the sender process and the receiver process. A sender process sends object references into a channel when it does not use this object anymore or wants to share it with other processes. A receiver process acquires references to objects to process them by receiving a reference from a channel.
**Channel overview.** A channel is a shared data structure that allows processes to exchange references. Our channels are first-class objects. Channels are unidirectional and can be shared between multiple senders and multiple receivers. Channels live in shared memory: any process holding a reference to a channel is able to use it. Channel is the base class of the channel hierarchy. To guarantee atomicity, channels are implemented on top of thread-safe atomic FIFO queues that can transfer any type of object. The public API is minimalistic, with only the new, send: and receive messages. The send: and receive messages are the ones responsible for the permission transfer, and are the hooks used to define tailored channel subclasses. The API is composed of three main messages:
**Channel creation.** Creating a channel consists only in sending the new message to a specific Channel subclass. It is an extension point for specific initialisations.
**Channel send.** Sending an object consists in sending the message send: with the desired object as argument. To define channels with specific semantics, the send: message is redefined. The send operation is non-blocking: first, specific policies are applied to the object, such as revoking the write permission; then the object is enqueued in the channel.
**Channel receive.** Receiving an object from a channel consists in sending the receive message to the channel. This message blocks the receiver process when the channel queue is empty. We chose blocking semantics because a process sending receive expects an object to be returned; returning nil or an unexpected object would only defer the cause of bugs. When an object is dequeued, its permissions are updated according to the receiver process.
3.2 Channel Transfer By Example
The Channel abstract class is the base class of our framework. It implements the exchange of object references using a unique atomic thread-safe queue. Listing 1 shows the Pharo code of the send: and receive methods of this class. These methods need to be redefined to add the permission transfer.
``` Smalltalk
Channel >> send: anObject
	queue nextPut: anObject

Channel >> receive
	| result keepWaiting |
	[ keepWaiting := false.
	  self isClosed
		  ifTrue: [ ChannelClosedException signal ].
	  result := queue nextIfNone: [ keepWaiting := true ].
	  keepWaiting ] whileTrue: [ queue waitForNewItems ].
	↑ result
```
**Listing 1.** Definition of send: and receive methods of the Channel base class.
Listing 2 shows a Ping Pong example, where two processes exchange a ping and a pong object through two channels. The sender process first creates a channel to send a Ping object (line 1) and another channel to receive a Pong object (line 2). A receiver process is created using the fork message (line 6) sent to a block (a lexical closure, syntactically delimited by square brackets). This receiver process waits until it receives an object from the channel (line 5) and then sends a Pong object into the other channel (line 6). The sender process sends a Ping object (line 8) and then waits until it receives the Pong object (line 10).
``` Smalltalk
1  pingChannel := ExampleChannel new.
2  pongChannel := ExampleChannel new.
3  "receiver process"
4
5  [ objectReceived := pingChannel receive.
6    pongChannel send: Pong new ] fork.
7
8  pingChannel send: Ping new.
9
10 pongChannel receive.
```
**Listing 2.** Usage Example of a Channel.
This example uses a Channel subclass that does not redefine the send: and receive methods, but it would remain mostly unchanged with more specialized channels. In the following section, we extend this minimal model to build specific channels by subclassing the Channel class. By carefully choosing specialized channels, the developer prevents data races on the transferred objects.
3.3 Per-Process Ownership Model
To avoid data races, concurrency models typically impose a unique writer process at any time for a single object [10]. Ownership models, using message passing, achieve this by attaching a unique owner to all objects. These models may be too restrictive, because they prevent non-owner processes from accessing an object.
In our framework, each object has a unique owner process, stored in its instance variable named owner. An object's owner is the only process that has the write permission on this object. Initially, the process that creates an object is its owner. Changing the ownership of an object only requires assigning another process to its owner instance variable. An attempt to write to an object from a process that does not own it results in an error. Our ownership model allows multiple read-only references to an object while still guaranteeing the uniqueness of the writer. The process scheduler of Pharo ensures that a read operation does not happen during a write operation.
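A minimal sketch of this ownership discipline is shown below; OwnedObject and checkWrite are hypothetical names, and the actual framework enforces the check through Pharo's per-object write barrier rather than explicit calls (see Section 4):

``` Smalltalk
Object subclass: #OwnedObject
	instanceVariableNames: 'owner'
	classVariableNames: ''
	package: 'Canal-Sketch'.

OwnedObject >> initialize
	super initialize.
	"the creating process is the initial owner"
	owner := Processor activeProcess

OwnedObject >> owner
	↑ owner

OwnedObject >> owner: aProcessOrNil
	owner := aProcessOrNil

OwnedObject >> checkWrite
	"only the owner may mutate the object"
	owner = Processor activeProcess
		ifFalse: [ self error: 'Write attempted by a non-owner process' ]
```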
In the following section we will show how our framework models permission transfer at the level of channels.
4 Permission Transfer Channels
In this section, we first report on our identification of four relevant permission transfer semantics. Then, each following subsection describes a Pharo implementation of each of these semantics by extending our Channel framework presented in Section 3.
4.1 Identifying Permission Transfer Semantics
A Canal channel transfers references to objects along with permissions on those objects. We distinguish three kinds of permissions: write, read, and execute (sending a message). As we explain in what follows, not all combinations of permissions are meaningful, hence it is not necessary to implement all of them. For example, a channel where both the receiver and the sender processes lose all permissions would result in the object being unusable. To constrain the space of possibilities, we follow two rules:
**Write Implies Read Rule.** Write permissions imply read permissions, and read permissions imply execution permissions. The first part of this rule means that to write the fields of an object we require the permission to read the fields of that object. The second part means that to read a field of an object, we need to be able to send it a message. This last part arises from the fact that object fields (instance variables) are encapsulated in Pharo and can only be accessed by the object itself.

Table 1. Four permission transfer semantics, based on the evolution of the sender and receiver processes' permissions on the transferred object A. The subscripts W and R denote the write and read permissions of a process on A; a "−" in place of a permission means the process does not have it; ∅ means the process holds no reference on the object, because it never had one or lost it. A′ is a copy of object A.

| Permission Transfer | Sender pre-send | Sender post-send | Receiver post-receive |
|---|---|---|---|
| (1) Copy value (sender owns A) | A_{W,R} | A_{W,R} | A′_{W,R} |
| (2) Copy value (sender does not own A) | A_{−,R} | A_{−,R} | A′_{W,R} |
| (3) Full transfer (sender owns A) | A_{W,R} | ∅ | A_{W,R} |
| (4) Full transfer (sender does not own A) | A_{−,R} | A_{−,R} | ∅ (transfer aborted) |
| (5) Exclusive write (sender owns A) | A_{W,R} | A_{−,R} | A_{W,R} |
| (6) Exclusive write (sender does not own A) | A_{−,R} | A_{−,R} | ∅ (transfer aborted) |
| (7) Read-only (sender owns A) | A_{W,R} | A_{W,R} | A_{−,R} |
| (8) Read-only (sender does not own A) | A_{−,R} | A_{−,R} | A_{−,R} |
**Conservation of Permissions Rule.** The set of permissions owned by the sender before the transfer must be equal to the set of permissions owned together by the sender and the receiver after the transfer. A first corollary of this rule is that a process cannot grant a permission that it did not have beforehand; thus, permissions cannot be forged on an object. A second corollary is that overall permissions on an object cannot be lost, preventing strange situations where a reference to an object exists but the object can no longer be accessed.
Given these two rules, we identified four permission transfer semantics (See Table 1) in languages, based on the evolution of the permissions of the sender and receiver processes before and after the transfer. Since message sending to an object is never restricted in our semantics, we omit the execution permission from the rest of the paper. Note that writing to an object is done by sending a message; we do not prevent the message send itself but instead throw an error on the write.
Table 1 reads as follow. A group of two rows represent a permission transfer semantics. The first row of the group represents a permission transfer when the sender process has the ownership of the transferred object. The second row of the group represents a transfer when the sender process does not have ownership of the transferred object. The first column is the name of the semantics. The second column shows the permissions the sender has before sending an object. The third column shows the permissions the sender has after sending an object. The last column shows the permissions the receiver has after receiving an object.
Take as an example the full transfer semantics, represented by the third and fourth rows.
Reading the third row: in the second column, A_{W,R} means that the sender process will send an object A on which it has write (ownership) and read permissions. In the third column, ∅ means that the sender process, after sending A, lost all references to A and hence all permissions on A. In the last column, A_{W,R} means that the receiver process received a reference to object A and has all permissions on A.
Reading the fourth row: in the second column, A_{−,R} means that the sender process will send an object A on which it has only the read permission (no ownership). In the third column, A_{−,R} means that the sender process, after sending A, kept a read-only reference to A. In the last column, ∅ means that the receiver process never received a reference to object A (in this case because the transfer was aborted).
In the following subsections, we present these four permission transfer semantics in more detail: copy value, full ownership, exclusive write and read-only.
4.2 Copy Value Graph Transfer (CVGT)
A Copy Value Graph Transfer channel corresponds to the first and second rows of Table 1. When sending an object A through this channel, the sender process keeps a reference to A and sends a copy of the object graph of A, called A′, to the receiver. The receiver process has all permissions on A′.
Although at first sight this seems to break the conservation of permissions rule, it does not: the sender process keeps exactly the same permissions over object A, and the receiver cannot access A (the original object).
During a transfer, the sender process makes A′ a copy of A. Copying an object does not keep invariants such as read-only, so the sender process holds the unique reference to A′ with all permissions. After the transfer, the sender process loses this unique reference to A′, and hence all permissions on it, while the receiver process gains all permissions on A′.
``` Smalltalk
CopyValueGraphTransferChannel >> send: anObject
	| copiedObject |
	copiedObject := anObject deepCopy.
	copiedObject graphOwner: nil.
	super send: copiedObject
```
**Listing 3.** Redefinition of send: for Copy Value Graph Transfer Channel.
We redefine the method receive to set the receiver as the new owner of each object copies inside the object graph, thanks to the method graphOwner:. Thus, the receiver process gains write permission on all objects of the graph.
``` Smalltalk
CopyValueGraphTransferChannel >> receive
	| receivedObject |
	receivedObject := super receive.
	receivedObject beWritableObject. "disable the write barrier to update the owner"
	receivedObject graphOwner: Processor activeProcess.
	receivedObject beReadOnlyObject. "re-arm the write barrier"
	↑ receivedObject
```
**Listing 4.** Redefinition of receive for Copy Value Graph Transfer Channel.
A CopyValueGraphTransferChannel guarantees that two processes can never access the same object concurrently, because two separate copies of the graph exist at the same time. Moreover, a deep copy does not produce shared objects. However, this channel suffers from the duplication problem: the two copies can be modified independently.
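The duplication problem can be seen in a short sketch (Person as in Listing 9, with an owner slot as in Section 3.3):

``` Smalltalk
| channel original |
channel := CopyValueGraphTransferChannel new.
original := Person new name: 'Alice'.
[ | copy |
  copy := channel receive.
  "the receiver mutates its own copy only"
  copy name: 'Alicia' ] fork.
channel send: original.
"original name is still 'Alice': the two graphs have diverged"
```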
4.3 Full Ownership Graph Transfer (FOGT)
A Full Ownership Graph Transfer channel corresponds to the third and fourth rows of Table 1. The third row represents the case where the sender process has ownership of object A. After a transfer, the sender process loses all references to A, and thus all its permissions; this is represented by ∅. The receiver process gains all permissions.
This behaviour respects the conservation of permissions rule, since the sender's permissions become the receiver's permissions. If the sender process does not own object A, as in the fourth row, then the channel throws an error and the transfer does not happen: the receiver gets no reference to object A. This behaviour also complies with the conservation of permissions rule, since the permissions did not change. This semantics is also found in the Rust channels [22] or in Kilim [27].
This behaviour is achieved by recursively revoking all references in the object graph. Listing 5 shows the Pharo code of the redefined send: method for this channel. We implemented this channel using Pharo's atomic object reference swapping (i.e., become: is used in the graphBecome: method). Using pointer-swapping, all original references to the sent object are replaced by references to the argument object. After pointer-swapping, the channel is the only one holding a reference to the sent object.
``` Smalltalk
FullOwnershipObjectTransferChannel >> send: anObject
	| objectToSend |
	Processor activeProcess = anObject owner
		ifTrue: [ anObject graphOwner: nil ]
		ifFalse: [ self error: 'Cannot full transfer an object not owned' ].
	"Create placeholder object"
	objectToSend := Object new.
	"Swap references"
	anObject graphBecome: objectToSend.
	"At this point, objectToSend has the sole reference to the sent object"
	queue nextPut: objectToSend
```
**Listing 5.** Redefinition of send: for Full Ownership Object Transfer Channel.
Later on, when a process calls receive and consumes the reference from the channel, it will get the unique reference to that object. Moreover, the new owner of the object graph is assigned to the receiver process as shown by the redefinition of the receive method in Listing 6.
``` Smalltalk
FullOwnershipObjectTransferChannel >> receive
	| receivedObject |
	receivedObject := super receive.
	receivedObject owner ifNil: [ "gain ownership"
		receivedObject beWritableObject.
		receivedObject graphOwner: Processor activeProcess.
		receivedObject beReadOnlyObject ].
	↑ receivedObject
```
**Listing 6.** Redefinition of receive for Full Ownership Object Transfer Channel.
4.4 Exclusive Write Object Transfer (EWOT)
An Exclusive Write Object Transfer channel corresponds to the fifth and sixth rows of Table 1. In the fifth row, the sender process starts with all permissions on object A. During the transfer, the sender process loses the write permission but keeps at least one reference to object A. The receiver process gains all permissions over object A and ends up being the only one with the write permission.
This behaviour complies with the conservation of permissions rule, because the permissions of the sender before the transfer equal the permissions held together by the sender and the receiver after the transfer. If a process does not possess the write permission on object A, as in the sixth row, then the channel throws an error and the transfer does not happen, which again complies with the conservation of permissions rule. One way to achieve this behaviour is to instrument all writes to object fields and check whether the write is being done by the owner process. This semantics is also found in Haskell or Clojure channel implementations with persistent data [24].
Our current implementation makes use of pre-existing per-object low-overhead write barriers [2] in Pharo.
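The underlying mechanism can be sketched directly with Pharo's read-only objects: beReadOnlyObject arms the barrier, and a subsequent write signals a ModificationForbidden exception (Person as in Listing 9):

``` Smalltalk
| p |
p := Person new.
p beReadOnlyObject. "arm the per-object write barrier"
[ p name: 'Alice' ]
	on: ModificationForbidden
	do: [ :ex | Transcript show: 'write trapped'; cr ]
```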
Listing 7 shows the code of the send: method for the EWOT Channel. Before adding the transferred object into the channel queue, its owner is reset (set to nil) thus preventing any further write access by the sender.
``` Smalltalk
ExclusiveWriteObjectTransferChannel >> send: anObject
	Processor activeProcess = anObject owner
		ifTrue: [
			anObject owner: nil.
			queue nextPut: anObject ]
		ifFalse: [ self error: 'Trying to send a not owned object' ]
```
**Listing 7.** Redefinition of send: for Exclusive Write Object Transfer Channel.
In its receive method (See Listing 8), the channel sets the owner of the object to the current process before returning it.
``` Smalltalk
ExclusiveWriteObjectTransferChannel >> receive
	| receivedObject |
	receivedObject := super receive.
	receivedObject beWritableObject.
	receivedObject owner: Processor activeProcess.
	receivedObject beReadOnlyObject.
	↑ receivedObject
```
**Listing 8.** Redefinition of receive for Exclusive Write Object Transfer Channel.
It is important to note that, thanks to Pharo's concurrency model (See Section 2.1), writes are atomic; thus a read cannot occur while a process is writing to the shared object, e.g., while modifying its owner. Also note that the write permission is granted on a per-object basis and not on the whole object graph. This allows one to manually delimit the granting of the write permission within the object graph.
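For completeness, a graph-wide ownership update such as graphOwner: can be sketched as a reflective traversal; this is only an illustration built on the owner slot of Section 3.3 and the hypothetical OwnedObject class above, not necessarily the framework's actual implementation:

``` Smalltalk
OwnedObject >> graphOwner: aProcessOrNil
	| seen worklist obj |
	seen := IdentitySet new.
	worklist := OrderedCollection with: self.
	[ worklist isEmpty ] whileFalse: [
		obj := worklist removeFirst.
		((obj isKindOf: OwnedObject) and: [ (seen includes: obj) not ])
			ifTrue: [
				seen add: obj.
				obj owner: aProcessOrNil.
				"enqueue all objects reachable through instance variables"
				1 to: obj class instSize do: [ :i |
					worklist add: (obj instVarAt: i) ] ] ]
```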
4.5 Read-only Object Transfer (ROOT)
A Read-Only Object Transfer channel corresponds to the last two rows of Table 1. In both rows, the sender process keeps the same permissions it had over object A. The receiver process gains a reference to object A with only the read permission.
This behaviour complies with the conservation of permissions rule, because the sender's permissions do not change and the receiver process gains only the read permission.
We implemented it with the same write barrier mechanism used in the exclusive write object transfer (EWOT) except that the object ownership remains unmodified. Since the object’s owner does not change the receiver process is only able to read the object. Note that the sender may or may not have write permissions on the object.
In Listing 9, a person object is created and its owner is manually set to nil. This removes the write permission of the sender process on this object. Nevertheless, the sender is still able to send the object through a read-only object transfer channel. In this example, the receiver process does not gain the write permission but only the read permission on the object.
``` Smalltalk
channel := ReadOnlyObjectTransferChannel new.
objectToTransfer := Person new.
"Change the ownership"
objectToTransfer owner: nil.
objectToTransfer name: 'Alice'. "Raise an exception"
"receiver process"
[ objectReceived := channel receive.
objectReceived name: 'Bob'. "Raise an exception"
] fork.
channel send: objectToTransfer.
```
**Listing 9.** Usage Example of a Read-Only Object Transfer Channel.
4.6 Channel limits: transactionality and inconsistent reads
Table 2 summarizes the different semantics and characteristics of all channel semantics.
The EWOT channel, by granting the read permission to other processes, induces inconsistent reads. Back to the car example, let us say Bob owns the car. Alice reads the title of the disc and processes it. Now Bob changes the disc, and Alice reads the number of tracks of the disc: Alice will read the number of tracks of the new disc.
Inconsistent reads also occur with the CVGT channel. Alice gives the car to Bob, expecting that Bob performs an action on the car. Since Bob has a copy, Alice cannot see the effect of the action. A new synchronisation is necessary to avoid inconsistent reads: a callback re-transferring the copied object back, or a merging approach, is then necessary for the sender process to access the modified object once the receiver is done.
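A callback-style resynchronisation can be sketched as follows (the work and results channels and the process message are hypothetical names):

``` Smalltalk
| work results job |
work := CopyValueGraphTransferChannel new.
results := CopyValueGraphTransferChannel new.
job := Person new name: 'Alice'.
[ | j |
  j := work receive.
  j process. "the receiver works on its copy"
  results send: j ] fork.
work send: job.
"the sender re-acquires the modified state as a fresh copy"
job := results receive
```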
We believe this issue is specific to transactional systems, and is orthogonal to the permission transfer that channels provide.
5 Comparing the different Channels
Concurrency mechanisms target correctness first, to avoid inconsistencies, and then performance. Most of the time, implementations are a trade-off between correctness and performance [12]. In this section, we compare the performance of our different channels. In our case, solutions using copying or pointer-swapping suffer an overhead during the object transfer via a channel, whereas solutions based on the write barrier do not. In contrast, solutions based on the write barrier suffer an overhead on object access, whereas the others do not.
In this section, we report on our results benchmarking different scenarios. The Pharo bench message measures the number of times a block is executed per second: the higher the result, the faster the implementation. Each channel of each scenario is benchmarked 100 times. A box summarizes the 100 benchmarks of a channel: the first and third quartiles form the box, and the minimum and maximum values form the whiskers. We ran all measurements on the same computer, with a 2.4 GHz Intel Core i5 quad-core processor and 16 GiB of 2133 MHz LPDDR3 RAM, with all other applications closed.
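A sketch of the harness (the channel class is one of Section 4; Pharo's bench evaluates the block repeatedly for a fixed time and answers a string reporting executions per second):

``` Smalltalk
| channel person |
channel := ExclusiveWriteObjectTransferChannel new.
person := OwnedPerson new name: 'Alice'.
Transcript show: [ channel send: person.
	person := channel receive ] bench; cr
```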
5.1 Scenario 1: Single Object Transfer Speed
In this scenario, we measure the cost of transferring only one object with our different channels. To achieve this, we reuse a modified version of our Ping Pong example (See Listing 2) with the different channels. The transferred object has 3 instance variables: its owner process, a name, and a potential collection of friends, not initialized in this scenario.
``` Smalltalk
channel := OwnershipGraphTransferPartialReadBarrierChannel new.
objectToTransfer := OwnedPerson new name: 'Alice'.
[ objectReceived := channel receive.
  channel send: objectReceived ] fork.
channel send: objectToTransfer.
channel receive
```
Listing 10. Code example for benchmarks.
Figure 4 shows that the Copy Value Graph Transfer channel (red) is on par with the Exclusive Write Object Transfer channel (green): copying a small object is almost as fast as sending a reference through a channel. Both are around 10% slower than the Read-Only Object Transfer channel (purple). The ROOT channel does not transfer ownership, so it neither updates the ownership status nor needs a graph traversal; this explains its better transfer speed. The Full Ownership Graph Transfer channel is 8 times slower than the other ones: pointer-swapping is slower than a field update for ownership transfer, and also slower than copying small objects. The conclusion is that, except for the Full Ownership Graph Transfer channel implementation using pointer-swapping, all channels perform in the same order of magnitude.
5.2 Scenario 2: Object Graph Transfer Speed
In this scenario, we measure the cost of transferring an object for different sizes of object graphs. The code is similar to Scenario 5.1, but the transferred object now references a list of friends. In this scenario, we use three different sizes for the friends list. The EWOT channel and the ROOT channel operate the transfer at an object granularity and not at a graph granularity. To be fair in the comparison, we adapted the EWOT channel to a graph granularity, i.e., we recursively apply the write barrier to all objects in the transferred graph.
Figure 5. Data transfer speed for object graphs of different sizes, in number of executions per second; the higher the value, the faster the implementation. CVGT = Copy Value Graph Transfer (red), FOGT = Full Ownership Graph Transfer (blue), EWOT = Exclusive Write Object Transfer (green).
We omitted the ROOT channel from this comparison because transferring an object reference already gives access to the graph. There is no modification to this channel, hence its result is the one from Scenario 5.1.
Figure 5 shows that the Copy Value Graph Transfer channel is linear in the size of the object graph: it is already 2 times slower with two friends in the object graph than with none, and it becomes slower than the Full Ownership Graph Transfer after five friends. The Full Ownership Graph Transfer channel, based on the become message, and the adapted Exclusive Write Object Transfer channel do not vary much over this sample. Since they all perform the same graph traversal, we conclude that the creation of new objects is expensive. For the FOGT channel, it is quite surprising that repeating a seemingly costly operation does not degrade performance. An alternative that we did not explore is pointer-swapping all the objects in the object graph at once. Indeed, the become operation is based on a primitive that swaps pointers between the elements of one array and the elements of another; in become's case, both arrays contain a single element, the two objects to swap. The alternative is then to collect the whole object graph into an array and swap it with an array of filler objects by calling the primitive directly. An inspection of the implementation of this primitive would be necessary to fully understand our result.
5.3 Scenario 3: Single Object Access Speed
In this scenario, we measure the cost of accessing an object with the write barrier compared to accessing it without. Channels using copy or become are not penalized on data access, so measuring access without the barrier covers them as well.
Figure 6 shows that accessing an object field with the write barrier is on average 6% slower than a regular access. For an object graph of size 1768, about 7000 accesses to the object are needed before the cumulative barrier overhead exceeds the cost of copying the object; for an object graph of size 2312, more than 18000 accesses are needed. In conclusion, channels based on copy lose performance on data transfer, in proportion to the size of the object graph to transfer; those channels are more suitable for programs that heavily access objects and perform few transfers. Channels based on a write barrier lose performance on data access; those channels are more suitable for programs that exchange many objects and perform few accesses. Finally, channels based on a partial-read barrier are more appropriate for programs that both transfer and access many objects.
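The break-even point follows from simple arithmetic: the barrier pays off as long as the number of accesses times the per-access overhead stays below the one-off copying cost. A sketch in Python, where the unit costs are hypothetical and only the 6% overhead comes from our measurements:

```python
def break_even_accesses(copy_cost, access_cost=1.0, barrier_overhead=0.06):
    """Number of accesses after which the cumulative write-barrier
    overhead exceeds the one-off cost of copying the object graph."""
    return copy_cost / (access_cost * barrier_overhead)

# If copying a 1768-object graph costs as much as ~420 plain accesses
# (a hypothetical figure), the barrier stays cheaper up to about
# 420 / 0.06 = 7000 accesses, matching the order reported above.
print(break_even_accesses(copy_cost=420))
```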
5.4 Discussion
**Mechanism comparison across languages.** Other languages with potentially more efficient write barriers do not change the conclusion of our results: those write barriers still introduce an overhead on object access, and only shift the point at which accessing an object becomes more expensive than copying it, or vice versa. In the same way, other languages with potentially better copying algorithms still introduce an overhead on object transfer.

The partial-read barrier, in the form of the become message, does not translate as directly to other languages. The logic behind become is hidden in the virtual machine supporting the Pharo language, and inspecting it could give us a better understanding.
**Memory usage.** We did not measure the memory consumption induced by our different channels. Nevertheless, our write barrier is implemented by marking and checking an unused bit in the header of objects, so the memory size of objects is not affected at all. In contrast, the partial-read barrier leaves a placeholder object and copying duplicates the object, both of which increase memory usage. Some techniques, such as persistent data structures, diminish the number of copies but do not eliminate them completely. In memory-constrained environments that still require concurrency, a programmer should opt for channels using the write barrier.
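To make the mechanism concrete, here is a minimal Python sketch of a write barrier. Our Pharo implementation flips an unused header bit checked by the virtual machine; Python has no such bit, so the sketch uses an explicit per-instance flag instead:

```python
class OwnedObject:
    """Sketch: every field write first checks a permission flag,
    mimicking the header-bit check performed by the VM in Pharo."""

    def __init__(self, **fields):
        object.__setattr__(self, '_writable', True)
        for name, value in fields.items():
            object.__setattr__(self, name, value)

    def __setattr__(self, name, value):
        if not self._writable:  # the barrier
            raise PermissionError('write permission was transferred away')
        object.__setattr__(self, name, value)

    def revoke_write(self):
        """Called by a channel on send; reads remain unrestricted."""
        object.__setattr__(self, '_writable', False)

alice = OwnedObject(name='Alice')
alice.revoke_write()
print(alice.name)      # reads still work
# alice.name = 'Bob'   # would raise PermissionError
```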
### 6 Related Work
#### 6.1 Permission and ownership
A capability, as presented by Mark Miller [21], is an association between an object reference and the access permissions on this object; it can be a proxy or a handle on this association, directly exchanged by processes. Capabilities evolve only by further restricting the permissions, and not necessarily during a capability transfer. In our model, we exchange direct references, and permissions evolve during the transfer.
Object ownership was originally introduced to control the effects of object aliasing in the context of Flexible Alias Protection. It was first embodied as a type system with ownership types [4]. Gordon et al. [11] provide ownership for a dynamically-typed language for encapsulation purposes. Ownership is per object and forms ownership trees; it encapsulates the object graph but does not control which process is able to use it. The proposed ownership model is also restrictive: the owner has all permissions while the others have none. Other models exist with more relaxed permissions, such as the one proposed by Wernli et al. [31]. In our model, permissions are also more fine-grained, distinguishing write and read permission.
#### 6.2 Message passing
Message passing is present in languages focusing on distributed computing such as Erlang, Go, and Scala. Singularity OS [9] also relies on message passing between its isolated processes. Pipelines, the Communicating Sequential Processes (CSP) model [14], and the actor model [13] are message-passing models in which processes synchronize by passing messages. In Go [1], when one process finishes processing a datum, it signals it; processes wait their turn to access the datum and consume it. This is achieved with FIFO queues, called channels in CSP [15], pipes in pipelines, or mailboxes in actor systems. Programmatically, the advantage is that communication is easy for developers to reason about as a means of synchronization. Nevertheless, it usually requires copying the whole graph of the data to be referenced. This is not trivial to handle [19], since the graph may be large and, in the worst case, may be the whole application data. The notion of permission is not explicit with those queues, but their semantics is comparable to our Copy Value Graph Transfer channel.
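A minimal Python sketch of this copying semantics (the class and method names are ours) combines a FIFO queue with a deep copy, so sender and receiver can never race on the same object graph:

```python
import queue
from copy import deepcopy

class CopyValueChannel:
    """Sketch of Copy Value Graph Transfer semantics: the whole object
    graph is duplicated on send, so the receiver gets a disjoint graph."""

    def __init__(self):
        self._queue = queue.Queue()

    def send(self, obj):
        self._queue.put(deepcopy(obj))

    def receive(self):
        return self._queue.get()
```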
#### 6.3 Shared memory
In a shared memory model, processes share some or all of their memory. Writing concurrent programs with shared memory is difficult and error-prone [17] due to data races. Nowadays, each programming language provides its own ownership transfer model to support concurrency. We categorize them into two families: run-time and compile-time checking approaches.
**Run-time checking approaches.** Older languages mostly rely on run-time checking. Many models exist, such as the thread/mutex model [7] and the Software Transactional Memory (STM) model [25]. The best known is the mutex model, in which data access is controlled by a mutex: a process has the right to access the data only if it was able to lock the mutex, hence gaining ownership of the data. While it solves data races, other problems appear, such as deadlocks or livelocks [33]. Acquiring the mutex is an implicit ownership transfer and write-permission gain. This semantics is similar to that of our Exclusive Write Object Transfer channel, except that the transfer is explicit in our channel.
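A short Python illustration of this implicit transfer, using the standard threading module:

```python
import threading

shared = {'value': 0}
lock = threading.Lock()

def worker():
    # Acquiring the mutex is the implicit ownership and write-permission
    # transfer: only the holder may touch `shared`. Nothing prevents a
    # buggy thread from writing without the lock, which is why the
    # transfer is implicit rather than enforced, unlike the EWOT channel.
    with lock:
        shared['value'] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared['value'])  # 4
```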
| Permissions Transfer | CVGT | FOGT | EWOT | ROOT |
| --- | --- | --- | --- | --- |
| Rust | ✓ | ✓ | X | X |
| Kilim | ✓ | ✓ | X | X |
| Erlang | X | X | X | ✓ |
| Go | ✓ | ✓ | ✓ | ✓ |
| C++ | ✓ | X | X | X |
| Java | ✓ | ✓ | ✓ | ✓ |
| Javascript | ✓ | X | X | X |
| Kotlin | ✓ | ✓ | X | X |
| Lua | ✓ | X | ✓ | X |
| Clojure | X | X | ✓ | X |
| Haskell | ✓ | ✓ | ✓ | X |
| Pony | ✓ | ✓ | ✓ | ✓ |

✓ = the language offers a channel with this semantics, X = the language does not offer a channel with this semantics.

Table 3. Channel permission transfer semantics offered in other languages.
Some CSP models coupled with shared memory exist, but they only allow exchanging immutable or frozen data [18]. At run time, those properties are checked when accessing the data; if a property is broken, a run-time exception is raised [30]. Note that only the transferred datum is immutable; the data it references are still freely mutable. For example, in the case of a frozen array, only the array is immutable but all the elements inside are mutable: it is the developer's responsibility to freeze all the elements inside the array when needed. This semantics is similar to our Read-Only Object Transfer channel, except that in those models the write permission is completely lost, whereas in our model a process retains it.
Checking during execution induces an overhead, especially on frequently accessed data. The STM model combined with persistent data aims to reduce the number of checks. A persistent data structure [24] preserves one or more previous versions of itself when it is modified: one process writes to the to-be-modified version while other processes read a preserved version. The check is delayed until the modified version needs to become the preserved one; instead of many small checks during execution, there is only one big check, so processes that only read the data are not penalized. With all those solutions, an overhead still exists at least when writing to the data, but the data transfer cost is close to non-existent. The semantics is that of our Exclusive Write Object Transfer channel, in that only one process has the write permission, but with the side-effect of the Copy Value Graph Transfer channel, where inconsistent reads happen.
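The following Python sketch (ours, not taken from an STM library) shows the one-big-check idea: the writer edits a private draft while readers keep seeing the last published, immutable version:

```python
class VersionedBox:
    """Sketch of persistent data: one writer, many non-blocking readers."""

    def __init__(self, value):
        self._published = tuple(value)  # frozen version seen by readers
        self._draft = list(value)       # private version edited by the writer

    def read(self):
        return self._published

    def write(self, index, value):
        self._draft[index] = value      # no per-write check needed

    def publish(self):
        """The single synchronization point: the 'one big check'."""
        self._published = tuple(self._draft)

box = VersionedBox([1, 2, 3])
box.write(0, 99)
print(box.read())   # (1, 2, 3): readers are unaffected until publish
box.publish()
print(box.read())   # (99, 2, 3)
```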
**Compile-time checking approaches.** The idea of using type annotations for the purpose of sharing data [3] has only been demonstrated in some recent languages such as Rust, Pony, and Project Midori. To synchronize between processes, Rust offers CSP-style channels but with shared memory [22]. It guarantees the uniqueness of a reference to a datum through a static analysis at compile time, the borrow checker; the owner of this unique reference is simply the owner of the data. Rust channels are Full Ownership Graph Transfer channels. Pony offers an actor model, one of the few actor models with shared memory. Pony [5] guarantees the uniqueness of the writer through a static analysis at compile time based on type annotations. Contrary to Rust, it is possible to have multiple references to a datum, but with different capabilities: if there is already a reference with the write capability, no further reference will have this capability for the lifetime of the first one. Pony type annotations allow one to express the same transfer semantics as our channels. While compile-time checking does not suffer from overhead or larger memory usage at run time, it imposes a discipline on the developer to write code in accordance with the permission rules [6]. Moreover, although this technique works for statically typed languages, it is not an easy feat for dynamically typed languages, where the control flow graph depends on the type of the receiver. These approaches are a starting point to enhance the performance of our channels.
Table 3 summarizes the semantics tied to object transfer through channels in other languages. This list of languages is not exhaustive; some languages are not represented because we could not determine exactly in which category they belong, such as C# and Ruby. Rust and Kilim both offer FOGT semantics thanks to compile-time checks. Even though Erlang effectively copies messages, it only allows the sharing of immutable data, thus having the same semantics as ROOT; other languages that we did not list (notably functional ones) take this approach. Most of the languages with CVGT semantics follow the Go trend: they deep-copy the data to send. Note that the notion of pointer exists in some of those languages, and sending a pointer is not restricted, causing data races. Clojure and Haskell propose channels coupled with STM or persistent data that allow one writer and many readers; this is the EWOT semantics. Finally, Pony, with its type system, allows fine-grained permission transfer and offers each of the semantics.
### 7 Conclusion
We showed that sharing an object between processes means sharing not only that object but also the whole graph of objects reachable from it. In a concurrent environment with shared memory, this object graph is subject to data races. To avoid this issue, we need to control process permissions on shared object graphs while keeping good performance. Channels set a proper framework to experiment with permission transfer because of the clear delimitation between the processes that send objects and the ones that acquire them. We proposed an extensible channel-based permission transfer framework for experimentation and designed four kinds of permission transfer.
We compared the performance of our permission transfer channels. On the one hand, permission transfer using pointer swapping is consistently 7 to 8 times slower than the baseline, and using a deep copy is slower in proportion to the size of the object graph to transfer. On the other hand, using a write barrier introduces an overhead of up to 6% on all object field writes, but it does not penalize object field reads.
As future work, we want to allow the combination of channels. We also aim to improve performance: some optimizations already exist based on static analysis, such as escape analysis. For dynamically-typed languages, such an analysis is only possible after a number of executions of the program. With this analysis, clear delimitations of which parts of the object graph are really used appear; then, only those objects are copied or have the write barrier activated. Furthermore, we would like to explore type annotations similar to Rust's or Pony's and their implications for dynamically-typed languages.
Contents

- 1 Introduction
  - 1.1 Features
  - 1.2 Disclaimer
  - 1.3 Installation
  - 1.4 Example usage and output
  - 1.5 Configuration
  - 1.6 Error codes
  - 1.7 Related tools
- 2 Advanced usage
  - 2.1 Automated tests
  - 2.2 Configuring tests
  - 2.3 Skip file header
- 3 pycodestyle API
  - 3.1 Checker Classes
  - 3.2 Report Classes
  - 3.3 Utilities
- 4 Developer's notes
  - 4.1 Source code
  - 4.2 Direction
  - 4.3 Contribute
  - 4.4 Changes
- 5 Indices and tables
- 6 Credits
- 7 License
- Python Module Index
- Index
Python style guide checker
pycodestyle (formerly pep8) is a tool to check your Python code against some of the style conventions in PEP 8.
1 Introduction
pycodestyle is a tool to check your Python code against some of the style conventions in PEP 8.
1.1 Features
- Plugin architecture: Adding new checks is easy.
- Parseable output: Jump to error location in your editor.
- Small: Just one Python file, requires only stdlib. You can use just the `pycodestyle.py` file for this purpose.
- Comes with a comprehensive test suite.
1.2 Disclaimer
This utility does not enforce every single rule of PEP 8. It helps to verify that some coding conventions are applied but it does not intend to be exhaustive. Some rules cannot be expressed with a simple algorithm, and other rules are only guidelines which you could circumvent when you need to.
Always remember this statement from PEP 8:
A style guide is about consistency. Consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is most important.
Among other things, these features are currently not in the scope of the pycodestyle library:
- **naming conventions**: this kind of feature is supported through plugins. Install flake8 and the pep8-naming extension to use this feature.
- **docstring conventions**: they are not in the scope of this library; see the pydocstyle project.
- **automatic fixing**: see the section PEP8 Fixers in the related tools page.
### 1.3 Installation
You can install, upgrade, and uninstall pycodestyle with these commands:
```bash
$ pip install pycodestyle
$ pip install --upgrade pycodestyle
$ pip uninstall pycodestyle
```
### 1.4 Example usage and output
```bash
$ pycodestyle --first optparse.py
optparse.py:69:11: E401 multiple imports on one line
optparse.py:77:1: E302 expected 2 blank lines, found 1
optparse.py:88:5: E301 expected 1 blank line, found 0
optparse.py:222:34: W602 deprecated form of raising exception
optparse.py:347:31: E211 whitespace before '{'
optparse.py:357:17: E201 whitespace after '{'
optparse.py:472:29: E221 multiple spaces before operator
optparse.py:544:21: W601 .has_key() is deprecated, use 'in'
```
You can also make pycodestyle.py show the source code for each error, and even the relevant text from PEP 8:
```bash
$ pycodestyle --show-source --show-pep8 testsuite/E40.py
testsuite/E40.py:2:10: E401 multiple imports on one line
    import os, sys
             ^
    Imports should usually be on separate lines.

    Okay: import os\nimport sys

    E401: import sys, os
```
Or you can display how often each error was found:
```bash
$ pycodestyle --statistics -qq Python-2.5/Lib
232 E201 whitespace after '{'
599 E202 whitespace before ')'
631 E203 whitespace before ';'
842 E211 whitespace before '('
2531 E221 multiple spaces before operator
4473 E301 expected 1 blank line, found 0
```
You can also make pycodestyle.py show the error text in different formats by using `--format` with the options default, pylint, or a custom format:
```
$ pycodestyle testsuite/E40.py --format=default
testsuite/E40.py:2:10: E401 multiple imports on one line
$ pycodestyle testsuite/E40.py --format=pylint
testsuite/E40.py:2: [E401] multiple imports on one line
$ pycodestyle testsuite/E40.py --format='%(path)s|%(row)d|%(col)d| %(code)s %(text)s'
testsuite/E40.py|2|10| E401 multiple imports on one line
```
Variables in the custom format option
| Variable | Significance |
| --- | --- |
| `path` | File name |
| `row` | Row number |
| `col` | Column number |
| `code` | Error code |
| `text` | Error text |
Quick help is available on the command line:
```bash
$ pycodestyle -h
Usage: pycodestyle [options] input ...

Options:
  --version            show program's version number and exit
  -h, --help           show this help message and exit
  -v, --verbose        print status messages, or debug with -vv
  -q, --quiet          report only file names, or nothing with -qq
  --first              show first occurrence of each error
  --exclude=patterns   exclude files or directories which match these comma
                       separated patterns (default: .svn,CVS,.bzr,.hg,.git)
  --filename=patterns  when parsing directories, only check filenames matching
                       these comma separated patterns (default: *.py)
  --select=errors      select errors and warnings (e.g. E,W6)
  --ignore=errors      skip errors and warnings (e.g. E4,W)
  --show-source        show source code for each error
  --show-pep8          show text of PEP 8 for each error (implies --first)
  --statistics         count errors and warnings
  --count              print total number of errors and warnings to standard
                       error and set exit code to 1 if total is not null
  --max-line-length=n  set maximum allowed line length (default: 79)
  --max-doc-length=n   set maximum allowed doc line length and perform these
                       checks (unchecked if not set)
  --hang-closing       hang closing bracket instead of matching indentation
                       of opening bracket's line
  --format=format      set the error format [default|pylint|<custom>]
  --diff               report only lines changed according to the unified
                       diff received on STDIN

  Testing Options:
    --benchmark        measure processing speed

  Configuration:
    The project options are read from the [pycodestyle] section of the
    tox.ini file or the setup.cfg file located in any parent folder of the
    path(s) being processed. Allowed options are: exclude, filename,
    select, ignore, max-line-length, max-doc-length, hang-closing, count,
    format, quiet, show-pep8, show-source, statistics, verbose.

    --config=path      user config file location
                       (default: ~/.config/pycodestyle)
```
1.5 Configuration
The behaviour may be configured at two levels, the user and project levels.
At the user level, settings are read from the following locations:

- On Windows: `~\.pycodestyle`
- Otherwise, if the `XDG_CONFIG_HOME` environment variable is defined: `XDG_CONFIG_HOME/pycodestyle`
- Else: `~/.config/pycodestyle`

Example:

```ini
[pycodestyle]
count = False
ignore = E226,E302,E41
max-line-length = 160
statistics = True
```

At the project level, a `setup.cfg` file or a `tox.ini` file is read if present. If none of these files have a `[pycodestyle]` section, no project-specific configuration is loaded.
1.6 Error codes
This is the current list of error and warning codes:
| code | sample message |
| --- | --- |
| **E1** | *Indentation* |
| E101 | indentation contains mixed spaces and tabs |
| E111 | indentation is not a multiple of four |
| E112 | expected an indented block |
| E113 | unexpected indentation |
| E114 | indentation is not a multiple of four (comment) |
| E115 | expected an indented block (comment) |
| E116 | unexpected indentation (comment) |
| E117 | over-indented |
| E121 (*) | continuation line under-indented for hanging indent |
| E122 | continuation line missing indentation or outdented |
| E123 (*) | closing bracket does not match indentation of opening bracket's line |
| E124 | closing bracket does not match visual indentation |
| E125 | continuation line with same indent as next logical line |
| E126 (*) | continuation line over-indented for hanging indent |
| E127 | continuation line over-indented for visual indent |
| E128 | continuation line under-indented for visual indent |
| E129 | visually indented line with same indent as next logical line |
| E131 | continuation line unaligned for hanging indent |
| E133 (*) | closing bracket is missing indentation |
| **E2** | *Whitespace* |
| E201 | whitespace after '(' |
| E202 | whitespace before ')' |
| E203 | whitespace before ':' |
| E211 | whitespace before '(' |
| E221 | multiple spaces before operator |
| E222 | multiple spaces after operator |
| E223 | tab before operator |
| E224 | tab after operator |
| E225 | missing whitespace around operator |
| E226 (*) | missing whitespace around arithmetic operator |
| E227 | missing whitespace around bitwise or shift operator |
| E228 | missing whitespace around modulo operator |
| E231 | missing whitespace after ',', ';', or ':' |
| E241 (*) | multiple spaces after ',' |
| E242 (*) | tab after ',' |
| E251 | unexpected spaces around keyword / parameter equals |
| E261 | at least two spaces before inline comment |
| E262 | inline comment should start with '# ' |
| E265 | block comment should start with '# ' |
| E266 | too many leading '#' for block comment |
| E271 | multiple spaces after keyword |
| E272 | multiple spaces before keyword |
| E273 | tab after keyword |
| E274 | tab before keyword |
| E275 | missing whitespace after keyword |
| **E3** | *Blank line* |
| E301 | expected 1 blank line, found 0 |
| E302 | expected 2 blank lines, found 0 |
| E303 | too many blank lines (3) |
| E304 | blank lines found after function decorator |
| E305 | expected 2 blank lines after end of function or class |
| E306 | expected 1 blank line before a nested definition |
| **E4** | *Import* |
| E401 | multiple imports on one line |
| E402 | module level import not at top of file |
| **E5** | *Line length* |
| E501 (^) | line too long (82 > 79 characters) |
| E502 | the backslash is redundant between brackets |
| **E7** | *Statement* |
| E701 | multiple statements on one line (colon) |
| E702 | multiple statements on one line (semicolon) |
| E703 | statement ends with a semicolon |
| E704 (*) | multiple statements on one line (def) |
| E711 (^) | comparison to None should be 'if cond is None:' |
| E712 (^) | comparison to True should be 'if cond is True:' or 'if cond:' |
| E713 | test for membership should be 'not in' |
| E714 | test for object identity should be 'is not' |
| E721 (^) | do not compare types, use 'isinstance()' |
| E722 | do not use bare except, specify exception instead |
| E731 | do not assign a lambda expression, use a def |
| E741 | do not use variables named 'l', 'O', or 'I' |
| E742 | do not define classes named 'l', 'O', or 'I' |
| E743 | do not define functions named 'l', 'O', or 'I' |
| **E9** | *Runtime* |
| E901 | SyntaxError or IndentationError |
| E902 | IOError |
| **W1** | *Indentation warning* |
| W191 | indentation contains tabs |
| **W2** | *Whitespace warning* |
| W291 | trailing whitespace |
| W292 | no newline at end of file |
| W293 | blank line contains whitespace |
| **W3** | *Blank line warning* |
| W391 | blank line at end of file |
| **W5** | *Line break warning* |
| W503 (*) | line break before binary operator |
| W504 (*) | line break after binary operator |
| W505 (*^) | doc line too long (82 > 79 characters) |
| **W6** | *Deprecation warning* |
| W601 | `.has_key()` is deprecated, use 'in' |
| W602 | deprecated form of raising exception |
| W603 | '<>' is deprecated, use '!=' |
| W604 | backticks are deprecated, use 'repr()' |
| W605 | invalid escape sequence '\x' |
| W606 | 'async' and 'await' are reserved keywords starting with Python 3.7 |
(*) In the default configuration, the checks E121, E123, E126, E133, E226, E241, E242, E704, W503, W504 and W505 are ignored, because they are not unanimously accepted rules and PEP 8 does not enforce them. Please note that if the option `--ignore=errors` is used, the default configuration is overridden and only the check(s) you list are ignored. The check W503 is mutually exclusive with W504, and E133 is mutually exclusive with E123. Use the switch `--hang-closing` to report E133 instead of E123, and `--max-doc-length=n` to report W505.
(^) These checks can be disabled at the line level using the # noqa special comment. This possibility should be reserved for special cases.
_Special cases aren’t special enough to break the rules._
Note: most errors can be listed with a one-liner like this:
```bash
$ python pycodestyle.py --first --select E,W testsuite/ --format '%(code)s: %(text)s'
```
1.7 Related tools
The flake8 checker is a wrapper around pycodestyle and similar tools. It supports plugins.
Other tools which use pycodestyle are referenced in the Wiki: list of related tools.
2 Advanced usage

2.1 Automated tests
You can also execute pycodestyle tests from Python code. For example, this can be highly useful for automated testing of coding style conformance in your project:
```python
import unittest

import pycodestyle


class TestCodeFormat(unittest.TestCase):

    def test_conformance(self):
        """Test that we conform to PEP-8."""
        style = pycodestyle.StyleGuide(quiet=True)
        result = style.check_files(['file1.py', 'file2.py'])
        self.assertEqual(result.total_errors, 0,
                         "Found code style errors (and warnings).")
```
There’s also a shortcut for checking a single file:
```python
import pycodestyle
fchecker = pycodestyle.Checker('testsuite/E27.py', show_source=True)
file_errors = fchecker.check_all()
print("Found %s errors (and warnings)" % file_errors)
```
2.2 Configuring tests
You can configure automated pycodestyle tests in a variety of ways.
For example, you can pass in a path to a configuration file that pycodestyle should use:
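A minimal sketch (the path here is only a placeholder; `StyleGuide` accepts a `config_file` argument, see its signature in the Checker Classes section):

```python
import pycodestyle

style = pycodestyle.StyleGuide(config_file='/path/to/tox.ini')
```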
You can also set specific options explicitly:
```python
style = pycodestyle.StyleGuide(ignore=['E501'])
```
## 2.3 Skip file header
Another example is related to the feature request [#143]: skip a number of lines at the beginning and the end of a file. This use case is easy to implement through a custom wrapper for the PEP 8 library:
```python
#!python
import pycodestyle

LINES_SLICE = slice(14, -20)


class StyleGuide(pycodestyle.StyleGuide):
    """This subclass of pycodestyle.StyleGuide will skip the first
    and last lines of each file."""

    def input_file(self, filename, lines=None, expected=None, line_offset=0):
        if lines is None:
            assert line_offset == 0
            line_offset = LINES_SLICE.start or 0
            lines = pycodestyle.readlines(filename)[LINES_SLICE]
        return super(StyleGuide, self).input_file(
            filename, lines=lines, expected=expected, line_offset=line_offset)


if __name__ == '__main__':
    style = StyleGuide(parse_argv=True, config_file=True)
    report = style.check_files()
    if report.total_errors:
        raise SystemExit(1)
```
This module declares a window of lines which skips the first 14 lines and the last 20 lines of each file. If there are no lines to skip at the end, it could be changed with `LINES_SLICE = slice(14, None)`, for example.
You can save it in a file and use it with the same options as the original `pycodestyle`.
3 pycodestyle API

The library provides classes which are usable by third-party tools.
- **Checker Classes**
- **Report Classes**
- **Utilities**
### 3.1 Checker Classes
The *StyleGuide* class is used to configure a style guide checker instance to check multiple files. The *Checker* class can be used to check a single file.
class pycodestyle.StyleGuide(parse_argv=False, config_file=None, parser=None, paths=None, report=None, **kwargs)
Initialize a PEP-8 instance with few options.

init_report(reporter=None)
Initialize the report instance.

check_files(paths=None)
Run all checks on the paths.

input_file(filename, lines=None, expected=None, line_offset=0)
Run all checks on a Python source file.

input_dir(dirname)
Check all files in this directory and all subdirectories.

excluded(filename, parent=None)
Check if the file should be excluded: check if 'options.exclude' contains a pattern matching filename.
ignore_code(code)
Check if the error code should be ignored. If 'options.select' contains a prefix of the error code, return False; else, if 'options.ignore' contains a prefix of the error code, return True.

get_checks(argument_name)
Get all the checks for this category: find all globally visible functions where the first argument name starts with argument_name and which contain selected tests.
class pycodestyle.Checker (filename=None, lines=None, report=None, **kwargs)
Load a Python source file, tokenize it, check coding style.
readline ()
Get the next line from the input buffer.
run_check (check, argument_names)
Run a check plugin.
check_physical (line)
Run all physical checks on a raw input line.
build_tokens_line ()
Build a logical line from tokens.
check_logical ()
Build a line from tokens and run all logical checks on it.
check_ast ()
Build the file’s AST and run all AST checks.
generate_tokens ()
Tokenize file, run physical line checks and yield tokens.
check_all (expected=None, line_offset=0)
Run all checks on the input file.
3.2 Report Classes
class pycodestyle.BaseReport (options)
Collect the results of the checks.
start ()
Start the timer.
stop ()
Stop the timer.
init_file (filename, lines, expected, line_offset)
Signal a new file.
increment_logical_line ()
Signal a new logical line.
error (line_number, offset, text, check)
Report an error, according to options.
get_file_results ()
Return the count of errors and warnings for this file.
get_count (prefix="")
Return the total count of errors and warnings.
get_statistics (prefix="")
Get statistics for message codes that start with the prefix.
prefix="" matches all errors and warnings
prefix='E' matches all errors
prefix='W' matches all warnings
prefix='E4' matches all errors that have to do with imports
print_statistics (prefix="")
Print overall statistics (number of errors and warnings).
print_benchmark ()
Print benchmark numbers.
class pycodestyle.FileReport (options)
Collect the results of the checks and print the filenames.
class pycodestyle.StandardReport (options)
Collect and print the results of the checks.
class pycodestyle.DiffReport (options)
Collect and print the results for the changed lines only.
3.3 Utilities
pycodestyle.expand_indent (line)
Return the amount of indentation.
Tabs are expanded to the next multiple of 8.
```python
>>> expand_indent('    ')
4
>>> expand_indent('\t')
8
>>> expand_indent('   \t')
8
>>> expand_indent('        \t')
16
```
pycodestyle.mute_string (text)
Replace contents with 'xxx' to prevent syntax matching.
```python
>>> mute_string('"abc"')
'"xxx"'
>>> mute_string("'''abc'''")
"'''xxx'''"
>>> mute_string("r'abc'")
"r'xxx'"
```
pycodestyle.read_config (options, args, arglist, parser)
Read and parse configurations.
If a config file is specified on the command line with the `--config` option, then only it is used for configuration.
Otherwise, the user configuration (~/.config/pycodestyle) and any local configurations in the current directory or above will be merged together (in that order) using the read method of ConfigParser.
pycodestyle.process_options(arglist=None, parse_argv=False, config_file=None)
Process options passed either via arglist or command line args.

Passing in the `config_file` parameter allows other tools, such as flake8, to specify their own options to be processed in pycodestyle.

pycodestyle.register_check(func_or_cls, codes=None)
Register a new check object.
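For example, a custom physical-line check could be registered like this (the check itself, its X101 code, and `myfile.py` are made up for this sketch):

```python
import pycodestyle

def trailing_todo(physical_line):
    """Flag lines that end with a TODO marker.

    Okay: x = 1
    X101: x = 1  # TODO
    """
    stripped = physical_line.rstrip()
    if stripped.endswith('TODO'):
        return len(stripped) - 4, "X101 line ends with a TODO marker"

pycodestyle.register_check(trailing_todo, codes=['X101'])

style = pycodestyle.StyleGuide(select=['X101'])
report = style.check_files(['myfile.py'])
```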
4 Developer's notes

4.1 Source code
The source code is currently available on GitHub under the terms and conditions of the Expat license. Fork away!
- Source code and issue tracker on GitHub.
- Continuous tests against Python 2.7 and 3.4+, as well as the nightly Python build and PyPy, on the Travis CI platform.
4.2 Direction
Some high-level aims and directions to bear in mind for contributions:
- `pycodestyle` is intended to be as fast as possible. Using the `ast` module defeats that purpose. The `pep8-naming` plugin exists for this sort of functionality.
- If you want to provide extensibility / plugins, please see flake8; pycodestyle doesn't want or need a plugin architecture.
- `pycodestyle` aims to have no external dependencies.
4.3 Contribute
You can add checks to this program by writing plugins. Each plugin is a simple function that is called for each line of source code, either physical or logical.
Physical line:

- Raw line of text from the input file.

Logical line:

- Multi-line statements converted to a single line.
- Stripped left and right.
- Contents of strings replaced with "xxx" of same length.
- Comments removed.
The check function requests physical or logical lines by the name of the first argument:
```python
def maximum_line_length(physical_line)
def extraneous_whitespace(logical_line)
def blank_lines(logical_line, blank_lines, indent_level, line_number)
```
The last example above demonstrates how check plugins can request additional information with extra arguments. All attributes of the `Checker` object are available. Some examples:
- `lines`: a list of the raw lines from the input file
- `tokens`: the tokens that contribute to this logical line
- `line_number`: line number in the input file
- `total_lines`: number of lines in the input file
- `blank_lines`: blank lines before this one
- `indent_char`: indentation character in this file (" " or "\t")
- `indent_level`: indentation (with tabs expanded to multiples of 8)
- `previous_indent_level`: indentation on previous line
- `previous_logical`: previous logical line
Check plugins can also maintain per-file state. If you need this, declare a parameter named `checker_state`. You will be passed a dict, which will be the same one for all lines in the same file but a different one for different files. Each check plugin gets its own dict, so you don’t need to worry about clobbering the state of other plugins.
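A minimal sketch of a stateful check (the X201 code is made up for this example):

```python
def first_import_only(logical_line, checker_state):
    """Report any import after the first one in a file."""
    if logical_line.startswith(('import ', 'from ')):
        if checker_state.get('seen_import'):
            yield 0, "X201 additional import line"
        checker_state['seen_import'] = True
```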
The docstring of each check function shall be the relevant part of text from PEP 8. It is printed if the user enables `--show-pep8`. Several docstrings contain examples directly from the PEP 8 document.
```python
Okay: spam(ham[1], {eggs: 2})
E201: spam( ham[1], {eggs: 2})
```
These examples are verified automatically when `pycodestyle.py` is run with the `--doctest` option. You can add examples for your own check functions. The format is simple: "Okay" or error/warning code followed by colon and space, the rest of the line is example source code. If you put 'r' before the docstring, you can use \n for newline and \t for tab.
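For instance, a raw docstring combining both forms might look like this (an illustrative check, not one shipped with the library):

```python
def no_tab_indentation(physical_line):
    r"""Docstring examples may use \n for newline and \t for tab.

    Okay: if True:\n    return
    W191: if True:\n\treturn
    """
```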
Then be sure to pass the tests:
```bash
$ python pycodestyle.py --testsuite testsuite
$ python pycodestyle.py --doctest
$ python pycodestyle.py --verbose pycodestyle.py
```
When contributing to pycodestyle, please observe our Code of Conduct.
To run the tests, the core developer team and Travis CI use tox:
```bash
$ pip install -r dev-requirements.txt
$ tox
```
All the tests should pass for all available interpreters.
4.4 Changes
4.4.1 2.5.0 (2019-01-29)
New checks:
- E117: Over-indented code blocks
- W505: Maximum doc-string length only when configured with --max-doc-length
Changes:
- Remove support for EOL Python 2.6 and 3.3. PR #720.
- Add E117 error for over-indented code blocks.
- Allow W605 to be silenced by # noqa and fix the position reported by W605
- Allow users to omit blank lines around one-liner definitions of classes and functions
- Include the function return annotation (->) as requiring surrounding whitespace only on Python 3
- Verify that only names can follow await. Previously we allowed numbers and strings.
- Add support for Python 3.7
- Fix detection of annotated argument defaults for E252
- Correct the position reported by W504
4.4.2 2.4.0 (2018-04-10)
New checks:
- Add W504 warning for checking that a break doesn’t happen after a binary operator. This check is ignored by default. PR #502.
- Add W605 warning for invalid escape sequences in string literals. PR #676.
- Add W606 warning for ‘async’ and ‘await’ reserved keywords being introduced in Python 3.7. PR #684.
- Add E252 error for missing whitespace around equal sign in type annotated function arguments with defaults values. PR #717.
Changes:
- An internal bisect search has replaced a linear search in order to improve efficiency. PR #648.
- pycodestyle now uses PyPI trove classifiers in order to document supported python versions on PyPI. PR #654.
- ‘setup.cfg’ ‘[wheel]’ section has been renamed to ‘[bdist_wheel]’, as the former is legacy. PR #653.
- pycodestyle now handles very long lines much more efficiently for python 3.2+. Fixes #643. PR #644.
- You can now write `pycodestyle.StyleGuide(verbose=True)` instead of `pycodestyle.StyleGuide(paths=['-v'])` in order to achieve verbosity. PR #663.
- The distribution of pycodestyle now includes the license text in order to comply with open source licenses which require this. PR #694.
- 'maximum_line_length' now ignores shebang ('#!') lines. PR #736.
- Add configuration option for the allowed number of blank lines. It is implemented as a top level dictionary which can be easily overwritten. Fixes #732. PR #733.

Bugs:

- Prevent a 'DeprecationWarning', and a 'SyntaxError' in future python, caused by an invalid escape sequence. PR #625.
- Correctly report E501 when the first line of a docstring is too long. Resolves #622. PR #630.
- Support variable annotation when the variable starts with a keyword, such as class variable type annotations in python 3.6. PR #640.
- pycodestyle internals have been changed in order to allow 'python3 -m cProfile' to report correct metrics. PR #647.
- Fix a spelling mistake in the description of E722. PR #697.
- 'pycodestyle --diff' now does not break if your 'gitconfig' enables 'mnemonicprefix'. PR #706.
4.4.3 2.3.1 (2017-01-31)
Bugs:
• Fix regression in detection of E302 and E306; #618, #620
4.4.4 2.3.0 (2017-01-30)
New Checks:
• Add E722 warning for bare except clauses
• Report E704 for async function definitions (async def)
Bugs:
• Fix another E305 false positive for variables beginning with “class” or “def”
• Fix detection of multiple spaces between async and def
• Fix handling of variable annotations. Stop reporting E701 on Python 3.6 for variable annotations.
4.4.5 2.2.0 (2016-11-14)
Announcements:
• Added Make target to obtain proper tarball file permissions; #599
Bugs:
• Fixed E305 regression caused by #400; #593
4.4.6 2.1.0 (2016-11-04)
Announcements:
• Change all references to the pep8 project to say pycodestyle; #530
Changes:
• Report E302 for blank lines before an “async def”; #556
• Update our list of tested and supported Python versions which are 2.6, 2.7, 3.2, 3.3, 3.4 and 3.5 as well as the nightly Python build and PyPy.
• Report E742 and E743 for functions and classes badly named ‘l’, ‘O’, or ‘I’.
• Report E741 on ‘global’ and ‘nonlocal’ statements, as well as prohibited single-letter variables.
• Deprecated use of `[pep8]` section name in favor of `[pycodestyle]`; #591
• Report E722 when bare except clause is used; #579
Bugs:
• Fix opt_type AssertionError when using Flake8 2.6.2 and pycodestyle; #561
• Require two blank lines after toplevel def, class; #536
• Remove accidentally quadratic computation based on the number of colons. This will make pycodestyle faster in some cases; #314
4.4.7 2.0.0 (2016-05-31)
Announcements:
• Repository renamed to pycodestyle; Issue #466 / #481.
• Added joint Code of Conduct as member of PyCQA; #483
Changes:
• Added tox test support for Python 3.5 and pypy3
• Added check E275 for whitespace on from ... import ... lines; #489 / #491
• Added W503 to the default ignore list; #498
• Removed use of project level .pep8 configuration file; #364
Bugs:
• Fixed bug with treating ~ operator as binary; #383 / #384
• Identify binary operators as unary; #484 / #485
4.4.8 1.7.0 (2016-01-12)
Announcements:
• Repository moved to PyCQA Organization on GitHub: https://github.com/pycqa/pep8
Changes:
• Reverted the fix in #368, “options passed on command line are only ones accepted” feature. This has many unintended consequences in pep8 and flake8 and needs to be reworked when I have more time.
• Added support for Python 3.5. (Issue #420 & #459)
• Added support for multi-line config_file option parsing. (Issue #429)
• Improved parameter parsing. (Issues #420 & #456)
Bugs:
• Fixed BytesWarning on Python 3. (Issue #459)
4.4.9 1.6.2 (2015-02-15)
Changes:
• Added check for breaking around a binary operator. (Issue #197, Pull #305)
Bugs:
• Restored config_file parameter in process_options(). (Issue #380)
4.4.10 1.6.1 (2015-02-08)
Changes:
• Assign variables before referenced. (Issue #287)
Bugs:
• Exception thrown due to unassigned local_dir variable. (Issue #377)
4.4.11 1.6.0 (2015-02-06)
News:
• Ian Lee <ianlee1521@gmail.com> joined the project as a maintainer.
Changes:
• Report E731 for lambda assignment. (Issue #277)
• Report E704 for one-liner def instead of E701. Do not report this error in the default configuration. (Issue #277)
• Replace codes E111, E112 and E113 with codes E114, E115 and E116 for bad indentation of comments. (Issue #274)
• Report E266 instead of E265 when the block comment starts with multiple #. (Issue #270)
• Report E402 for import statements not at the top of the file. (Issue #264)
• Do not enforce whitespaces around ** operator. (Issue #292)
• Strip whitespace from around paths during normalization. (Issue #339 / #343)
• Update --format documentation. (Issue #198 / Pull Request #310)
• Add tox/ to default excludes. (Issue #335)
• Do not report E121 or E126 in the default configuration. (Issues #256 / #316)
• Allow spaces around the equals sign in an annotated function. (Issue #357)
• Allow trailing backslash if in an inline comment. (Issue #374)
• If --config is used, only that configuration is processed. Otherwise, the user and local configurations are merged. (Issue #368 / #369)
Bug fixes:
• Don’t crash if Checker.build_tokens_line() returns None. (Issue #306)
• Don’t crash if os.path.expanduser() throws an ImportWarning. (Issue #297)
• Missing space around keyword parameter equal not always reported, E251. (Issue #323)
• Fix false positive E711/E712/E713. (Issues #330 and #336)
• Do not skip physical checks if the newline is escaped. (Issue #319)
• Flush sys.stdout to avoid race conditions with printing. See flake8 bug: https://gitlab.com/pycqa/flake8/issues/17 for more details. (Issue #363)
4.4.12 1.5.7 (2014-05-29)
Bug fixes:
• Skip the traceback on “Broken pipe” signal. (Issue #275)
• Do not exit when an option in setup.cfg or tox.ini is not recognized.
• Check the last line even if it does not end with a newline. (Issue #286)
• Always open files in universal newlines mode in Python 2. (Issue #288)
4.4.13 1.5.6 (2014-04-14)
Bug fixes:
• Check the last line even if it has no end-of-line. (Issue #273)
4.4.14 1.5.5 (2014-04-10)
Bug fixes:
• Fix regression with E22 checks and inline comments. (Issue #271)
4.4.15 1.5.4 (2014-04-07)
Bug fixes:
• Fix negative offset with E303 before a multi-line docstring. (Issue #269)
4.4.16 1.5.3 (2014-04-04)
Bug fixes:
• Fix wrong offset computation when error is on the last char of a physical line. (Issue #268)
4.4.17 1.5.2 (2014-04-04)
Changes:
• Distribute a universal wheel file.
Bug fixes:
• Report correct line number for E303 with comments. (Issue #60)
• Do not allow newline after parameter equal. (Issue #252)
• Fix line number reported for multi-line strings. (Issue #220)
• Fix false positive E121/E126 with multi-line strings. (Issue #265)
• Fix E501 not detected in comments with Python 2.5.
• Fix caret position with `--show-source` when line contains tabs.
4.4.18 1.5.1 (2014-03-27)
Bug fixes:
• Fix a crash with E125 on multi-line strings. (Issue #263)
4.4.19 1.5 (2014-03-26)
Changes:
• Report E129 instead of E125 for visually indented line with same indent as next logical line. (Issue #126)
• Report E265 for space before block comment. (Issue #190)
• Report E713 and E714 when operators `not in` and `is not` are recommended. (Issue #236)
• Allow long lines in multiline strings and comments if they cannot be wrapped. (Issue #224).
• Optionally disable physical line checks inside multiline strings, using `# noqa`. (Issue #242)
• Change text for E121 to report “continuation line under-indented for hanging indent” instead of indentation not being a multiple of 4.
• Report E131 instead of E121 / E126 if the hanging indent is not consistent within the same continuation block. It helps when error E121 or E126 is in the `ignore` list.
• Report E126 instead of E121 when the continuation line is hanging with extra indentation, even if indentation is not a multiple of 4.
Bug fixes:
• Allow the checkers to report errors on empty files. (Issue #240)
• Fix ignoring too many checks when `--select` is used with codes declared in a flake8 extension. (Issue #216)
• Fix regression with multiple brackets. (Issue #214)
• Fix `StyleGuide` to parse the local configuration if the keyword argument `paths` is specified. (Issue #246)
• Fix a false positive E124 for hanging indent. (Issue #254)
• Fix a false positive E126 with embedded colon. (Issue #144)
• Fix a false positive E126 when indenting with tabs. (Issue #204)
• Fix behaviour when `exclude` is in the configuration file and the current directory is not the project directory. (Issue #247)
• The logical checks can return `None` instead of an empty iterator. (Issue #250)
• Do not report multiple E101 if only the first indentation starts with a tab. (Issue #237)
• Fix a rare false positive W602. (Issue #34)
4.4.20 1.4.6 (2013-07-02)
Changes:
- Honor `# noqa` for errors E711 and E712. (Issue #180)
- When both a `tox.ini` and a `setup.cfg` are present in the project directory, merge their contents. The `tox.ini` file takes precedence (same as before). (Issue #182)
- Give priority to `--select` over `--ignore`. (Issue #188)
- Compare full path when excluding a file. (Issue #186)
- New option `--hang-closing` to switch to the alternative style of closing bracket indentation for hanging indent. Add error E133 for closing bracket which is missing indentation. (Issue #103)
- Accept both styles of closing bracket indentation for hanging indent. Do not report error E123 in the default configuration. (Issue #103)
Bug fixes:
- Do not crash when running AST checks and the document contains null bytes. (Issue #184)
- Correctly report other E12 errors when E123 is ignored. (Issue #103)
- Fix false positive E261/E262 when the file contains a BOM. (Issue #193)
- Fix E701, E702 and E703 not detected sometimes. (Issue #196)
- Fix E122 not detected in some cases. (Issue #201 and #208)
- Fix false positive E121 with multiple brackets. (Issue #203)
4.4.21 1.4.5 (2013-03-06)
- When no path is specified, do not try to read from stdin. The feature was added in 1.4.3, but it is not supported on Windows. Use `--filename` argument to read from stdin. This usage is supported since 1.3.4. (Issue #170)
- Do not require `setuptools` in setup.py. It works around an issue with `pip` and Python 3. (Issue #172)
- Add `__pycache__` to the ignore list.
- Change misleading message for E251. (Issue #171)
- Do not report false E302 when the source file has a coding cookie or a comment on the first line. (Issue #174)
- Reorganize the tests and add tests for the API and for the command line usage and options. (Issues #161 and #162)
- Ignore all checks which are not explicitly selected when `select` is passed to the `StyleGuide` constructor.
4.4.22 1.4.4 (2013-02-24)
- Report E227 or E228 instead of E225 for whitespace around bitwise, shift or modulo operators. (Issue #166)
- Change the message for E226 to make clear that it is about arithmetic operators.
- Fix a false positive E128 for continuation line indentation with tabs.
- Fix regression with the `--diff` option. (Issue #169)
- Fix the `TestReport` class to print the unexpected warnings and errors.
4.4.23 1.4.3 (2013-02-22)
- Hide the --doctest and --testsuite options when installed.
- Fix crash with AST checkers when the syntax is invalid. (Issue #160)
- Read from standard input if no path is specified.
- Initiate a graceful shutdown on Control+C.
- Allow changing the checker_class for the StyleGuide.
4.4.24 1.4.2 (2013-02-10)
- Support AST checkers provided by third-party applications.
- Register new checkers with register_check(func_or_cls, codes).
- Allow constructing a StyleGuide with a custom parser.
- Accept visual indentation without parenthesis after the if statement. (Issue #151)
- Fix UnboundLocalError when using # noqa with continued lines. (Issue #158)
- Re-order the lines for the StandardReport.
- Expand tabs when checking E12 continuation lines. (Issue #155)
- Refactor the testing class TestReport and the specific test functions into a separate test module.
4.4.25 1.4.1 (2013-01-18)
- Allow sphinx.ext.autodoc syntax for comments. (Issue #110)
- Report E703 instead of E702 for the trailing semicolon. (Issue #117)
- Honor # noqa in addition to # nopep8. (Issue #149)
- Expose the OptionParser factory for better extensibility.
4.4.26 1.4 (2012-12-22)
• Report E226 instead of E225 for optional whitespace around common operators (*, **, /, + and -). This new error code is ignored in the default configuration because PEP 8 recommends to “use your own judgement”. (Issue #96)
- Lines with a # nopep8 at the end will not issue errors on line length E501 or continuation line indentation E12*. (Issue #27)
- Fix AssertionError when the source file contains an invalid line ending "\r\n\n". (Issue #119)
- Read the [pep8] section of tox.ini or setup.cfg if present. (Issue #93 and #141)
- Add the Sphinx-based documentation, and publish it on https://pycodestyle.readthedocs.io/. (Issue #105)
4.4.27 1.3.4 (2012-12-18)
- Fix false positive E124 and E128 with comments. (Issue #100)
- Fix error on stdin when running with bpython. (Issue #101)
- Fix false positive E401. (Issue #104)
- Report E231 for nested dictionary in list. (Issue #142)
- Catch E271 at the beginning of the line. (Issue #133)
- Fix false positive E126 for multi-line comments. (Issue #138)
- Fix false positive E221 when operator is preceded by a comma. (Issue #135)
- Fix --diff failing on one-line hunk. (Issue #137)
- Fix the --exclude switch for directory paths. (Issue #111)
- Use --filename to read from standard input. (Issue #128)
4.4.28 1.3.3 (2012-06-27)
- Fix regression with continuation line checker. (Issue #98)
4.4.29 1.3.2 (2012-06-26)
- Revert to the previous behaviour for --show-pep8: do not imply --first. (Issue #89)
- Add E902 for IO errors. (Issue #87)
- Fix false positive for E121, and missed E124. (Issue #92)
- Set a sensible default path for config file on Windows. (Issue #95)
- Allow verbose in the configuration file. (Issue #91)
- Show the enforced max-line-length in the error message. (Issue #86)
4.4.30 1.3.1 (2012-06-18)
- Explain which configuration options are expected. Accept and recommend the options names with hyphen instead of underscore. (Issue #82)
- Do not read the user configuration when used as a module (except if config_file=True is passed to the StyleGuide constructor).
- Fix wrong or missing cases for the E12 series.
- Fix cases where E122 was missed. (Issue #81)
4.4.31 1.3 (2012-06-15)
Warning: The internal API is backwards incompatible.
• Remove global configuration and refactor the library around a StyleGuide class; add the ability to configure various reporters. (Issue #35 and #66)
• Read user configuration from ~/.config/pep8 and local configuration from ./.pep8. (Issue #22)
• Fix E502 for backslash embedded in multi-line string. (Issue #68)
• Fix E225 for Python 3 iterable unpacking (PEP 3132). (Issue #72)
• Enable the new checkers from the E12 series in the default configuration.
• Suggest less error-prone alternatives for E712 errors.
• Rewrite checkers to run faster (E22, E251, E27).
• Fixed a crash when parsed code is invalid (too many closing brackets).
• Fix E127 and E128 for continuation line indentation. (Issue #74)
• New option --format to customize the error format. (Issue #23)
• New option --diff to check only modified code. The unified diff is read from STDIN. Example: hg diff | pep8 --diff (Issue #39)
• Correctly report the count of failures and set the exit code to 1 when the --doctest or the --testsuite fails.
• Correctly detect the encoding in Python 3. (Issue #69)
• Drop support for Python 2.3, 2.4 and 3.0. (Issue #78)
4.4.32 1.2 (2012-06-01)
• Add E121 through E128 for continuation line indentation. These checks are disabled by default. If you want to force all checks, use switch --select=E,W. Patch by Sam Vilain. (Issue #64)
• Add E721 for direct type comparisons. (Issue #47)
• Add E711 and E712 for comparisons to singletons. (Issue #46)
• Fix spurious E225 and E701 for function annotations. (Issue #29)
• Add E502 for explicit line join between brackets.
• Fix E901 when printing source with --show-source.
• Report all errors for each checker, instead of reporting only the first occurrence for each line.
• Option --show-pep8 implies --first.
4.4.33 1.1 (2012-05-24)
• Add E901 for syntax errors. (Issues #63 and #30)
• Add E271, E272, E273 and E274 for extraneous whitespace around keywords. (Issue #57)
• Add tox.ini configuration file for tests. (Issue #61)
• Add .travis.yml configuration file for continuous integration. (Issue #62)
4.4.34 1.0.1 (2012-04-06)
• Fix inconsistent version numbers.
4.4.35 1.0 (2012-04-04)
- Fix W602 `raise` to handle multi-char names. (Issue #53)
4.4.36 0.7.0 (2012-03-26)
- Now --first prints only the first occurrence of each error. The --repeat flag becomes obsolete because it is the default behaviour. (Issue #6)
- Allow specifying --max-line-length. (Issue #36)
- Make the shebang more flexible. (Issue #26)
- Add testsuite to the bundle. (Issue #25)
- Fixes for Jython. (Issue #49)
- Add PyPI classifiers. (Issue #43)
- Fix the --exclude option. (Issue #48)
- Fix W602, accept `raise` with 3 arguments. (Issue #34)
- Correctly select all tests if DEFAULT_IGNORE == ''.
4.4.37 0.6.1 (2010-10-03)
- Fix inconsistent version numbers. (Issue #21)
4.4.38 0.6.0 (2010-09-19)
- Test suite reorganized and enhanced in order to check more failures with fewer test files. Read the run_tests docstring for details about the syntax.
- Fix E225: accept `print >>sys.stderr, "..."` syntax.
- Fix E501 for lines containing multibyte encoded characters. (Issue #7)
- Fix E221, E222, E223, E224 not detected in some cases. (Issue #16)
- Fix E211 to reject `v = dic['a'] ['b']`. (Issue #17)
- Exit code is always 1 if any error or warning is found. (Issue #10)
- --ignore checks are now really ignored, especially in conjunction with --count. (Issue #8)
- Blank lines with spaces yield W293 instead of W291: some developers want to ignore this warning and indent the blank lines to paste their code easily in the Python interpreter.
- Fix E301: do not require a blank line before an indented block. (Issue #14)
- Fix E203 to accept NumPy slice notation `a[0, :]`. (Issue #13)
- Performance improvements.
- Fix decoding and checking non-UTF8 files in Python 3.
- Fix E225: reject `True+False` when running on Python 3.
- Fix an exception when the line starts with an operator.
- Allow a new line before closing `)`, `}` or `]`. (Issue #5)
4.4.39 0.5.0 (2010-02-17)
- Changed the --count switch to print to sys.stderr and set exit code to 1 if any error or warning is found.
- E241 and E242 are removed from the standard checks. If you want to include these checks, use switch --select=E,W. (Issue #4)
- Blank line is not mandatory before the first class method or nested function definition, even if there’s a docstring. (Issue #1)
- Add the switch --version.
- Fix decoding errors with Python 3. (Issue #13)
- Add --select option which is the mirror of --ignore.
- Add checks E261 and E262 for spaces before inline comments.
- New check W604 warns about deprecated usage of backticks.
- New check W603 warns about the deprecated operator <>.
- Performance improvement, due to rewriting of E225.
- E225 now accepts:
– no whitespace after unary operator or similar. (Issue #9)
– lambda function with argument unpacking or keyword defaults.
- Reserve “2 blank lines” for module-level logical blocks. (E303)
- Allow multi-line comments. (E302, issue #10)
4.4.40 0.4.2 (2009-10-22)
- Decorators on classes and class methods are OK now.
4.4.41 0.4 (2009-10-20)
- Support for all versions of Python from 2.3 to 3.1.
- New and greatly expanded self tests.
- Added --count option to print the total number of errors and warnings.
- Further improvements to the handling of comments and blank lines. (Issue #1 and other changes.)
- Check all py files in directory when passed a directory (Issue #2). This also prevents an exception when traversing directories with non *.py files.
- E231 should allow commas to be followed by ). (Issue #3)
- Spaces are no longer required around the equals sign for keyword arguments or default parameter values.
4.4.42 0.3.1 (2009-09-14)
- Fixes for comments: do not count them when checking for blank lines between items.
- Added setup.py for pypi upload and easy_installability.
These issues refer to the previous issue tracker.
4.4.43 0.2 (2007-10-16)
- Loads of fixes and improvements.
4.4.44 0.1 (2006-10-01)
- First release.
- Online documentation: https://pycodestyle.readthedocs.io/
- Source code and issue tracker: https://github.com/pycqa/pycodestyle
Credits
Created by Johann C. Rocholl.
Maintained by Florent Xicluna and Ian Lee.
The pycodestyle library is provided under the terms and conditions of the Expat license:
```verbatim
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
```
Extending Desbordante with Probabilistic Functional Dependency Discovery Support
Ilia Barutkin, Maxim Fofanov, Sergey Belokonny, Vladislav Makeev, George Chernishev
Saint-Petersburg University
Saint-Petersburg, Russia
{ilia.d.barutkin, max.fofanov, belokoniy, makeev.vladislav.d, chernishev}@gmail.com
Abstract—Data profiling aims to extract complex patterns from data for further analysis and to use them in domains such as data cleaning, data deduplication, anomaly detection, and many more.
Functional dependencies (FDs) are one of the most well-known patterns. However, they are poorly suited for these tasks, as real data is usually dirty, and the rigid definition of FDs does not allow algorithms to locate them. For this reason, there are several formulations aimed at relaxing FDs to support dirty data, with approximate functional dependency (AFD) being the most popular one. Another formulation is the Probabilistic Functional Dependency (pFD), which we aim to support inside Desbordante — a science-intensive, high-performance and open-source data profiling tool implemented in C++. However, pFDs are relatively poorly studied, compared to AFDs.
In this paper we study pFDs, both analytically and empirically. We start by assessing how different pFDs and AFDs are by studying cases in which pFDs have an edge over AFDs. Then, we implement an algorithm for pFD discovery and study its run time and memory consumption. We also compare it with an AFD discovery algorithm. Lastly, we study the output of both algorithms to learn whether or not it is possible to use an AFD discovery algorithm to get pFDs and vice versa.
I. INTRODUCTION
Currently, growing volumes of data pose a serious challenge to data analysts. Data, however, offers only a moderate value in and of itself; it is instead the facts contained within that data that are of interest to analysts. The volumes of data in question far exceed the size that could be grasped by the human eye, so automatic approaches are more and more in demand.
Data profiling [1] aims to extract facts from data. There are two kinds of data profiling — naive and science-intensive. The naive approach concerns itself with simple statistics, such as the number of rows and columns, the number of nulls in them, their mean and variance, etc. There are dozens of tools for this kind of profiling. On the other hand, science-intensive profiling aims to extract complex patterns represented by structures which we will refer to as primitives. Examples of such patterns are database dependencies (functional [2], inclusion [3]), association rules [4], algebraic constraints [5], inferred semantic data types [6], and others. Such patterns have many applications:
• for scientific data, they may indicate a presence of some regularity [7], which may promote the formulation of a hypothesis, which, in turn, may lead to a scientific discovery;
• for business data, it is possible [8] to use the discovered primitives for cleaning errors in data, finding inexact duplicates, performing schema matching, finding outliers, and solving many other problems;
• for machine learning, data primitives can help in feature engineering and in choosing the direction for the ablation study;
• for databases, they can help with validating and discovering various advanced integrity constraints.
Extracting and validating primitives is computationally expensive, which becomes a serious issue with the scaling of datasets. Therefore, it requires complex algorithms and efficient implementations. These are some of the major contributing factors as to why such kind of profiling is now a developing area and why science-intensive profilers are rare. Currently, there exist two science-intensive data profilers — Metanome [9] and Desbordante.
Desbordante (Spanish for boundless) [10] is a science-intensive, high-performance and open-source data profiling tool implemented in C++. To the best of our knowledge, Desbordante is currently the only profiler that possesses these three qualities. It is capable of discovering and validating many primitives, including functional dependencies (both exact and approximate), conditional functional dependencies, metric functional dependencies, and others. The full list can be found on the web-site [11].
One of the well-known primitives is the functional dependency, which states that if two records of the table are equal in attribute X, then they should be equal in attribute Y. The formal definition is given in Section II.
Primitives can be classified into three groups:
1) Exact by definition. These primitives define instances which hold over the whole dataset. Classic functional dependency is an example of the exact primitive.
2) Approximate by definition. In this case, approximate means that found instances hold over the whole dataset, but with some degree of error predefined by the user at the start of the algorithm. Thus, there are records in the dataset that may not conform to the exact definition.
3) Approximate by discovery procedure. In this case approximate means that the discovery algorithm returns primitive instances that may or may not hold. While such instances require verification, such an approach may be of use, as it allows speeding up discovery by up to an order of magnitude [12], [13].
In this paper, we will only consider dependencies that are approximate by definition, also called relaxed dependencies [14]. Such dependencies are of a particular interest for the end-users of science-intensive profilers. There is a simple reason for this: real-life data is always dirty — it contains inconsistencies, missing values, and other artifacts. Therefore, exact dependencies rarely hold on such data and a discovery algorithm will not locate them.
For functional dependencies, there are several approximate variants that are built upon the family of $g_1, g_2, g_3$ metrics, proposed by J. Kivinen and H. Mannila in their seminal paper “Approximate inference of functional dependencies from relations” [15]. The most well-known variant is Approximate Functional Dependency (AFD) [16], which is based on an adaptation of the $g_1$ metric for defining maximum permissible error. One of the alternatives is the Probabilistic Functional Dependency [17], [18], which uses $g_3$.
We are considering the addition of pFD discovery functionality to Desbordante. Before this, it is necessary to evaluate pFDs, since they are significantly less studied than AFDs. At the same time, Desbordante supports discovery and validation of FDs and AFDs, so it is natural to compare them with each other. Our goal is twofold: firstly, it is essential to study how expensive pFDs are in terms of run time and memory consumption when compared to exact approaches. Secondly, it is also necessary to understand if pFD support provides value to the end-user. This includes answering the questions “how different are pFDs from AFDs” and “how do dependencies of both types returned by discovery algorithms relate to each other”.
Overall, in our study we pose the following research questions (RQs):
**RQ1** Are pFDs of interest to the end-user? How different are they from AFDs, what kind of FD violations are they more tolerant to? Does this definition allow for discovery of dependencies that could not be discovered by the AFD definition? And vice versa: how many AFDs are lost by it?
**RQ2** How computationally expensive is the candidate validation procedure of pFD discovery algorithm compared to the AFD?
**RQ3** How does maximum error threshold affect run time and memory consumption of the pFD discovery algorithm?
**RQ4** What are the run time and memory expenses of pFD discovery, compared to AFD?
Overall, the contribution of the paper is the following:
- A discussion of pFDs, their comparison with AFDs.
- A survey of approximate primitives that are based on $g_1, g_2, g_3$ metrics, as it is the basis for the majority of existing approximate primitives, including AFDs and pFDs.
- An open-source C++ implementation of a pFD discovery algorithm, which — to the best of our knowledge — is the only one currently available.
- An empirical evaluation of the pFD discovery algorithm, and its comparison with the AFD one.
This paper is organized as follows. In Section II we formally present pFDs and AFDs. We compare them and discuss their differences, while providing examples. Next, in Section III we discuss related work concerning approximate dependencies based on $g_1, g_2, g_3$ metrics. In Section IV we describe the algorithm and discuss our modifications. We evaluate our implementation and compare pFD and AFD discovery in Section V. We conclude this paper with Section VI.
## II. Background
Let us start with basic definitions imperative to understanding the paper’s context.
A functional dependency [19] over a relation $R$ is an expression denoted as $X \rightarrow A$, where $X \subseteq R$ and $A \in R$. We also denote the set $X$ as the left-hand side (LHS), and the attribute $A$ as the right-hand side (RHS). The dependency is satisfied if, for all pairs of tuples $t, u \in r$, the following holds: if $\forall B \in X\;(t[B] = u[B])$, then $t[A] = u[A]$; equivalently, whenever $t$ and $u$ agree on $X$, they also agree on $A$. In this case, we also say that the functional dependency is correct or holds.
Let a relation $r$ over schema $R$ be given. Then we assert that a pair $(u, v)$ of tuples from $r$ violates the dependency $X \rightarrow Y$, or, equivalently, is a violating pair, if $u[X] = v[X]$, but $u[Y] \neq v[Y]$. From this, it can be concluded that the dependency holds on the relation if the relation contains no violating pairs. A tuple $u$ is termed violating if it is a part of a violating pair.
A relaxed functional dependency is a functional dependency that is almost satisfied. An example of this could be the relationship between columns “phone number” and “department”, as several departments within a company may share the same phone, albeit rarely. There are several ways to define the relaxation of functional dependency. The first one is the notion of Approximate Functional Dependency. In the original TANE paper [19] the authors proposed to use the $g_3$ metric to define and discover AFDs, which is as follows:
$$g_3(X \rightarrow Y, r) = 1 - \frac{\max\{\,|s| : s \subseteq r,\; s \models X \rightarrow Y\,\}}{|r|}$$
Almost two decades later S. Kruse and F. Naumann [16] developed PYRO — a novel AFD discovery algorithm. However, they used a modified $g_1$ metric for their AFD definition, which is as follows:
$$g_1'(X \rightarrow Y, r) = \frac{|\{(t_1, t_2) \in r^2 | t_1[X] = t_2[X] \land t_1[Y] \neq t_2[Y]\}|}{|r|^2 - |r|}$$
Thus, currently there exist two algorithms for AFD discovery and two AFD definitions. The metrics used in these definitions can be plugged into both existing algorithms.
Desbordante has both TANE and PYRO implementations [20]. When starting our project, we decided to stick to the modern definition, and thus in this paper we consider AFDs that are based on the modified $g_1$ metric. It is also worth mentioning that our implementation of TANE is a modified one, similarly to the TANE implementation in the Metanome project [21].
The second relaxation approach is the Probabilistic Functional Dependency, which is defined as follows. Let $R$ be a relation, $X$ — a set of attributes, and $A$ — an attribute in $R$. A probabilistic functional dependency [17] is denoted as $pFD : X \xrightarrow{p} A$, where $p$ is the likelihood of $X \rightarrow A$ being correct.
To define said probability, let $D_X = \{\, t[X] \mid t \in R \,\}$ be the set of distinct values of $X$ occurring in $R$. For each value $V_X \in D_X$, slightly abusing notation, we also write $V_X$ for the set of tuples carrying this value, $\{\, t \in R \mid t[X] = V_X \,\}$, and we denote by $V_{Y,V_X}$ its largest subset that agrees on $Y$, i.e. the tuples of $V_X$ holding its most frequent $Y$ value. The probability of the dependency holding on the subset of tuples with the value of attribute $X$ equal to $V_X$ is then defined as $P(X \rightarrow Y, V_X) = |V_{Y,V_X}| / |V_X|$.
Finally, the probability of a functional dependency between attributes $X$ and $Y$ in $R$ is defined via two formulas, namely PerValue and PerTuple:
$$P_{\text{PerValue}}(X \rightarrow Y, R) = \frac{\sum_{V_X \in D_X} P(X \rightarrow Y, V_X)}{|D_X|}$$
$$P_{\text{PerTuple}}(X \rightarrow Y, R) = \sum_{V_X \in D_X} \frac{|V_X|}{|R|} P(X \rightarrow Y, V_X)$$
It is evident that PerValue is the average of the probabilities of the dependency being correct for each distinct value of $X$, whereas the PerTuple metric accounts for the frequency of values of $X$ amongst all tuples in a relation. We also say that a pFD $X \xrightarrow{p} Y$ is minimal if, for any proper subset $X' \subset X$, $X' \xrightarrow{p} Y$ does not hold. A pFD is called trivial if $Y \in X$.
Note that it is possible to add an attribute to the LHS of a pFD, and the resulting dependency will remain a pFD, if the PerTuple metric is used. The same holds true for AFDs and their $g_1$ metric. However, this is not always true in the case of the PerValue metric.
Finally, it is evident that the PerTuple metric is equivalent to $g_3$, restated in terms of probability:
$$P_{\text{PerTuple}}(X \rightarrow Y, R) = 1 - g_3(X \rightarrow Y, R)$$
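To make the two metrics concrete, the following minimal C++ sketch (our illustration, not Desbordante code; all identifiers are ours) computes both probabilities for a toy relation by grouping tuples on $X$ and taking the most frequent $Y$ value inside each group:

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Tuple {
    std::string x;
    std::string y;
};

int main() {
    // Toy relation: the value "a" of X maps to "1" twice and to "2" once.
    std::vector<Tuple> r = {{"a", "1"}, {"a", "1"}, {"a", "2"},
                            {"b", "3"}, {"b", "3"}};
    // Group tuples on X, then count the Y values inside each X-cluster.
    std::map<std::string, std::map<std::string, int>> clusters;
    for (const auto& t : r) ++clusters[t.x][t.y];

    double per_value = 0.0, per_tuple = 0.0;
    for (const auto& [x, y_counts] : clusters) {
        int cluster_size = 0, max_count = 0;
        for (const auto& [y, count] : y_counts) {
            cluster_size += count;
            max_count = std::max(max_count, count);
        }
        // P(X -> Y, V_X): share of the most frequent Y value in the cluster.
        double p = static_cast<double>(max_count) / cluster_size;
        per_value += p;                            // averaged over |D_X| below
        per_tuple += p * cluster_size / r.size();  // weighted by cluster frequency
    }
    per_value /= clusters.size();
    std::cout << "PerValue = " << per_value << '\n';  // (2/3 + 1) / 2 = 5/6
    std::cout << "PerTuple = " << per_tuple << '\n';  // 2/3 * 3/5 + 1 * 2/5 = 0.8
}
```

Note that the printed PerTuple value, 0.8, illustrates the identity above: removing the single ("a", "2") tuple makes the dependency exact, so $g_3 = 1/5$ and $1 - g_3 = 0.8$.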
### III. RELATED WORK
In the world of relaxed dependencies, there are three major metrics used for defining how well a given relaxed dependency holds on a particular dataset. They are called $g_1, g_2, g_3$ and were proposed by J. Kivinen and H. Mannila in “Approximate inference of functional dependencies from relations” [15]. Despite the fact that the original paper considers relaxed functional dependencies, the concept is easily generalized owing to the flexibility of the provided definitions. As a result, these metrics gave rise to many other types of relaxed dependencies, which we are going to survey in this paper.
### A. $g_1$ and $g_2$ metrics
Let $G_1$ be defined as the number of violating pairs for the dependency $X \rightarrow Y$ in the relation $r$:
$$G_1(X \rightarrow Y, r) = |\{(u, v) | u, v \in r, u[X] = v[X] \land u[Y] \neq v[Y]\}|$$
Then, the metric $g_1$ represents a normalized version of $G_1$.
$$g_1(X \rightarrow Y, r) = G_1(X \rightarrow Y, r) / |r|^2$$
$G_2$ is the number of violating tuples for the dependency $X \rightarrow Y$ in the relation $r$.
$$G_2(X \rightarrow Y, r) = |\{\, u \in r \mid \exists v \in r:\; u[X] = v[X] \land u[Y] \neq v[Y] \,\}|$$
The metric $g_2$, in turn, represents a normalized version of $G_2$.
$$g_2(X \rightarrow Y, r) = G_2(X \rightarrow Y, r) / |r|$$
The $g_1$ and $g_2$ metrics, as shown in the previously mentioned paper [15], are applied for defining approximate functional dependencies. However, due to their poorer generalizability and greater computational complexity, they are not as widely used as $g_3$ [22].
**TABLE I. EXAMPLE OF $g_1$, $g_2$ AND $g_3$**
<table>
<thead>
<tr>
<th>X</th>
<th>Y</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
</tr>
<tr>
<td>a</td>
<td>3</td>
</tr>
<tr>
<td>b</td>
<td>2</td>
</tr>
<tr>
<td>c</td>
<td>3</td>
</tr>
<tr>
<td>d</td>
<td>4</td>
</tr>
</tbody>
</table>
Consider an example presented in Table I. In case of $g_1$ its value is calculated as follows. Since there is only a single violating pair $\{(a, 1), (a, 3)\}$, we get:
$$g_1(X \rightarrow Y, r) = \frac{1}{5^2} = 0.04.$$
For $g_2$, there are two tuples that agree on the left-hand side but differ on the right-hand side, $(a, 1)$ and $(a, 3)$, hence the value of the metric $g_2$ being:
$$g_2(X \rightarrow Y, r) = \frac{2}{5} = 0.4.$$
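Both values can be reproduced by a brute-force enumeration of tuple pairs; the sketch below (ours, for illustration only) recomputes them for Table I:

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    // The five tuples of Table I as (X, Y) pairs.
    std::vector<std::pair<std::string, int>> r = {
        {"a", 1}, {"a", 3}, {"b", 2}, {"c", 3}, {"d", 4}};

    std::size_t violating_pairs = 0;               // for G1 (unordered pairs)
    std::vector<bool> violating(r.size(), false);  // for G2 (tuples)
    for (std::size_t i = 0; i < r.size(); ++i) {
        for (std::size_t j = i + 1; j < r.size(); ++j) {
            if (r[i].first == r[j].first && r[i].second != r[j].second) {
                ++violating_pairs;
                violating[i] = violating[j] = true;
            }
        }
    }
    std::size_t violating_tuples = 0;
    for (bool v : violating) violating_tuples += v;

    double n = static_cast<double>(r.size());
    std::cout << "g1 = " << violating_pairs / (n * n) << '\n';  // 1 / 25 = 0.04
    std::cout << "g2 = " << violating_tuples / n << '\n';       // 2 / 5  = 0.4
}
```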
Now, let us consider various relaxed dependencies that are based on either $g_1$ or $g_2$.
### 1. Approximate functional dependencies
We have discussed the notion of AFDs in the Background section. An AFD example is presented in Table II.
PYRO [16] is an algorithm for the discovery of AFDs that are based on the modern definition. In this algorithm, an adaptation of the $g_1$ metric, referred to by the authors as $e$ and defined in Section II, is employed. PYRO demonstrates excellent performance due to employing several interesting optimizations, one of them being the error calculation approach.
Let \( r \) be a relation with schema \( R \) and \( X \subseteq R \) be a set of attributes. A cluster is defined as the set of all tuple indices from \( r \) that have identical values for \( X \), or \( c(t) = \{ i | t_i[X] = t[X] \} \). The PLI for \( X \) is all such sets, excluding singleton clusters:
\[
\hat{\pi}(X) = \{ c(t) | t \in r \land |c(t)| > 1 \}
\]
The size of the resultant index is denoted as \( ||\hat{\pi}(X)|| = \sum_{c \in \hat{\pi}(X)} |c| \).
Hence, the calculation of the error metric \( e \) is as follows: tuple pairs that agree on \( X \) and disagree on \( A \) (for the candidate \( X \rightarrow A \)) are considered violating pairs, which need to be counted. However, instead of counting them directly, PYRO employs a more efficient method. For each cluster of \( \hat{\pi}(X) \), the number of tuple pairs that also agree on \( A \) is calculated, and this result is then subtracted from the total number of tuple pairs in the cluster. This is achieved through \( v_A \), a vector that stores, for each tuple index, an integer identifier of its value of \( A \). Summing up the errors for each cluster yields the final error. The pseudocode for this algorithm is presented in Algorithm 1.
Algorithm 1 Calculation of \( e \) for AFD using PYRO
Require: Stripped partition \( \hat{\pi}(X) \), values of attribute \( A \) as \( v_A \)
Ensure: Metric \( e \) for AFD
$e \leftarrow 0$
for each cluster $c \in \hat{\pi}(X)$ do
    $\text{counter} \leftarrow$ dictionary with default value 0
    for each item $i \in c$ do
        if $v_A[i] \neq 0$ then
            $\text{counter}[v_A[i]] \leftarrow \text{counter}[v_A[i]] + 1$
        end if
    end for
    $e \leftarrow e + |c|^2 - |c| - \sum_{a \in \text{counter}} \left(\text{counter}[a]^2 - \text{counter}[a]\right)$
end for
return $e$
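A C++ rendering of this computation could look as follows. This is our sketch under the stated representation: pli_x holds the clusters of a stripped partition of $X$ as tuple indices, v_a[i] is the value identifier of tuple $i$ on $A$ with 0 marking values occurring only once (they can never form an agreeing pair), and the final division by $|r|^2 - |r|$ yields the metric $e$:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Error e for the candidate X -> A, following Algorithm 1: count ordered
// tuple pairs that agree on X but disagree on A, then normalize.
double PyroError(const std::vector<std::vector<int>>& pli_x,
                 const std::vector<std::uint32_t>& v_a,
                 std::size_t num_tuples) {
    std::uint64_t violating = 0;
    for (const auto& cluster : pli_x) {
        std::unordered_map<std::uint32_t, std::uint64_t> counter;
        for (int i : cluster) {
            if (v_a[i] != 0) ++counter[v_a[i]];  // skip singleton values of A
        }
        std::uint64_t agreeing = 0;  // ordered pairs that also agree on A
        for (const auto& [value, count] : counter) {
            agreeing += count * count - count;
        }
        const std::uint64_t size = cluster.size();
        violating += size * size - size - agreeing;
    }
    const double n = static_cast<double>(num_tuples);
    return static_cast<double>(violating) / (n * n - n);
}
```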
2. Approximate unique column combinations. Approximate Unique Column Combinations (AUCCs) represent another type of relaxed dependency that can be discovered using the PYRO algorithm.
Let \( r \) be a relation with schema \( R \) and an attribute set \( X \subseteq R \). According to [16], \( X \) is a Unique Column Combination (UCC) if all pairs of distinct tuples \( t_1, t_2 \in r \) satisfy \( t_1[X] \neq t_2[X] \).
The error metric for AUCC is defined as follows:
\[
e(X, r) = \frac{|\{\,(t_1, t_2) \in r^2 \mid t_1 \neq t_2 \land t_1[X] = t_2[X]\,\}|}{|r|^2 - |r|}
\]
For Approximate UCC, unlike AFD, the error calculation over \( \hat{\pi}(X) \) is trivial. This happens because all tuple pairs within each cluster are themselves violating pairs.
Algorithm 2 Calculation of \( e \) for AUCC using PYRO
Require: Set of tuples \( \pi(X) \), total number of tuples \( |r| \)
Ensure: Metric \( e \) for AUCC
\[
e \leftarrow \sum_{c \in \pi(X)} \frac{|c|^2 - |c|}{|r|^2 - |r|}
\]
return \( e \)
Approximate Unique Column Combinations are utilized in tasks such as data cleaning, database normalization, and query optimization.
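Under the same PLI representation, the error of Algorithm 2 reduces to summing the pair counts of the clusters; the function below is our illustrative sketch, not Desbordante code:

```cpp
#include <cstddef>
#include <vector>

// AUCC error for X: every ordered tuple pair inside a cluster of the
// stripped partition of X violates uniqueness.
double AuccError(const std::vector<std::vector<int>>& pli_x,
                 std::size_t num_tuples) {
    double violating = 0.0;
    for (const auto& cluster : pli_x) {
        const double size = static_cast<double>(cluster.size());
        violating += size * size - size;
    }
    const double n = static_cast<double>(num_tuples);
    return violating / (n * n - n);
}
```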
3. Denial Constraints. A denial constraint (DC) is a type of integrity constraint used in databases to ensure data quality. A DC describes conditions that must not occur within the database. For example, it might state that two rows in a table cannot have certain value combinations. If an insertion or an update of a row violates a DC, the operation is generally aborted.
Approximate denial constraints in databases are a form of constraint that permits a degree of flexibility or exceptions. Unlike exact denial constraints that rigorously prohibit certain data value combinations, approximate denial constraints allow for a limited number of violations.
In this case, the \( g_1 \) metric is utilized for calculating the error measure [23].
DCs and their approximate variants are essential for upholding data consistency and reliability within a database, as they avert the introduction of invalid or conflicting information.
B. \( g_3 \) metric
Let \( G_3 \) represent the number of tuples for the dependency \( X \rightarrow Y \) within the relation \( r \) that must be removed to establish an exact dependency. Formally:
\[
G_3(X \rightarrow Y, r) = |r| - \max\{|s| : s \subseteq r, s \models X \rightarrow Y\}
\]
\[
g_3(X \rightarrow Y, r) = G_3(X \rightarrow Y, r)/|r|
\]
The \( g_3 \) metric is acknowledged as an industry standard and is applied in the context of various approximate dependencies: Approximate Functional Dependencies, Approximate Inclusion Dependencies, Probabilistic Functional Dependencies.
Its calculation is as follows. For the example presented in Table I:
\[
g_3(X \rightarrow Y, r) = \frac{5 - 4}{5},
\]
this is because it is sufficient to remove one tuple for the “exact” dependency to be satisfied. This example illustrates the
practical utility of the $g_3$ in assessing the degree of violation of a dependency within a dataset. Now, let us consider various relaxed dependencies that are based on this metric.
1. **Approximate functional dependencies.** Despite the fact that modern AFD discovery papers utilize the $g_1$ metric, the initial paper proposing the AFD concept employed $g_3$. This paper also proposed the TANE algorithm [19], designed for mining exact functional dependencies, which can also be modified for mining approximate functional dependencies. The metric was defined as follows:
$$e(X \rightarrow A) = \min\left\{\frac{|s|}{|r|} : s \subset r \text{ and } X \rightarrow A \text{ holds in } r \setminus s\right\}$$
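In code, this measure amounts to keeping, inside every $X$-cluster, only the tuples of the most frequent $A$ value; the following sketch of ours (for illustration only) recomputes the $g_3$ value for the Table I example:

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Table I again: removing one tuple, (a, 3), makes X -> Y exact.
    std::vector<std::pair<std::string, int>> r = {
        {"a", 1}, {"a", 3}, {"b", 2}, {"c", 3}, {"d", 4}};
    std::map<std::string, std::map<int, int>> clusters;
    for (const auto& [x, y] : r) ++clusters[x][y];

    std::size_t kept = 0;  // largest sub-relation satisfying X -> Y
    for (const auto& [x, y_counts] : clusters) {
        int best = 0;
        for (const auto& [y, count] : y_counts) best = std::max(best, count);
        kept += best;
    }
    double g3 = (r.size() - kept) / static_cast<double>(r.size());
    std::cout << "g3 = " << g3 << '\n';  // (5 - 4) / 5 = 0.2
}
```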
Another algorithm [24] utilizing the $g_3$ metric is called DiMc. This highly-optimized algorithm employs a level-wise approach in candidate generation, starting with singleton sets at level zero. Additionally, the authors claim that the algorithm can be adapted for use with other metrics and even different types of dependencies.
Functional and approximate functional dependencies are instrumental in database normalization, data cleaning, and also aid analysts in uncovering hidden trends within data. Their implementation and optimization in algorithms like TANE and DiMc highlight their significance in managing and analyzing large data sets efficiently.
2. **Approximate inclusion dependencies.** An inclusion dependency (IND) [16] over a schema $R$ is a statement of the form $R_i[X] \subseteq R_j[Y]$, $R_i, R_j \in R$, $X \subseteq R_i$, $Y \subseteq R_j$. The size (or arity) of such a dependency is $|X| = |Y|$. Inclusion dependencies of size one are commonly referred to as unary inclusion dependencies.
An inclusion dependency is satisfied if all values from the left side are present in the right side. To assess the degree of approximation, a variant of the $g_3$ metric, denoted as $g_3^*$, is used. This version is adapted for inclusion dependencies and conveys essentially the same meaning.
Despite the lack of separate algorithms for detecting approximate inclusion dependencies, several algorithms for finding “exact” dependencies have been adapted for this task, such as MIND [25], Spider [26], or S-indd [27].
MIND employs a level-wise approach, where candidates of size $i+1$ are generated from already discovered dependencies of size $i$. In the case of approximate dependencies, during the candidate validation stage, approximate dependencies that meet a user-defined threshold for $g_3^*$ are also considered.
The primary application area for both “exact” and approximate inclusion dependencies is in the identification of foreign keys in databases [28]. This is crucial for database design, integrity, and normalization processes, facilitating effective data management and interrelation of different data sets within a database system. An example of AIND is presented in Table III.
3. **Graph Entity Dependencies.** A Graph Entity Dependency (GED) is a constraint within a property graph $G$, expressed as a pair $\phi = (Q[u], X \rightarrow Y)$. It states that for any instance of the pattern in the graph $Q[u]$ within $G$, the dependency $X \rightarrow Y$ must be upheld. This denotes that if specific conditions defined by $X$ are met within a pattern instance, then other conditions outlined by $Y$ must also be satisfied. The metric $g_3$ is employed in its original form as a measure of approximation for GED [29].
Graph Entity Dependencies are employed for several key objectives within the realm of graph databases and data management. They ensure data integrity and consistency and aid in the optimization of complex queries.
4. **Approximate Interval-based Temporal Dependencies.** Approximate Interval-based Temporal Functional Dependencies (AITFDs) [30], [31] are a type of constraint in temporal databases. They extend the concept of functional dependencies to consider the temporal aspect of data, specifically focusing on time intervals. They use $g_3$ as a metric of approximation as follows.
Let $X$ and $Y$ be sets of atemporal attributes of a temporal relation schema $R = R(U, B, E)$, fix an Allen's interval relation, and let $\epsilon$ be a real number with $0 \leq \epsilon \leq 1$. An instance $r$ of $R$ satisfies an ITFD $X \rightarrow Y$ with approximation $\epsilon$ if there exists a subset $r' \subseteq r$ for which $r \setminus r' \models X \rightarrow Y$ and $|r'| \leq \epsilon \cdot |r|$.
AITFDs are used for maintaining data integrity in temporal databases by ensuring that relationships among data attributes adhere to specified patterns over time. They are particularly useful for analyzing historical data, identifying trends, and predicting future values by understanding the temporal dynamics of data relationships.
**Wrap-up.** Concluding this section, we can state that, to the best of our knowledge, there were no studies where comparison between pFDs and AFDs was performed. The reasons for this are the following:
1. Both notions were developed long before the era of data profiling began.
2. Each notion was developed by a different research group and for a particular task.
3. The notions were assessed by their applicability to this particular task only, or no comparisons were performed at all.
Currently, data profiling is gaining traction, and it is imperative to catalogue all available tools. Thus, it is essential to compare pFDs and AFDs with each other.
**IV. ALGORITHMS AND IMPLEMENTATION**
This paper considers an implementation of the pFDTane algorithm, designed to discover minimal non-trivial probabilistic functional dependencies.
**TABLE III. AIND EXAMPLE: User Email $\rightarrow$ Registered Email ($\epsilon = 0.34$)**
The new algorithm is based on TANE [19], a graph-traversing algorithm in which the graph — called the lattice — comprises vertices representing all possible sets of attributes and edges connecting nodes of the form X and XA, where X is a set of attributes and A is another attribute. This way every edge represents a functional dependency X → A. The algorithm consecutively checks for the existence of functional dependencies between neighboring levels of the lattice, excluding vertices whenever possible.
Integration. In Desbordante, FD discovery algorithms are implemented by inheriting FDAlgorithm or its subclasses and overriding the ExecuteInternal method. Tane and PFDTane, shown on the diagram in Figure 1, inherit PliBasedAlgorithm, in which the relation loading method is additionally overridden. PositionListIndex (PLI) is a useful data structure comprised of stripped partitions [19]. This means that the structure contains a set of equivalence classes, built with respect to the equality of attribute values. Stripped means that classes containing a single tuple are dropped to reduce memory consumption. In Desbordante, this set is represented by a double-ended queue, namely std::deque. More specifically, inheritance and related classes are shown in Fig. 1.
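For illustration, a stripped partition for a single column can be built as in the sketch below (our simplified code; Desbordante's actual PositionListIndex is more involved):

```cpp
#include <deque>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using Cluster = std::vector<int>;

// Group tuple indices by their value in one column and drop singleton
// clusters -- the "stripped" part that saves memory.
std::deque<Cluster> BuildStrippedPartition(
        const std::vector<std::string>& column) {
    std::unordered_map<std::string, Cluster> groups;
    for (int i = 0; i < static_cast<int>(column.size()); ++i) {
        groups[column[i]].push_back(i);
    }
    std::deque<Cluster> pli;  // matches the std::deque mentioned above
    for (auto& [value, cluster] : groups) {
        if (cluster.size() > 1) pli.push_back(std::move(cluster));
    }
    return pli;
}
```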
Algorithm 3 Calculation of PerValue [17] metric
Require: Relation R, attributes X and A
Ensure: Metric PerValue for X → A
sort $R$ by $(X, A)$
$c \leftarrow t_1[X]$; $|\pi(X)| \leftarrow 1$; $\text{count}(c) \leftarrow 0$
$c' \leftarrow t_1[X, A]$; $\text{count}(c') \leftarrow 0$; $\text{maxCount}(c) \leftarrow 0$
$\text{sum} \leftarrow 0$
for each $t \in R$ do
    if $t[X] = c$ then
        $\text{count}(c) \leftarrow \text{count}(c) + 1$
        if $t[X, A] = c'$ then
            $\text{count}(c') \leftarrow \text{count}(c') + 1$
        else
            $\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
            $c' \leftarrow t[X, A]$; $\text{count}(c') \leftarrow 1$
        end if
    else
        $\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
        $\text{sum} \leftarrow \text{sum} + \text{maxCount}(c)/\text{count}(c)$
        $c \leftarrow t[X]$; $|\pi(X)| \leftarrow |\pi(X)| + 1$; $\text{count}(c) \leftarrow 1$
        $c' \leftarrow t[X, A]$; $\text{count}(c') \leftarrow 1$; $\text{maxCount}(c) \leftarrow 0$
    end if
end for
$\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
$\text{sum} \leftarrow \text{sum} + \text{maxCount}(c)/\text{count}(c)$
return $\text{sum}/|\pi(X)|$
The class PFDTane uses the LatticeLevel and LatticeVertex data structures, which contain the level and vertex information respectively. PFDTane generates lattice levels and handles their lifetime, so an aggregation dependency with LatticeLevel is shown. Meanwhile, the ExecuteInternal method uses LatticeVertex and PLI; LatticeLevel consists of instances of LatticeVertex, and each PLI instance corresponds to a LatticeVertex, which is shown as composition on the diagram.
Algorithm 4 Calculation of PerTuple metric
Require: Relation R, attributes X and A
Ensure: Metric PerTuple for X → A
sort $R$ by $(X, A)$
$c \leftarrow t_1[X]$; $\text{count}(c) \leftarrow 0$
$c' \leftarrow t_1[X, A]$; $\text{count}(c') \leftarrow 0$; $\text{maxCount}(c) \leftarrow 0$
$\text{sum} \leftarrow 0$
for each $t \in R$ do
    if $t[X] = c$ then
        $\text{count}(c) \leftarrow \text{count}(c) + 1$
        if $t[X, A] = c'$ then
            $\text{count}(c') \leftarrow \text{count}(c') + 1$
        else
            $\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
            $c' \leftarrow t[X, A]$; $\text{count}(c') \leftarrow 1$
        end if
    else
        $\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
        $\text{sum} \leftarrow \text{sum} + \text{maxCount}(c)$
        $c \leftarrow t[X]$; $\text{count}(c) \leftarrow 1$
        $c' \leftarrow t[X, A]$; $\text{count}(c') \leftarrow 1$; $\text{maxCount}(c) \leftarrow 0$
    end if
end for
$\text{maxCount}(c) \leftarrow \max(\text{maxCount}(c), \text{count}(c'))$
$\text{sum} \leftarrow \text{sum} + \text{maxCount}(c)$
return $\text{sum}/|R|$
Candidate Validation. The error measurement functions used for candidate validation are essentially the only part which had to be changed in order to adapt the existing TANE implementation for pFD discovery. The functions implement the procedures presented in Algorithms 3 and 4. The functions implemented for non-zero FDs take the PLI of the LHS attributes of a dependency and the PLI of the union of the LHS attributes and the RHS attribute as arguments. The sorting performed in the first lines of Algorithms 3 and 4 is done on the latter argument, i.e. the union PLI. Thus, it is then possible to iterate over PLI clusters, calculating the probability in linear time. Because of using PLIs, the algorithm does not iterate over singleton clusters, which has a positive impact on the algorithm's run time.
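To illustrate the validation step, below is our sketch of computing the pFD error from the stripped partition of the LHS. The singleton clusters dropped by the PLI are accounted for analytically: each of them is one distinct $X$ value (and one tuple) satisfying the dependency with probability 1. All names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Error 1 - P for the candidate X -> A. pli_x: stripped partition of X;
// v_a[i]: value identifier of tuple i on A; per_tuple selects the metric.
double PfdError(const std::vector<std::vector<int>>& pli_x,
                const std::vector<int>& v_a,
                std::size_t num_tuples, bool per_tuple) {
    double sum = 0.0;
    std::size_t covered = 0;  // tuples inside non-singleton clusters
    for (const auto& cluster : pli_x) {
        std::unordered_map<int, int> counter;
        int max_count = 1;
        for (int i : cluster) {
            max_count = std::max(max_count, ++counter[v_a[i]]);
        }
        covered += cluster.size();
        sum += per_tuple
                   ? static_cast<double>(max_count)
                   : static_cast<double>(max_count) / cluster.size();
    }
    // Each singleton cluster is one distinct X value holding with p = 1.
    const std::size_t singletons = num_tuples - covered;
    const double p =
        per_tuple ? (sum + singletons) / num_tuples
                  : (sum + singletons) / (pli_x.size() + singletons);
    return 1.0 - p;
}
```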
V. EVALUATION AND DISCUSSION
A. Methodology and Experimental Setup
Methodology. In order to answer the research questions posed in the introduction, we have decided to perform qualitative and quantitative studies. For the former, we are going to analyze pFDs using examples and conduct an extensive literature review. For the latter, we are going to run a series of experiments, featuring AFD, pFD PerTuple, and pFD PerValue discovery algorithms. All these algorithms were implemented in Desbordante, and, more specifically, we used our TANE implementation. It is necessary to mention that, similarly to Metanome, the TANE implementation in Desbordante can traverse a larger search space than necessary. However, due to implementation specifics, this incurs almost negligible RAM overhead. Turning to performance, we want to stress the fact that this does not negatively affect our study either,
since all experiments either compare methods relatively, or they compare algorithm output.
For the pFD PerValue and pFD PerTuple algorithms, each dataset has been run with error thresholds ranging from 0 to 1 (inclusive) with an increment of 0.025. For TANE, datasets have been run with the error thresholds listed in the captions of Tables IX–XII. This subset was selected due to the fact that error values close to zero are of more value to the user and are expected to be used much more frequently.
A total of 10 iterations per error threshold value had been run, after which the average query execution time and maximum memory usage were calculated with a 95% confidence interval. Due to large confidence intervals for the run time on measures_v2.csv, an additional set of 20 iterations has been performed for that specific dataset in order to get more accurate data.
**Datasets.** To perform experimental evaluation, we used datasets presented in Table VIII. Links to the datasets are available in the GitHub repository [32]. To perform comprehensive evaluation, we tried to select a collection of datasets with different properties. In this table we list the number of rows and attributes, file size and file source, as well as the number of minimal non-trivial FDs, AFDs, pFDs (both PerTuple and PerValue). The AFDs and pFDs were calculated with error threshold set to 0.01.
We have divided these datasets into two groups, which we present separately in two distinct figures. The reason for this is the dataset size difference, which will make them poorly readable if we put them in the same figure.
**Experimental setup.** Experiments were performed using the following hardware and software configuration. Hardware: AMD® Ryzen 5 7600X CPU @ 5.453GHz (6 cores), 32GB RAM. Software: Ubuntu 22.04 LTS, Kernel 6.5.0-15-generic (64-bit).
B. RQ1: Are pFDs of interest to the end-user? How much of a difference is there between pFDs and AFDs, and what kind of FD violations are pFDs more tolerant to? Does this definition allow for discovery of dependencies that could not be discovered by the AFD definition? And vice versa: how many AFDs are lost by it?
Let us start with the qualitative comparison of AFDs and pFDs.
**Observation 1.** First, let us consider a simplified dataset $R$ presented in Table IV and the $X \rightarrow Y$ dependency.
pFD with the PerValue metric is not affected by the frequency of a single value of $X$. Indeed, consider $|V_0| \rightarrow \infty$. In this case $P_{\text{PerValue}}(X \rightarrow Y, R)$ tends to $\frac{1}{2}$, and so does its respective error $1 - P_{\text{PerValue}}(X \rightarrow Y, R)$. At the same time, $g_3(X \rightarrow Y, R)$ and $e(X \rightarrow Y, R)$ tend to 1.
Thus, this metric can account for “faulty” LHS values, if there are not too many of them. It allows having a lot of violating records if they correspond to relatively few distinct LHS values. For example, such a “local” error may arise if a single sensor of an overall healthy set started to report faulty data.
**Observation 2.** Now, consider data presented in Table V and the same dependency.
The dependency is less likely to hold with the PerValue metric: \( P_{\text{PerValue}}(X \rightarrow Y, R) = 0.625 \), and \( 1 - P_{\text{PerValue}}(X \rightarrow Y, R) = 0.375 \). At the same time \( g_3(X \rightarrow Y, R) = \frac{3}{11} \approx 0.27 \), and \( e(X \rightarrow Y, R) = \frac{3}{12} = 0.25 \).
Thus, pFD's PerValue will report a larger error when there are many individual LHS values for which the dependency does not hold. It will ignore the positive contribution of the “0” records, regardless of their number.
For a data scientist who explores data, such behavior may be undesirable and lead to valuable facts being missed. Suppose that there are one million of those “0” records in this table, and the other six entries stay the same. This table will still result in a PerValue error of 0.375, which is rather large, and therefore the pattern described by this pFD may be ignored. However, a more probable interpretation is the following: the one million records are correct (since there is one million of them) and these six values are anomalies which should be deleted. At the same time, such an interpretation can be obtained using AFDs.
FD guessing problem. pFDs were extensively used for problems where the goal was to “guess” FDs [17], [33], [34] from low-quality data. In these studies true FDs (gold standard) were known beforehand and had to be discovered using pFDs. Authors who originally proposed the pFD concept have performed experiments [17] which showed that PerTuple tends to yield better results than PerValue in finding correct dependencies in cases where data is of a lower quality (due to noise).
However, their subsequent experiments [18] with an improved version of TANE that uses the transitivity rule showed PerValue outperforming PerTuple in the majority of cases. The authors measured recall, precision, and F-measure using a gold-standard collection.
Recently, the PerValue metric demonstrated [33], [34] better results for the problem of FD discovery in datasets containing missing values.
AFDs vs pFDs, quantitatively. The above-mentioned studies have not considered AFDs and their difference from pFDs. In our qualitative study we have demonstrated cases where pFDs can be of use and where they are inferior to AFDs.
Now, let us turn to the quantitative part, which aims to answer the rest of RQ1: “Does this definition allow for discovery of dependencies that could not be discovered by the AFD definition? And vice versa: how many AFDs are lost by it?”.
Table VI contains the results of a search for three different dependency types in the monkeypox.csv dataset: AFDs, pFDs with PerValue, and pFDs with PerTuple. The table shows that AFD discovery fails to find some pFDs when run with certain error thresholds, despite the dataset containing a comparable number of minimal non-trivial AFDs.
Though the minimal sets indeed differ, this does not immediately imply that the complete sets of pFDs and AFDs differ. For example, for a fixed threshold, you may have found the following minimal dependencies: \( \text{pfd}_1 : XZ \rightarrow A \), \( \text{pfd}_2 : XY \rightarrow A \) and \( \text{afd}_1 : X \rightarrow A \). But \( \text{afd}_1 \) implies all other AFDs that have \( X \) in the LHS, so \( \text{pfd}_1 \) and \( \text{pfd}_2 \) are in the set of all AFDs. In order to highlight the essential difference of pFDs, we have also included in the table the number of minimal pFDs that are neither in the set of minimal AFDs nor inferable from it.
Concluding this RQ, we can say that pFDs have their own strengths and that they differ from AFDs. Specifically, for a fixed error threshold, pFDs are not a mere subset of AFDs, nor are AFDs a subset of pFDs in general. Finally, existing studies have demonstrated that pFDs have found applications in the FD guessing problem. However, those studies did not compare against AFDs, and such a comparison is outside the scope of this paper.
C. RQ2: How computationally expensive is the candidate validation procedure of pFD discovery algorithm compared to the AFD?
To compare the run time and memory consumption of the validation functions of pFDTane and AFDTane, the corresponding algorithms were run with the error threshold set to 0. This setting guarantees that all algorithms traverse the same part of the lattice, putting them on a level playing field. The results presented in Table VII show that both PerValue and PerTuple validation work slower than validation with \( g_1 \). On the other hand, PerValue and PerTuple do not demonstrate a significant difference in either run time or memory consumption.
We can also note that, on almost all datasets, pFDTane consumed less memory than AFDTane. The exception — the SEA.csv dataset — used approximately the same amount of memory.
D. RQ3: How does maximum error threshold affect run time and memory consumption of the pFD discovery algorithm?
The pFDTane algorithm has been run with error thresholds ranging from 0 to 1 on the eight datasets described in Table VIII. Figure 2a and Figure 2b show two different patterns of pFDTane behaviour. When it is supplied with the jena_climate_2009_2016.csv dataset, run time does not demonstrate a noteworthy difference past the 0.25 error threshold. Contrary to that, when run on the EpicVitals.csv dataset, the algorithm gets progressively faster as the error threshold increases. In our experiments, six out of eight datasets behaved similarly to EpicVitals.csv.
The number of steps in Tane is determined by the number of vertices in the lattice. However, the algorithm discards some vertices during its execution due to the nature of the task of searching for minimal functional dependencies. Thus, the observed trend could be explained by the fact that no new dependencies appear between the 0.2 and 0.8 error thresholds, which would subsequently lead to an inability to discard additional vertices in the lattice.
Finally, as can be observed from Figure 3, PerValue yields better run time than PerTuple on every dataset but SEA.csv.
**TABLE VI. MINIMAL NON-TRIVIAL pFDs AND AFDs FOUND IN MONKEYPOX.CSV**
<table>
<thead>
<tr>
<th>Error</th>
<th>AFD</th>
<th>pFD PerValue</th>
<th>pFD PerTuple</th>
<th>[pFDs \ AFDs]</th>
<th>Non-inferable pFDs</th>
<th>[AFDs \ pFDs]</th>
<th>[pFDs \ AFDs]</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.01</td>
<td>126</td>
<td>142</td>
<td>134</td>
<td>0.893</td>
<td>216</td>
<td>216</td>
<td>260</td>
</tr>
<tr>
<td>0.05</td>
<td>69</td>
<td>71</td>
<td>62</td>
<td>0.269</td>
<td>807</td>
<td>810</td>
<td>1490</td>
</tr>
<tr>
<td>0.1</td>
<td>64</td>
<td>55</td>
<td>72</td>
<td>0.280</td>
<td>46</td>
<td>46</td>
<td>9</td>
</tr>
<tr>
<td>0.2</td>
<td>60</td>
<td>68</td>
<td>69</td>
<td>0.290</td>
<td>54</td>
<td>59</td>
<td>15</td>
</tr>
<tr>
<td>0.3</td>
<td>63</td>
<td>70</td>
<td>61</td>
<td>0.300</td>
<td>54</td>
<td>54</td>
<td>9</td>
</tr>
</tbody>
</table>
**TABLE VII. EXACT FDs DISCOVERY TIME AND MEMORY**
<table>
<thead>
<tr>
<th>Dataset</th>
<th>pFDTane PerValue time</th>
<th>pFDTane PerTuple time</th>
<th>AFDTane time</th>
<th>pFDTane memory (MB)</th>
<th>AFDTane memory (MB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BKB_WaterQualityData_2020084</td>
<td>2.013</td>
<td>2.018</td>
<td>0.935</td>
<td>216</td>
<td>216</td>
</tr>
<tr>
<td>jena_climate_2009_2016</td>
<td>8.649</td>
<td>8.583</td>
<td>4.269</td>
<td>807</td>
<td>810</td>
</tr>
<tr>
<td>measures_v2</td>
<td>12.711</td>
<td>12.720</td>
<td>6.205</td>
<td>530</td>
<td>530</td>
</tr>
<tr>
<td>nuclear_explosions</td>
<td>16.245</td>
<td>16.334</td>
<td>15.278</td>
<td>1758</td>
<td>1758</td>
</tr>
<tr>
<td>parking_citations</td>
<td>2.198</td>
<td>2.203</td>
<td>0.795</td>
<td>189</td>
<td>189</td>
</tr>
<tr>
<td>SEA</td>
<td>25.908</td>
<td>25.953</td>
<td>7.360</td>
<td>1381</td>
<td>1381</td>
</tr>
<tr>
<td>games</td>
<td>1.903</td>
<td>1.956</td>
<td>1.195</td>
<td>279</td>
<td>279</td>
</tr>
<tr>
<td></td>
<td>3.207</td>
<td>3.222</td>
<td>1.492</td>
<td>399</td>
<td>399</td>
</tr>
</tbody>
</table>
**TABLE VIII. DATASETS USED FOR EXPERIMENTS**
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Rows</th>
<th>Attributes</th>
<th>Size</th>
<th>Source</th>
<th>pFD PT count</th>
<th>pFD PV count</th>
<th>AFD count</th>
</tr>
</thead>
<tbody>
<tr>
<td>EpicVitals.csv</td>
<td>1246303</td>
<td>7</td>
<td>33MB</td>
<td>U.S. PWS</td>
<td>3389</td>
<td>3712</td>
<td>901</td>
</tr>
<tr>
<td>BKB_WaterQualityData_2020084</td>
<td>2370</td>
<td>17</td>
<td>180KB</td>
<td>EPF</td>
<td>10</td>
<td>13</td>
<td>21</td>
</tr>
<tr>
<td>games.csv</td>
<td>20058</td>
<td>16</td>
<td>7.67MB</td>
<td>kaggle</td>
<td>2264</td>
<td>1810</td>
<td>266</td>
</tr>
<tr>
<td>jena_climate_2009_2016.csv</td>
<td>420050</td>
<td>15</td>
<td>43.16MB</td>
<td>kaggle</td>
<td>3003</td>
<td>3148</td>
<td>210</td>
</tr>
<tr>
<td>measures_v2.csv</td>
<td>135816</td>
<td>13</td>
<td>3000.06MB</td>
<td>kaggle</td>
<td>642</td>
<td>573</td>
<td>144</td>
</tr>
<tr>
<td>nuclear_explosions.csv</td>
<td>2046</td>
<td>16</td>
<td>250KB</td>
<td>tidydataset repository</td>
<td>2795</td>
<td>3619</td>
<td>1459</td>
</tr>
<tr>
<td>parking_citations.csv</td>
<td>95433</td>
<td>13</td>
<td>10MB</td>
<td>norfolk opendata</td>
<td>224</td>
<td>269</td>
<td>565</td>
</tr>
<tr>
<td>SEA.csv</td>
<td>1000000</td>
<td>4</td>
<td>33MB</td>
<td>openml.com</td>
<td>3</td>
<td>3</td>
<td>9</td>
</tr>
<tr>
<td>monkeypox.csv</td>
<td>5875</td>
<td>14</td>
<td>516KB</td>
<td>who.int</td>
<td>134</td>
<td>142</td>
<td>126</td>
</tr>
</tbody>
</table>
E. **RQ4: What are the run time and memory expenses of pFD discovery, compared to AFD?**
To compare the pFDTane and AFDTane algorithms on the probabilistic and approximate dependency discovery tasks, the algorithms’ implementations have been tested with different error thresholds. The results are presented in Table IX and Table X for run time, and in Table XI and Table XII for memory consumption. Each cell contains the ratio of the pFDTane measurement to the respective AFDTane measurement. For ease of understanding, we have also plotted the maximum memory consumption in Figure 4.
Almost all of the datasets show AFDTane to be the faster discovery algorithm when compared to pFDTane, with the exception of measures_v2.csv. The results suggest that the performance difference decreases once the error threshold exceeds 0.1.
Memory consumption has been observed to be lower for pFDTane on all datasets but SEA.csv. In contrast to run time, the memory consumption ratio of pFDTane to AFDTane stays roughly the same for every error threshold value.
Fig. 3. Running time by error threshold

TABLE IX. RATIO OF RUNNING TIME OF AFDTANE AND PFDTANE PER VALUE ALGORITHMS
<table>
<thead>
<tr>
<th rowspan="2">Dataset</th>
<th colspan="10">Error threshold</th>
</tr>
<tr>
<th>0.025</th><th>0.05</th><th>0.075</th><th>0.1</th><th>0.15</th><th>0.2</th><th>0.25</th><th>0.3</th><th>0.4</th><th>0.5</th>
</tr>
</thead>
<tbody>
<tr><td>BKB_WaterQualityData_2020084</td><td>2.529</td><td>2.199</td><td>2.010</td><td>1.763</td><td>1.432</td><td>1.267</td><td>1.128</td><td>1.076</td><td>1.050</td><td>1.027</td></tr>
<tr><td>EpicVitals</td><td>2.671</td><td>2.637</td><td>2.623</td><td>2.589</td><td>2.145</td><td>1.952</td><td>1.679</td><td>1.636</td><td>1.548</td><td>1.547</td></tr>
<tr><td>measures_v2</td><td>0.962</td><td>0.972</td><td>0.951</td><td>0.921</td><td>0.946</td><td>0.935</td><td>0.992</td><td>0.985</td><td>0.989</td><td>0.915</td></tr>
<tr><td>nuclear_explosions</td><td>2.160</td><td>1.862</td><td>1.668</td><td>1.466</td><td>1.220</td><td>1.101</td><td>1.070</td><td>1.048</td><td>1.024</td><td>1.015</td></tr>
<tr><td>parking_citations</td><td>3.358</td><td>3.261</td><td>3.177</td><td>3.079</td><td>2.882</td><td>2.513</td><td>2.013</td><td>1.550</td><td>1.351</td><td>1.231</td></tr>
<tr><td>SEA</td><td>1.745</td><td>1.815</td><td>1.767</td><td>1.808</td><td>1.752</td><td>1.821</td><td>1.794</td><td>1.773</td><td>1.615</td><td>1.674</td></tr>
<tr><td>games</td><td>1.785</td><td>1.587</td><td>1.517</td><td>1.446</td><td>1.361</td><td>1.299</td><td>1.277</td><td>1.217</td><td>1.212</td><td>1.162</td></tr>
</tbody>
</table>
TABLE X. RATIO OF RUNNING TIME OF AFDTANE AND PFDTANE PER TUPLE ALGORITHMS
<table>
<thead>
<tr><th>Dataset</th><th>Error threshold 0.025</th></tr>
</thead>
<tbody>
<tr><td>BKB_WaterQualityData_2020084</td><td>2.685</td></tr>
<tr><td>jena_climate_2009_2016</td><td>1.465</td></tr>
<tr><td>measures_v2</td><td>0.894</td></tr>
<tr><td>nuclear_explosions</td><td>2.475</td></tr>
<tr><td>SEA</td><td>1.742</td></tr>
<tr><td>games</td><td>2.064</td></tr>
</tbody>
</table>
TABLE XI. RATIO OF CONSUMED MEMORY OF AFDTANE AND PFDTANE PER VALUE ALGORITHMS
<table>
<thead>
<tr><th>Dataset</th><th>Error threshold 0.025</th></tr>
</thead>
<tbody>
<tr><td>BKB_WaterQualityData_2020084</td><td>0.931</td></tr>
<tr><td>EpicVitals</td><td>0.542</td></tr>
<tr><td>jena_climate_2009_2016</td><td>0.356</td></tr>
<tr><td>measures_v2</td><td>0.985</td></tr>
<tr><td>nuclear_explosions</td><td>0.127</td></tr>
<tr><td>parking_citations</td><td>0.926</td></tr>
<tr><td>SEA</td><td>1.029</td></tr>
<tr><td>games</td><td>0.170</td></tr>
</tbody>
</table>
TABLE XII. RATIO OF CONSUMED MEMORY OF AFDTANE AND PFDTANE PER TUPLE ALGORITHMS
<table>
<thead>
<tr><th>Dataset</th><th>Error threshold 0.025</th></tr>
</thead>
<tbody>
<tr><td>BKB_WaterQualityData_2020084</td><td>0.942</td></tr>
<tr><td>EpicVitals</td><td>0.542</td></tr>
<tr><td>jena_climate_2009_2016</td><td>0.356</td></tr>
<tr><td>measures_v2</td><td>0.985</td></tr>
<tr><td>nuclear_explosions</td><td>0.127</td></tr>
<tr><td>parking_citations</td><td>0.926</td></tr>
<tr><td>SEA</td><td>1.029</td></tr>
<tr><td>games</td><td>0.184</td></tr>
</tbody>
</table>
VI. **Conclusion**

We started with a qualitative analysis of pFDs, showing cases in which they have the edge over AFDs and vice versa. Essentially, we demonstrated that data interpretation and data context leave room for both of them, since neither can substitute the other. Ultimately, it is up to a data scientist to decide what to consider a violation of an exact FD, and the two concepts allow their user to target different cases. Experiments have also shown that pFD is capable of discovering some dependencies that AFD fails to find. The results featured in Table VI demonstrate that fact by counting discovered dependencies with the same error threshold for both algorithms.
As for performance, the discovery of both pFD types is almost always considerably slower than that of AFDs. However, memory consumption shows the opposite trend, with pFD using less memory than AFD. The difference between the run time of pFD and AFD discovery decreases as the error threshold increases, though useful information is primarily mined with a low error threshold. Experiments have also shown similar run time and memory consumption for pFD PerTuple and pFD PerValue.
Overall, we have introduced pFD discovery functionality into Desbordante for both the PerValue and PerTuple metrics, as we have shown that their utility depends on data interpretation and context. At the same time, while building a science-intensive data profiler, it is imperative to expand the catalogue of available tools. Therefore, we hope that this primitive will become another useful tool that will allow our users to uncover knowledge hidden in data. The source code of the implementation is available in the GitHub repository (PR 300) [11].
Fig. 4. Maximum memory consumption by error threshold

REFERENCES
[32] Links to the datasets. [Online]. Available: https://gist.github.com/iliya-b/6f67da0a839e7ec0a5ab7849733cea31
MTTM: Metamorphic Testing for Textual Content Moderation Software
Wenxuan Wang*, Jen-tse Huang†, Weibin Wu†, Jianping Zhang*, Yizhan Huang*, Shuqing Li‡, Pinjia He†, and Michael R. Lyu∗
* Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
† School of Software Engineering, Sun Yat-sen University, Zhuhai, China
‡ School of Data Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
{wxwang, jthuang, jzhang, yzhuang22, sqli21, lyu}@cse.cuhk.edu.hk
wwwb36@mail.sysu.edu.cn, hepinjia@cuhk.edu.cn
Abstract—The exponential growth of social media platforms such as Twitter and Facebook has revolutionized textual communication and textual content publication in human society. However, they have been increasingly exploited to propagate toxic content, such as hate speech, malicious advertisements, and pornography, which can lead to highly negative impacts (e.g., harmful effects on teen mental health). Researchers and practitioners have been enthusiastically developing and extensively deploying textual content moderation software to address this problem. However, we find that malicious users can evade moderation by changing only a few words in the toxic content. Moreover, modern content moderation software’s performance against malicious inputs remains underexplored. To this end, we propose MTTM, a Metamorphic Testing framework for Textual content Moderation software. Specifically, we conduct a pilot study on 2,000 text messages collected from real users and summarize eleven metamorphic relations across three perturbation levels: character, word, and sentence. MTTM employs these metamorphic relations on toxic textual contents to generate test cases, which are still toxic yet likely to evade moderation. In our evaluation, we employ MTTM to test three commercial textual content moderation software products and two state-of-the-art moderation algorithms against three kinds of toxic content. The results show that MTTM achieves up to 83.9%, 51%, and 82.5% error finding rates (EFR) when testing commercial moderation software provided by Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when testing the state-of-the-art algorithms from academia. In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0%–5.9% EFR) while maintaining the accuracy on the original test set. A demo can be found in this link.
Index Terms—Software testing, metamorphic relations, NLP software, textual content moderation
I. INTRODUCTION
In the recent decade, social media platforms and community forums have been developing rapidly, which tremendously facilitates modern textual communication and content publication worldwide. For example, the number of tweets posted on Twitter has grown from 50 million per day in 2010 to 500 million per day in 2020 [1]. However, they inevitably exacerbate the propagation of toxic content due to the anonymity of the web. Textual toxic contents typically refer to three major kinds of texts: (1) abusive language and hate speech, which are abusive texts targeting specific individuals or groups, such as politicians, celebrities, religions, nations, and the LGBTIQA+ community [2]; (2) malicious advertisements, which are online advertisements with illegal purposes, such as phishing and scam links, malware downloads, and illegal information dissemination [3]; and (3) pornography, which is often sexually explicit, suggestive, and arousing [4].
These toxic contents can lead to highly negative impacts. Specifically, Munro [5] studied the ill effects of online hate speech on children and found that children may develop depression, anxiety, and other mental health problems. Malicious advertisements remain a notorious global burden, accounting for up to 85% of daily message traffic [6]. Pornography can cause significant undesirable effects on the physical and psychological health of children [7]. Moreover, these toxic contents can even increase the number of criminal cases to a certain extent [8]. All these studies reflect that toxic content can largely threaten social harmony; thus, content moderation software, which detects and blocks toxic content, has attracted massive interest from academia and industry.
Typical content moderation software first detects toxic content and then blocks it or warns the users before showing it. As the core of content moderation, toxic content detection has been widely formulated as a classification task, and it has been tackled by various deep learning models, such as convolutional neural networks, Long Short-Term Memory (LSTM) models, and Transformer models [9]–[11]. Recently, the development of pre-trained language models (e.g., BERT [12] and RoBERTa [13]) has significantly improved the held-out accuracy of toxic content detection. Because of the recent progress in this field, industrial companies have also extensively deployed commercial-level content moderation software in their products, such as Google [14], Facebook [15], Twitter [16], and Baidu [17].
However, the mainstream content moderation software is not robust enough [17], [18]. For example, Facebook’s content moderation software cannot understand many languages, leaving non-English-speaking users more susceptible to harmful posts [18]. In addition, toxic content can bypass mainstream content moderation software by applying simple textual transformations, for example, changing “fuck” to “f_fuck”. To address this problem, the essential first step is to develop a testing framework for content moderation software, as with traditional software.
There remains a dearth of testing frameworks for content moderation software—partly because the problem is quite challenging. First, most of the existing testing [19]–[21] or adversarial attack [22]–[24] techniques for Natural Language Processing (NLP) software rely on word-level semantic-preserving perturbations (e.g., from “I like it” to “I love it”). Most of the perturbed texts generated by these approaches still contain toxic words, and thus, they are unlikely to evade moderation. In addition, as reported by a recent study [25], 44% of the test cases generated by the State-of-the-Art (SOTA) approaches are false alarms, which are test cases with inconsistent semantics or incorrect grammar, rendering these approaches suboptimal. Moreover, existing character-based perturbation approaches [26]–[29] are designed for general NLP software, so they consider common transformations (e.g., from “foolish” to “folish”), which only cover a very limited set of the possible real user inputs for content moderation software.
In this paper, we propose MTTM, a Metamorphic Testing framework for Textual content Moderation software. Specifically, to develop a comprehensive testing framework for content moderation software, we first need to understand what kind of transformations real users might apply to evade moderation. Thus, we conduct a pilot study (Section III) on 2,000 text messages collected from real users and summarize eleven metamorphic relations across three perturbation levels: character level, word level, and sentence level, making MTTM provide metamorphic relations that reflect real-world user behaviors and are specially designed for content moderation software. MTTM employs these metamorphic relations on toxic contents to generate test cases that are still toxic (i.e., being easily recognizable to humans) yet are likely to evade moderation. All these metamorphic relations are implemented for two languages, English and Chinese, because English is a representative language based on the alphabet, while Chinese is a representative language based on the pictograph.
We apply MTTM to test three commercial textual content moderation software products and two SOTA moderation algorithms against three typical kinds of toxic content (i.e., abusive language, malicious advertisement, and pornography). The results show that MTTM achieves up to 83.9%, 51%, and 82.5% error finding rates (EFR) when testing commercial content moderation software provided by Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when testing the SOTA algorithms from academia. In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0%–5.9% EFR) while maintaining the accuracy on the original test set. Code, data, and results of our pilot study are available\(^2\). The main contributions of this paper are as follows:
- The introduction of the first comprehensive testing framework, MTTM, for textual content moderation software validation.
- A pilot study on 2,000 real-world text messages that leads to eleven metamorphic relations, facilitating the implementation of MTTM towards two languages: English and Chinese.
- An extensive evaluation of MTTM on three commercial content moderation software and two SOTA academic models, demonstrating that MTTM can generate toxic contents that easily bypass moderation and those toxic contents can improve the robustness of the SOTA algorithms.
Content Warning: We apologize that this paper presents examples of aggressive, abusive, or pornographic expressions for clarity. Examples are quoted verbatim. In addition, to conduct this research safely, we performed the following precautionary actions for the participants: (1) in every stage, we prompted a content warning to the researchers and the annotators and told them that they could leave anytime during the study and (2) we provided psychological counseling after our study to relieve their mental stress.
II. BACKGROUND
A. Textual Content Moderation
1) Commercial Content Moderation Software: Many large companies, such as Google, Facebook, Twitter, and Baidu, have deployed commercial content moderation software on their products. According to their official technical documents, the typical backbone of moderation software is a hybrid classification algorithm of neural network models and pre-defined rules, which leverages the advantages of both parties. Neural network-based methods can effectively understand contextual and semantic information, while rule-based methods can easily implement user-defined functionality. For example, Baidu’s commercial content moderation software is powered by a deep neural network and a huge list of pre-defined banned words.
2) Academic Content Moderation Models: There are generally two categories of academic models for textual content moderation: feature engineering-based models and neural network-based models.
Feature Engineering-Based Models. Feature engineering-based models train their toxic content classification models based on manually-constructed features. Specifically, textual feature engineering can be further divided into rule-based methods and statistical methods.
The core of rule-based methods is pre-defined rules or dictionaries of banned words. Spertus et al. [30] employed 47 handcrafted linguistic rules to extract binary feature vectors and used a decision tree to detect toxic contents. Razavi et al. [31] constructed an abusive language dictionary to extract lexicon-level features for abuse detection. Handcrafted rules and lexicons generalize well across data from different domains. However, they can hardly deal with implicit abuse and sarcasm (e.g., “I haven’t had an intelligent conversation with a woman in my whole life” from [32]). In addition, they
\(^2\)https://github.com/Jarviswang94/MTTM
are vulnerable to toxic text containing errors in spelling, punctuation, and grammar [33]. Statistical methods leverage different statistics of the textual data. Yin et al. [34] and Salminen et al. [35] computed the Term Frequency–Inverse Document Frequency (TF-IDF) of words as features and used Support Vector Machines (SVMs) to detect online harassment and hate speech. Statistical methods require less human effort and are more robust to spelling, punctuation, and grammar variations. Nevertheless, these methods often capture superficial patterns instead of understanding the semantics [33].
**Neural Network-Based Models.** Advancements in text representation learning have spurred researchers to explore neural network-based models for textual content moderation. Djuric et al. [36] was the first that utilized neural networks to obtain surface-level representations and trained a logistic regression classifier to detect abusive language. Badjatiya et al. [2] adopted GLoVe word embedding [37] to extract text features and used a word-level LSTM to moderate textual content. With the help of the pre-trained language models (e.g., BERT [12] and RoBERTa [13]), researchers fine-tune these models on a downstream dataset and achieved remarkable performance on textual content moderation tasks.
### B. Metamorphic Testing
Metamorphic testing [38] is a testing technique that has been widely employed to address the oracle problem. The core idea of metamorphic testing is to detect violations of metamorphic relations (MRs) across multiple runs of the software under test. Specifically, MR describes the relationship between input-output pairs of software. Given a test case, metamorphic testing transforms it into a new test case via a pre-defined transformation rule and then checks whether the corresponding outputs of these test cases returned by the software exhibit the expected relationship.
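In code, this MR-checking loop is small. The sketch below assumes hypothetical `moderate` and `perturb` functions standing in for the software under test and a perturbation method, and checks the MR used throughout this paper: a perturbed toxic text must keep its “toxic” label.

```python
def check_mr(moderate, perturb, seeds):
    """Collect MR violations: toxic seeds whose perturbed versions are no
    longer labeled toxic by the software under test."""
    violations = []
    for text in seeds:
        if moderate(text) != "toxic":
            continue                     # only toxic seeds are of interest
        mutant = perturb(text)
        if moderate(mutant) != "toxic":  # MR violated: the mutant evades moderation
            violations.append((text, mutant))
    return violations
```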
Metamorphic testing has been adapted to validate Artificial Intelligence (AI) software over the past few years. These efforts aim to automatically report erroneous results returned by AI software via developing novel MRs. In particular, Chen et al. [39] investigated the use of metamorphic testing in bioinformatics applications. Xie et al. [40] defined eleven MRs to test k-Nearest Neighbors and Naive Bayes algorithms. Dwarkanath et al. [41] presented eight MRs to test SVM-based and ResNet-based image classifiers. Zhang et al. [42] tested autonomous driving systems by applying GANs to produce driving scenes with various weather conditions and checking the consistency of the system outputs.
### III. MTTM
This section first introduces a pilot study on text messages collected from real users (Section III-A). Then we introduce eleven metamorphic relations that are inspired by the pilot study. These metamorphic relations can be grouped into three categories according to the perturbation performed: character-level perturbations (Sec. III-B), word-level perturbations (Sec. III-C), and sentence-level perturbations (Sec. III-D).
#### A. Pilot Study
In this work, we intend to develop metamorphic relations that assume the seed test case (i.e., a piece of text) and the perturbed test case should have identical classification labels (i.e., labeled as “toxic content”) returned by the content moderation software. To generate effective test cases, we think the perturbations in our MRs should be:
- **Semantic-preserving:** the perturbed test cases should have the identical semantic meaning as the seed.
- **Realistic:** should reflect possible inputs from real users.
- **Unambiguous:** should be defined clearly.
In order to design satisfactory perturbations, we first conducted a pilot study on text messages from real users to explore what kind of perturbations the users would apply to the toxic content to bypass the content moderation software. We consider text messages from four platforms with a large number of users:
- **Twitter**
- **Grumbletext**
- **Dirty**
- **Taobao**
3: https://twitter.com/
4: https://github.com/t-davidson/hate-speech-and-offensive-language
5: http://www.grumbletext.co.uk/
6: https://www.kaggle.com/uclml/sms-spam-collection-dataset
7: https://www.taobao.com/
8: https://github.com/hrwhisper/SpamMessage
9: https://github.com/pokemonchow/Dirty
We used the contents that were labeled as toxic and intentionally perturbed by their authors to design our perturbation methods.
We manually inspected all these toxic contents perturbed by the real users and collectively summarized eleven perturbation methods that real users have been using to evade moderation. We categorize these toxic sentences from three perspectives: 1) basic unit of perturbation, such as character level, word level, and sentence level; 2) basic perturbation operation, such as substitution, insertion, deletion, split, and combination; and 3) the logic behind perturbation, such as visual-based, homophone-based, and language-based. Accordingly, we derive eleven MRs based on eleven perturbation methods, where each MR assumes that the classification label returned by the content moderation software on the generated test case (i.e., perturbed text) should be the same as that on the seed (i.e., original text). Table I presents the eleven perturbation methods, their categories, examples in two languages, and the percentage of each in our study. We will introduce the MRs (their corresponding perturbation methods) in the following.
### B. MRs with Character-Level Perturbations
**MR1-1 Visual-Based Substitution**
This MR uses visual-based substitutions, which replace characters with visually similar characters. These visually similar characters are not required to be semantically equivalent or similar to the original characters. Usually, the candidates come from the alphabets of other languages. For example, users can replace “a” with “â”, “ä”, “α”, etc. The candidates can also be punctuation or numbers, such as “(” for “C” and “1” for “l”. For Chinese characters, we can consider their variants from different language systems, such as Kanji in Japanese, Hanja in Korean, and Han characters in Vietnamese, meaning a Chinese character usually has up to three variants. Besides variants, we can easily find many characters that look highly similar: “カ” (one of the Japanese kana) for “力” (Power) and “曰” (Say) for “日” (Sun) are examples of such substitutions.
**MR1-2 Visual-Based Splitting**
This MR employs visual-based splitting, which separates a character into multiple parts. This MR is inspired by the fact that many characters are composed of other characters. Therefore, some characters can be separated into two characters, such as “VV” for “W” and “女子” (Woman) for “好” (Good). Some Chinese characters can even be split into three characters, for example “木身寸” (Wood/Body/Inch) for “榭” (Pavilion). It is worth noting that Chinese characters can sometimes be split vertically, like “亡心” (Die/Heart) for “忘” (Forget).
**MR1-3 Visual-Based Combination**
This MR’s perturbation method is the inverse transformation of MR1-2. Visual-based combination combines adjacent characters into a single character, such as “m” for “rn”. The difference between this MR and MR1-2 is that, in MR1-2, the underlying meaning is expressed by the combination of characters. Instead, in this MR, we understand the meaning by splitting certain characters.
**MR1-4 Noise Injection**
This MR perturbs text via noise injection, which inserts additional characters into the original text. To not affect human comprehension, users tend to let the noise be closely related to the context (e.g., “o” in “Hellooo”) or from a different domain which can make users ignore the noise when reading (e.g., “*” in “H*ell*o”). Specifically, “Hello” has multiple “o”s, and “*” is a mathematical symbol outside the English alphabet. Therefore, humans can easily ignore the noises.
**MR1-5 Character Masking**
This MR uses character masking, which masks a small portion of the characters by replacing them with some special characters. The content moderation software can hardly recognize the word, but humans can easily infer the masked character within the context. For example, we can infer that the masked word is “your” in “what’s y*ur name” with our prior knowledge.
**MR1-6 Character Swap**
“Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.” Inspired by this fact, this MR uses character swap, which randomly swaps characters within a word.
### Table I: Summary of the perturbation categories in the pilot study.
<table>
<thead>
<tr>
<th>Perturbation Level</th>
<th>Perturbation Method</th>
<th>Examples in English</th>
<th>Examples in Chinese</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Character Level</td>
<td>Visual-based Substitution</td>
<td>a → α; C → (; l → 1</td>
<td>日 → 月; 此 → 此</td>
<td>12.3%</td>
</tr>
<tr>
<td></td>
<td>Visual-based Splitting</td>
<td>K → k; W → VV</td>
<td>好的 → 女子女句</td>
<td>5.6%</td>
</tr>
<tr>
<td></td>
<td>Visual-based Combination</td>
<td>Earn → Eam</td>
<td>不用 → 聘</td>
<td>0.8%</td>
</tr>
<tr>
<td></td>
<td>Noise Injection</td>
<td>Hello → H*ell*o</td>
<td>邻电 → 邻*电</td>
<td>13.2%</td>
</tr>
<tr>
<td></td>
<td>Char Masking</td>
<td>Hello → H*llo</td>
<td>新年快乐 → 新年快*</td>
<td>7.4%</td>
</tr>
<tr>
<td></td>
<td>Character Swap</td>
<td>Weather → Waether</td>
<td>繁来说 → 简来说</td>
<td>4.1%</td>
</tr>
<tr>
<td>Word Level</td>
<td>Language Switch</td>
<td>Hello → Hola; a → Add</td>
<td>龙 → 例</td>
<td>14.9%</td>
</tr>
<tr>
<td></td>
<td>Homophone Substitution</td>
<td>Die → Dye; Night → Nite</td>
<td>好吧 → 模八; 这样 → 酱紫</td>
<td>36.4%</td>
</tr>
<tr>
<td></td>
<td>Abbreviation Substitution</td>
<td>As Soon As Possible → ASAP</td>
<td>永远的年 → 有offs</td>
<td>15.7%</td>
</tr>
<tr>
<td></td>
<td>Word Splitting</td>
<td>Hello → Hell o</td>
<td>使用用户 → 使用用户</td>
<td>6.8%</td>
</tr>
<tr>
<td>Sentence Level</td>
<td>Benign Context Camouflage</td>
<td>Golden State Warriors guard won’t play Sunday, 〈add a spam sentence here〉, due to knee soreness.</td>
<td>金钢石增加值暴涨, <在这里添加一条广告>, 是金融市场体系</td>
<td>2.5%</td>
</tr>
</tbody>
</table>
https://www.mrc-cbu.cam.ac.uk/personal/matt.davis/Cmabrigde
C. MRs with Word-Level Perturbations
MR2-1 Language Switch
This MR translates some words into other languages. Many users on social media platforms can comprehend more than one language. Thus, users may use words or phrases from different languages in a piece of text to evade moderation. Note that we also consider the switch between different written forms of a language as a language switch. For example, in Chinese, the transformation between traditional and simplified Chinese characters is commonly seen, such as “發” (Send) and “发” (Send).
MR2-2 Homophone Substitution
This MR is based on homophone substitution, which replaces words with other words or characters that have the same or similar pronunciation. Simple examples include “Dye” ([daɪ]) for “Die” ([daɪ]), “Nite” ([naɪt]) for “Night” ([naɪt]), and “C” ([siː]) for “see” ([siː]). Complex homophone substitution includes “w8” ([weɪt]) for “wait” ([weɪt]), which uses a character outside the English alphabet.
In Chinese, the pronunciation of “酱紫” (“sauce purple”) is similar to that of “这样” (“this way”), so the former is frequently used as a substitute for the latter.

MR2-3 Abbreviation Substitution

This MR replaces words or phrases with their abbreviations or acronyms, such as “ASAP” for “As Soon As Possible”.
MR2-4 Word Splitting
This MR injects spaces into a word, aiming to split it into sub-words. For example, “Hello” can be recognized by most popular NLP models. If we add a space into the word, making it “Hell o”, most NLP tokenizers will recognize it as two separate tokens, namely “Hell” and “o”, which could affect the models’ judgment. This can also happen in Chinese. For example, “使用户满意”, which means “satisfy the users”, should be tokenized as “使/用户/满意”. If we add some noise to separate the characters, it is easy to make the tokenization result become “使用/户/满意” (“Use/Household/Satisfy”), leading to a change of semantic meaning.
D. MRs with Sentence-Level Perturbations
MR3-1 Benign Context Camouflage
This MR uses benign context camouflage, which inserts plenty of benign or unrelated sentences to camouflage the toxic sentence. For example, a malicious advertisement can be surrounded by numerous unrelated and non-commercial contents to bypass the malicious advertisement detection model.
E. Discussion
Intersections of Different MRs. Some perturbations can fall into multiple MR categories. For example, some substitution candidates not only have a similar visual appearance to the original character but also are the homophone of the original character, which corresponds to MR1-1 (visual-based substitution) and MR2-2 (homophone substitution), respectively. In addition, similar-looking characters tend to have similar pronunciations, especially for Chinese. However, the MR definitions are clear and can cover all the examples from our pilot study. When counting the distribution, we randomly assign examples to one of the possible MRs.
Combinations of Different MRs. We can use a combination of different MRs to generate diverse test cases. However, to balance the generated test cases’ diversity and readability, we restrict the maximum number of MRs used in each test case. We evaluate the impact of MR combinations in Section IV-C.
Generalization to other software and languages. In this work, we focus on textual content moderation software and implement our MRs for the two most widely used languages: English and Chinese. However, based on our design methodology, these MRs can be easily generalized to other languages and to test other NLP software, such as software for user review analysis and machine translation.
F. Implementation Details
In this section, we describe the implementation details of MTTM. Specifically, we implement (1) a target word selection approach and (2) the perturbations on the selected words in the different MRs except MR3-1. For MR3-1, we conduct sentence-level perturbation without the need to identify target words.
**Target Word Selection.** We intend to perturb the words important for the content moderation scenario so that perturbations on these words are more likely to affect the output of content moderation software. Specifically, we focus on words frequently appearing in the toxic content datasets but less frequently in a general-domain corpus. Thus, we use TF-IDF to select target words. We utilize sklearn\(^{11}\) for the English corpus and the Jieba\(^{12}\) library for the Chinese corpus. After filtering out
\(^{11}\)https://scikit-learn.org/
\(^{12}\)https://github.com/fxsjy/jieba
the stop words, we select the top 20 words with the highest TF-IDF score for each dataset.
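A minimal sketch of this selection step, assuming scikit-learn and a plain list of toxic texts; the contrast against a general-domain corpus and the Chinese (Jieba) pipeline are omitted for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_tfidf_words(toxic_texts, k=20):
    """Rank words by their mean TF-IDF score over the toxic corpus
    and return the top-k as perturbation targets."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(toxic_texts)   # shape: (n_docs, n_terms)
    scores = tfidf.mean(axis=0).A1           # average TF-IDF per term
    terms = vec.get_feature_names_out()
    ranked = sorted(zip(terms, scores), key=lambda t: -t[1])
    return [w for w, _ in ranked[:k]]
```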
**MR1-1 Visual-Based Substitution.** For each English character in the target words, we use the DeepAI visual similarity API\(^{13}\) to find the most visually similar character in the Greek and German alphabets as the candidate. For each Chinese character in target words, we leverage SimilarCharacter\(^{14}\), a Python library that uses OpenCV\(^{15}\) to calculate visual similarity scores within 3,000 commonly used Chinese characters, to find the character with the highest visual similarity score as the candidate. To ensure a high similarity, we only replace the original character with the candidate if their similarity score is higher than 0.7.
**MR1-2 Visual-Based Splitting.** For both languages, we use the DeepAI visual similarity API to find the most visually similar bi-char combination as the candidate. We only replace the original character with the candidate if their similarity score is higher than 0.7. Due to the large character space of Chinese, it is time-consuming to traverse all the bi-char combinations. Thus, we use the Chinese Character Dictionaries\(^{16}\) to split the splittable characters in the target words to obtain the candidates.
**MR1-3 Visual-Based Combination.** MR1-3 uses the splitting substitution (the original character, the candidate) dictionary built in MR1-2 (Visual-Based Splitting). For each target word, if any of its bi-char combinations occur in the dictionary, we substitute the combined character for the bi-char combination.
**MR1-4 Noise Injection.** We implement two character-level noise injection methods: insertion and repetition. For insertion, we randomly insert a character into the target word. According to the definition in Section III-B, we implement two types of insertion: inserting a character from the language’s alphabet, which is closely related to the context, and inserting a unique punctuation character, which is from a different domain. For repetition, we repeat the vowel in each English target word and randomly repeat a character in each Chinese target word.
**MR1-5 Character Masking.** For each target word, we randomly replace a character with “*” to mask the character. For English, we mask a vowel in the target word.
**MR1-6 Character Swap.** For each target word, we randomly swap two adjacent characters. For Chinese, we randomly swap characters after tokenization.
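The character-level perturbations of MR1-4, MR1-5, and MR1-6 each reduce to a few lines; the following illustrative Python versions mirror the English implementations described above, but are sketches rather than the paper’s actual code.

```python
import random

def inject_noise(word: str, noise: str = "*") -> str:
    """MR1-4: insert a noise character at a random interior position."""
    if len(word) < 2:
        return word + noise
    i = random.randint(1, len(word) - 1)
    return word[:i] + noise + word[i:]

def mask_char(word: str) -> str:
    """MR1-5: mask one character (a vowel, if present) with '*'."""
    vowels = [i for i, c in enumerate(word) if c.lower() in "aeiou"]
    i = random.choice(vowels) if vowels else random.randrange(len(word))
    return word[:i] + "*" + word[i + 1:]

def swap_chars(word: str) -> str:
    """MR1-6: swap two randomly chosen adjacent characters."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

print(inject_noise("hello"), mask_char("hello"), swap_chars("hello"))
```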
**MR2-1 Language Switch.** For each target word in English (resp. Chinese), we invoke Google Translate API\(^{17}\) to translate it into Spanish (resp. English), which is the most widely used second language in the USA (resp. China).
**MR2-2 Homophone Substitution.** We use the eng-to-ipa\(^{18}\) Python library to convert English words to International Phonetic Alphabet (IPA) and then find other English words with the most similar IPA as substitution candidates. For Chinese, we use the pypinyin\(^{19}\) and pinyin2hanzi\(^{20}\) libraries to find the substitution candidates.
**MR2-3 Abbreviation Substitution.** For English target words, we replace them with their acronym, which is the word composed of the first letters of the target words. For Chinese target words, we first use the pypinyin Python library to convert them to Pinyin and then use the acronym of their Pinyin as the candidate.
**MR2-4 Word Splitting.** For each target word, we randomly insert a blank space.
**MR3-1 Benign Context Camouflage.** We randomly collect ten benign sentences for each dataset from its non-toxic class. Then for each toxic sentence, we insert the benign sentence either before or after it.
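The word- and sentence-level implementations are similarly compact; below are illustrative versions of abbreviation substitution (MR2-3), word splitting (MR2-4), and benign context camouflage (MR3-1), simplified relative to the descriptions above.

```python
import random

def abbreviate(phrase: str) -> str:
    """MR2-3: replace a multi-word phrase with its acronym."""
    return "".join(w[0].upper() for w in phrase.split())

def split_word(word: str) -> str:
    """MR2-4: insert a blank space at a random interior position."""
    if len(word) < 2:
        return word
    i = random.randint(1, len(word) - 1)
    return word[:i] + " " + word[i:]

def camouflage(toxic: str, benign_pool: list) -> str:
    """MR3-1: place a randomly chosen benign sentence before or after
    the toxic sentence."""
    benign = random.choice(benign_pool)
    return random.choice([benign + " " + toxic, toxic + " " + benign])

print(abbreviate("as soon as possible"))  # -> ASAP
```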
### IV. Evaluation
To evaluate the effectiveness of MTTM, we use our method to test three commercial software products and two SOTA algorithms for content moderation. In this section, we try to answer the following four Research Questions (RQs):
- **RQ1:** Are the test cases generated by MTTM toxic and realistic?
- **RQ2:** Can MTTM find erroneous outputs returned by content moderation software?
- **RQ3:** Can we utilize the test cases generated by MTTM to improve the performance of content moderation?
- **RQ4:** How would different factors affect the performance of MTTM?
#### A. Experimental Settings
1) **Datasets:** We used different kinds of datasets as seed data to validate MTTM. Previous researchers have collected, labeled, and released various types of data for research purposes. In this paper, we choose the datasets with the highest citations according to Google Scholar or those with the most stars on GitHub. Other than the above-mentioned four datasets (in Section III-A), namely HateOffensive, SMS Spam Collection, SpamMessage, and Dirty, we utilize another two datasets: Sexting\(^{21}\), an English pornographic text dataset containing 537 sexting messages, and Midu [44], a Chinese novel paragraph dataset collected from an online literature reading platform called MiDu App\(^{22}\), which is a corpus with 62,876 paragraphs, including 7,360 pornographic paragraphs and 55,516 normal paragraphs.
### Table II: Important statistics of the six datasets.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>#Sent</th>
<th>Lang</th>
<th>Type</th>
<th>Source</th>
</tr>
</thead>
<tbody>
<tr>
<td>HateOffensive</td>
<td>24.8K</td>
<td>English</td>
<td>Abuse</td>
<td>Twitter</td>
</tr>
<tr>
<td>Dirty</td>
<td>2.5K</td>
<td>Chinese</td>
<td>Abuse</td>
<td>Weibo</td>
</tr>
<tr>
<td>SMS Spam</td>
<td>5.5k</td>
<td>English</td>
<td>Spam</td>
<td>Grumbletext</td>
</tr>
<tr>
<td>Spam Message</td>
<td>60K</td>
<td>Chinese</td>
<td>Spam</td>
<td>Taobao</td>
</tr>
<tr>
<td>Sexting</td>
<td>0.5K</td>
<td>English</td>
<td>Porno</td>
<td>Github</td>
</tr>
<tr>
<td>Midu</td>
<td>7.3K</td>
<td>Chinese</td>
<td>Porno</td>
<td>Midu</td>
</tr>
</tbody>
</table>
\(^{13}\)https://deepai.org/machine-learning-model/image-similarity
\(^{14}\)https://github.com/contr4l/SimilarCharacter
\(^{15}\)https://opencv.org/
\(^{16}\)https://github.com/kfcd/chaizi
\(^{17}\)https://translate.google.com/
\(^{18}\)https://github.com/mphilipp/English-to-IPA
\(^{19}\)https://github.com/mozillazg/python-pinyin
\(^{20}\)https://github.com/letiantian/Pinyin2Hanzi
\(^{21}\)https://github.com/mathigatti/sexting-dataset
\(^{22}\)http://www.mindureader.com/
Important statistics of the six datasets are shown in Table II.
2) **Software and Models Under Test:** We use MTTM to test commercial textual content moderation software products and SOTA academic models. The commercial software products include Google Jigsaw’s Perspective\(^{23}\), Baidu AI Cloud\(^{24}\), and Huawei Cloud\(^{25}\). These software products were tested against the three typical kinds of toxic content in our evaluation. One exception is Google Jigsaw’s moderation of malicious advertisements, because Google does not provide such functionality. They are all popular software products for content moderation developed by companies and can be accessed by registered users via their APIs. For research models, we select models from GitHub and the Huggingface Model Zoo\(^{26}\) with the highest downloads and stars in the past three years. For abuse detection, we select HateXplain [45], a BERT model fine-tuned on abuse detection datasets. For spam detection, we use a BERT model fine-tuned on the spam detection dataset, downloaded from Huggingface\(^{27}\). Since there are no publicly available pornography detection models, we do not test a research model for this task in our experiments.
B. RQ1: Are the test cases generated by MTTM toxic and realistic?
MTTM aims to generate test cases that are toxic and are as realistic as the ones real-world users produce to evade moderation. Thus, in this section, we evaluate whether the generated test cases are still toxic (i.e., semantic-preserving) and whether they are realistic. We generated 100 sentences with each perturbation method (i.e., 1,100 generated sentences in total) and recruited two annotators with Bachelor’s degrees or above and proficiency in both English and Chinese. After being given guidelines and training sessions, the annotators were asked to annotate all the generated pairs, each containing an original and a perturbed sentence. For each sentence pair, we asked the following two questions: (1) From “1 strongly disagree” to “5 strongly agree”, how much do you regard the sentence as toxic content (abuse, pornography, or spam)? (2) From “1 strongly disagree” to “5 strongly agree”, how much do you think the perturbation is realistic in the sense that real users may use it? Note that when asking whether a sentence is toxic or not, the original sentence and the perturbed sentence were not presented at the same time. The annotators could only view one sentence at a time from shuffled data when labeling toxicity. We reviewed test cases with any disagreement or unrealistic flags. The annotation results show that the average toxicity score is 4.51 and the average realism score is 4.12. We follow [46] to measure the inter-rater agreement using Randolph’s Kappa, obtaining a value of 0.81, which indicates “almost perfect agreement”.
Answer to RQ1: The test cases generated by MTTM are toxic and realistic.
C. RQ2: Can MTTM find erroneous outputs returned by content moderation software?
MTTM aims to automatically generate test cases to find potential bugs in current content moderation software. Hence, in this section, we evaluate the number of bugs that MTTM can find in the outputs of commercial content moderation software and academic models. We first input all the original sentences and obtain the classification label for each software product or model under test. If an original sentence was labeled as “non-toxic”, it was filtered out, because we intend to find toxic contents that can evade moderation. The remaining sentences are regarded as seed sentences for test case generation. The number of original sentences and seed sentences is presented in Table III. Then, we apply the perturbations in MTTM’s MRs to the seed sentences to generate test cases. Finally, we use the generated test cases to validate the software products and academic models. In particular, we check whether these test cases are labeled as “toxic” or “non-toxic”. Since the generated texts should preserve the semantics of the seed sentences, they are supposed to be labeled as “toxic”. If not, the generated test cases evade the moderation of the software products or academic models, indicating erroneous outputs. To evaluate how well MTTM does at generating test cases that trigger errors, we calculate the Error Finding Rate (EFR), which is defined as follows:
\[
\text{EFR} = \frac{\text{the number of misclassified test cases}}{\text{the number of generated test cases}} \times 100\%.
\]
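As a worked instance of the formula (a trivial helper, not part of the paper’s tooling):

```python
def efr(misclassified: int, generated: int) -> float:
    """Error Finding Rate in percent, as defined above."""
    return 100.0 * misclassified / generated

print(efr(839, 1000))  # 83.9, i.e., 83.9% of the generated test cases evade moderation
```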
The EFR results are shown in Table IV. In general, MTTM achieves high EFRs. The EFRs of commercial software products are lower than those of academic models. Using different MRs, MTTM achieves up to 83.9%, 51%, and 82.5% EFR when testing moderation software provided by Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when testing the SOTA academic models. We think this is because commercial software has been armed with various rule-based methods to detect input perturbations. For example,
Baidu has a patent titled “Method and equipment for determining sensitivity of target text”\(^\text{28}\). Specifically, they provide pre-service rules in their pretreatment unit: 1) remove unusual characters, such as “*”, “%”, “#”, and “$”, and 2) convert text strings with deformed bodies, such as perpendicular-shape literals and characters in a fancy style, to normal text strings. Notably, all the academic models can detect sentence-level benign context camouflage, which may be due to the attention mechanism employed by these models. In addition, all software products and models can pass the test cases generated with MR1-3 (Visual-Based Combination); therefore, we do not include those results in Table IV. The performance of commercial textual content moderation software varies greatly against different kinds of toxic content. For example, Google Jigsaw’s Perspective performs much better on pornography detection than on abusive language detection. This is probably because some abusive language, especially swear words like “fuck”, is not taken that seriously on informal occasions. The performance of Baidu AI Cloud on malicious advertisement detection is much worse than that on the other two tasks, which might be related to the fact that Baidu’s revenue mainly comes from advertising. In addition, there is a possible consensus among Chinese web users that malicious advertisement is not as bad as abusive language and pornography. Therefore, companies seem to focus on different kinds of toxic content when developing their content moderation software.
Baidu, the biggest search engine company in China, provides textual content moderation software that outperforms that of Huawei, the biggest communication technology company in China. This is probably because Baidu has more business scenarios, allowing it to design more rules and collect more training data to improve its content moderation software.
---

**Answer to RQ2: MTTM achieves up to 83.9%, 51%, and 82.5% EFR when testing moderation software provided by Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when testing the SOTA academic models.**

---
**D. RQ3: Can we utilize the test cases generated by MTTM to improve the performance of content moderation?**
We have demonstrated that MTTM can generate toxic and realistic test cases that evade the moderation of commercial software products and SOTA academic models. As shown in the “Abuse Detection” column in Table IV, MTTM achieves high EFR on academic models for most of its MRs (e.g., 91.2% for MR1-1 Visual-Based Substitution), indicating the generated test cases can easily fool the models. A natural follow-up question is: can these test cases be utilized to improve the performance of content moderation? In other words, we hope to improve model robustness. A natural thought is to retrain the models using test cases generated by MTTM and check whether the retrained models are more robust to various perturbations.
Specifically, we select the Abuse Detection task and use the Hate-Offensive Dataset [43]. We split the dataset into three parts: training set, validation set, and test set with the ratio of 6:2:2. We first fine-tune a pre-trained BERT model [12] on the training set as our abuse detection model, which is a widely used scheme for text classification. We adopt the default fine-tuning settings suggested by Huggingface\(^\text{29}\). Specifically, we train the model for 3 epochs, with a learning rate of $5 \times 10^{-5}$, a batch size of 16, 500 warm-up steps, and a weight decay of 0.01. We select the model with the highest accuracy on the validation set and use MTTM to test its robustness.
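As a rough sketch of this fine-tuning setup – assuming a standard Hugging Face `Trainer` pipeline and pre-tokenized `train_ds`/`val_ds` datasets (both hypothetical names; best-model selection on the validation set is omitted) – the reported hyper-parameters translate to:

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

def finetune_bert(train_ds, val_ds):
    """Fine-tune bert-base for binary abuse detection (toxic vs. non-toxic)."""
    # Hyper-parameters as reported: 3 epochs, lr 5e-5, batch size 16,
    # 500 warm-up steps, weight decay 0.01.
    args = TrainingArguments(
        output_dir="abuse-detection-bert",
        num_train_epochs=3,
        learning_rate=5e-5,
        per_device_train_batch_size=16,
        warmup_steps=500,
        weight_decay=0.01,
    )
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # two labels: toxic / non-toxic
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=val_ds)
    trainer.train()
    return model
```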
Then, for retraining with MTTM, we conduct fine-tuning with the failed test cases generated by MTTM. We generated test cases with MTTM and randomly sampled 300 cases that fooled the model. Labeling them as toxic content, we added them to the original training set to retrain the model. The setting of hyper-parameters is identical to that of the regular training mentioned above.
---

\(^{28}\)https://patents.google.com/patent/CN102184188A/en

\(^{29}\)https://huggingface.co/transformers/v3.2.0/custom_datasets.html
To validate the effectiveness of robust retraining with MTTM, we use MTTM to test the model after robust retraining, denoted as “Aug”, and compare its EFRs with the original model’s, denoted as “Ori”. The results are presented in Table V. We can observe that the test cases generated by MTTM can largely improve the robustness of the content moderation models, in the sense that the EFRs are significantly reduced (e.g., from 71.3% to 0.0% for MR1-1 Visual-Based Substitution). In other words, after retraining with MTTM’s test cases, the model is rarely fooled by any of the perturbations. Moreover, the model’s accuracy remains on par after robust training (from 91.5% to 91.2%), which means the retraining did not affect model performance on the original test set.
Notably, our approach will not introduce extra unknown tokens because: (1) BERT has a large vocabulary (~30,000 tokens) generated from massive data on the web, including characters from various languages; (2) BERT uses byte-pair encoding, an encoding technique that can effectively mitigate the out-of-vocabulary problem. For example, the generated “h ello” will be tokenized into “hell” and “lo” instead of being treated as a whole unknown token.
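The sub-word behaviour is easy to inspect; a minimal sketch using the Hugging Face tokenizer (the exact pieces produced depend on the vocabulary, so the example is illustrative only):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Perturbed words fall back to sub-word pieces rather than one unknown token.
for text in ["hello", "h ello", "he llo"]:
    print(text, "->", tokenizer.tokenize(text))
```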
We do not conduct experiments on improving industrial models because industrial moderation only exposes APIs, while robust retraining requires access to model internals. However, we believe robust retraining with MTTM’s test cases would also improve the robustness of industrial models because the underlying models are similar. In the future, we can study how to improve the robustness of industrial moderation by designing a preprocessing module that detects intentionally perturbed inputs and filters them out or reverses the perturbations.
**Answer to RQ3:** Test cases generated by MTTM can effectively improve the robustness of academic content moderation models.
---
**TABLE V: Error Finding Rates (EFRs) on abusive language detection models before (“Ori”) and after (“Aug”) retraining with the test cases generated by MTTM.**
<table>
<thead>
<tr>
<th>Level</th>
<th>Perturb Methods</th>
<th>Ori</th>
<th>Aug</th>
</tr>
</thead>
<tbody>
<tr>
<td>Char</td>
<td>Visual-Based Substitution</td>
<td>71.3</td>
<td>0.0</td>
</tr>
<tr>
<td></td>
<td>Visual-Based Splitting</td>
<td>49.5</td>
<td>1.4</td>
</tr>
<tr>
<td></td>
<td>Noise Injection (non-lang)</td>
<td>56.1</td>
<td>2.5</td>
</tr>
<tr>
<td></td>
<td>Noise Injection (lang)</td>
<td>56.1</td>
<td>2.5</td>
</tr>
<tr>
<td></td>
<td>Char Masking</td>
<td>43.9</td>
<td>2.5</td>
</tr>
<tr>
<td></td>
<td>Char Swap</td>
<td>43.6</td>
<td>3.0</td>
</tr>
<tr>
<td>Word</td>
<td>Language Switch</td>
<td>76.2</td>
<td>5.9</td>
</tr>
<tr>
<td></td>
<td>Homophone Substitution</td>
<td>62.5</td>
<td>3.1</td>
</tr>
<tr>
<td></td>
<td>Abbreviation Substitution</td>
<td>76.2</td>
<td>2.2</td>
</tr>
<tr>
<td></td>
<td>Visual Splitting</td>
<td>71.3</td>
<td>2.0</td>
</tr>
<tr>
<td>Sentence</td>
<td>Benign Context Camouflage</td>
<td>12.0</td>
<td>0.0</td>
</tr>
<tr>
<td>Multi</td>
<td>Perturbation Combinations</td>
<td>81.4</td>
<td>3.5</td>
</tr>
</tbody>
</table>
---
**Fig. 1:** The Error Finding Rates of MTTM with different numbers of target words.
**E. RQ4: How would different factors affect the performance of MTTM?**
This section explores the impact of four factors on the performance of MTTM. First, we studied the impact of noisy character selection. In the previous sections, we observed that inserting noisy characters into target words (MR1-4) can help bypass the content moderation software and models. To study the impact of noisy character selection, we try two types of noisy characters: characters from the dataset and special characters that are not in the dataset. As shown in Table IV, inserting characters from the dataset as noise (dubbed Noise Injection (lang)) is much more effective than inserting special characters that are not in the dataset (dubbed Noise Injection (non-lang)). One possible reason is that commercial software products apply rule-based preprocessing to the input sentence to remove uncommon special tokens or to map non-English characters (e.g., “à”) to English characters (e.g., “a”). These techniques are usually called text normalization.
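A minimal sketch of the two noise-injection variants compared above; the character pools and the insertion position are illustrative assumptions, not the exact settings used in the experiments:

```python
import random

LANG_NOISE = list("abcdefghijklmnopqrstuvwxyz")  # characters from the dataset
NON_LANG_NOISE = list("*%#$@&")                   # special characters

def inject_noise(sentence, target_words, pool):
    """Insert one noisy character into the middle of every target word."""
    out = []
    for w in sentence.split():
        if w.lower() in target_words:
            mid = len(w) // 2
            w = w[:mid] + random.choice(pool) + w[mid:]
        out.append(w)
    return " ".join(out)

# inject_noise("you are stupid", {"stupid"}, LANG_NOISE)      e.g. "stuqpid"
# inject_noise("you are stupid", {"stupid"}, NON_LANG_NOISE)  e.g. "stu#pid"
```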
Second, we studied the impact of the number of target words. In the previous sections, we calculated TF-IDF scores and selected the top 20 words as target words. To study the impact of the number of target words, we vary it from 10 to 50 and compute the corresponding EFRs. As shown in Fig. 1, MTTM finds more errors as the number of target words increases. However, the EFRs saturate when the number of target words exceeds 40.
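Target-word selection along these lines can be sketched with scikit-learn’s `TfidfVectorizer`; this is a simplified stand-in for the scoring actually used:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def select_target_words(toxic_sentences, k=20):
    """Rank words by their summed TF-IDF score over the toxic corpus."""
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(toxic_sentences)           # (n_docs, n_terms)
    scores = np.asarray(tfidf.sum(axis=0)).ravel()       # aggregate per term
    terms = np.array(vec.get_feature_names_out())
    return terms[np.argsort(scores)[::-1][:k]].tolist()  # top-k target words
```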
Third, we studied the impact of the number of perturbations. In the previous sections, we perturbed all the target words in each sentence. In this experiment, for each sentence, we compare the EFRs of perturbing all the target words and that of randomly perturbing half of the target words. As shown in Fig. 2, perturbing all the target words in each sentence can significantly improve the EFRs. Only perturbing half of the target words in each sentence is not sufficient to bypass the content moderation software.
Last but not least, we studied the impact of the perturbation combinations. In the previous sections, we showed that using each perturbation method alone can already achieve a good EFR. To study the impact of different perturbation combinations, we randomly select one char-level perturbation and one word-level perturbation, leading to 24 ($6 \times 4$) combinations. According to the results in Table IV, combining perturbations from different levels can further increase the EFR.
**Answer to RQ4**: Noisy characters from the same dataset, more target words, more perturbations, and the combination of different perturbations can boost the performance of MTTM.
**F. Compared with Textual Adversarial Attack Methods**
In this section, we illustrate the advantages of MTTM over textual adversarial attack methods, which are another line of research for finding errors in NLP software. First, MTTM is more comprehensive: most of these methods focus on a small subset of the perturbations in MTTM. In addition, as reported by recent studies [25], [47], textual adversarial attack methods often generate low-quality test cases because the semantics change in many cases (around 40%), while MTTM generates toxic and realistic test cases (Section IV-B).
To show the effectiveness of MTTM, we conduct an experiment that compares MTTM with textual adversarial attack methods in terms of EFR and running time. Specifically, we attacked our BERT-based abuse detection model in English using two well-known NLP adversarial methods, PSO [48] and BAE [23], which achieve an EFR of 65.0% and 47.8%, respectively, while a majority of MTTM’s MRs achieve more than 85% EFR (Table IV). In addition, adversarial methods need much more running time than MTTM because they rely on extensive model queries, while MTTM needs only one query per test case: the running times of the two adversarial methods are 605.2x and 72.5x that of MTTM. In summary, MTTM finds more errors in less running time.
V. THREATS TO VALIDITY

The validity of our study may be subject to some threats. The first threat is that the test cases generated by MTTM may become “non-toxic” after many perturbations, leading to false positives. To relieve this threat, we conducted a user study to validate whether the generated test cases are toxic. We further asked the annotators to label whether the test cases reflect inputs from real users. The results show that the generated test cases are toxic and realistic. The second threat is that we implement MTTM for two languages, which may not generalize to other natural languages. To reduce this threat, the choice of the two languages was made thoughtfully: they are a representative alphabet-based language and a representative pictograph-based language, respectively. In addition, we believe our MRs can generalize to other languages because most languages share similar properties (e.g., visual similarity, homophones, language switch). The third threat lies in our evaluation of five content moderation systems, which might not be a proper estimate of MTTM’s performance on other systems. We test commercial content moderation software and SOTA academic models to mitigate this threat. In particular, we test content moderation software provided by three big companies, which already have their own techniques to defend against malicious inputs. In the future, we could test more commercial software and research models to further mitigate this threat. The fourth threat is that MTTM could become outdated as bypass techniques evolve. To reduce this threat, we provide a comprehensive workflow: study the user behaviors, summarize and design the MRs, generate test cases, and use failure cases to improve robustness. If other bypass techniques were proposed, people could follow this workflow to design new MRs. We also believe that automated MR generation is a promising and useful direction. This line of research mainly focuses on the automated generation of a specific kind of MRs (e.g., polynomial MRs [49], [50]) or on automated MR generation leveraging software redundancy [51]. Since automated MR generation for content moderation software faces different challenges, we regard it as an important future work.

VI. RELATED WORK

A. Robustness of AI Software

AI software has been adopted by various domains, such as autonomous driving and face recognition. However, AI software is not robust enough and can generate erroneous outputs that lead to fatal accidents [52], [53]. To this end, researchers have proposed a variety of methods to generate adversarial examples or test cases that can fool AI software [54]–[64]. Meanwhile, researchers have also designed approaches to improve AI software’s robustness, for example, robust training mechanisms [65]–[67] and network debugging [68], [69]. NLP software has also been widely used in recent years. Typical scenarios include sentiment analysis [70], [71], machine translation [72]–[74], and text-to-speech synthesis [75], [76]. Because of its importance, researchers from both the NLP and software engineering areas have started to explore the robustness of NLP software [77]–[79]. In particular, Ribeiro et al. [80] designed a behavioral testing method to test NLP software for sentiment analysis, duplicate question answering, and machine comprehension. Li et al. [22] used deep learning models to generate test cases for deep learning-based NLP software. Sun et al. [21] proposed a word-replacement-based approach to test and fix machine translation bugs without retraining. Our paper studies the robustness of a widely-used kind of AI software, content moderation software, which has not been systematically studied.
B. Robustness of Textual Content Moderation Software
We systematically reviewed papers on testing and attacking textual content moderation across related research areas: software engineering, natural language processing, and speech signal processing. Specifically, Ahlgren [81] used metamorphic testing to test Facebook’s simulation system, which is used to tackle harmful content. Li et al. [27] reported that visual-based substitution (MR1-1), character swap (MR1-6), and word splitting (MR2-4) could fool NLP models. Gao et al. [26] proposed a black-box attack method based on character swap (MR1-6) to fool deep learning classifiers. Eger et al. [29] used visual-based substitution (MR1-1) to attack NLP models. Kapoor et al. [82] stated that Indian Internet users could use English-Hindi code-switched language to express abusive content (MR2-1). Cid et al. [83] found that spammers reduce the effectiveness of spam detection algorithms by introducing noise in their messages (MR1-4). Li et al. [84] found that malicious Chinese netizens may obfuscate toxic words in their comments with variants that are visually similar to the original words (MR1-1).
Our paper makes substantial contributions beyond the above papers. First, MTTM is much more comprehensive: only five kinds of perturbations explored in these papers overlap with our MRs, and to the best of our knowledge, the other six MRs in MTTM have not been explored in the existing literature across the related research areas. Moreover, all these papers focus on a single language setting, while we implement MTTM for both English and Chinese. In addition, all our MRs are supported by a pilot study on real user inputs, whereas existing papers derived their perturbations from domain knowledge alone. Furthermore, most of the existing papers were only evaluated on research models, while MTTM has also been evaluated on three commercial content moderation software products. Thus, we believe MTTM is the first comprehensive testing framework for textual content moderation.
VII. CONCLUSION
This paper proposed MTTM, the first comprehensive testing framework for validating textual content moderation software. Unlike existing testing or adversarial attack techniques for general NLP software, which only provide common perturbations and cover a very limited set of the toxic inputs that malicious users may produce, MTTM contains eleven metamorphic relations that are mainly inspired by a pilot study. All the metamorphic relations in MTTM have been implemented for two languages: English and Chinese. Our evaluation shows that the test cases generated by MTTM can easily evade the moderation of two SOTA moderation algorithms as well as commercial content moderation software provided by Google, Baidu, and Huawei. The test cases have further been utilized to retrain the algorithms, which exhibited substantial improvement in robustness while maintaining comparable accuracy on the original test set. We believe this work is a crucial first step toward the systematic testing of content moderation software. For future work, we will continue developing metamorphic relations in MTTM and extend it to more language settings. We will also launch an extensive effort to help continuously test and improve content moderation software.
VIII. ACKNOWLEDGEMENT
The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the General Research Fund) and the National Natural Science Foundation of China (Grant Nos. 62102340 and 62206318).
REFERENCES
Dalton: Learned Partitioning for Distributed Data Streams
Eleni Zapridou
EPFL
eleni.zapridou@epfl.ch
Ioannis Mytilinis∗
Oracle
ioannis.mytilinis@oracle.com
Anastasia Ailamaki
EPFL
anastasia.ailamaki@epfl.ch
ABSTRACT
To sustain the input rate of high-throughput streams, modern stream processing systems rely on parallel execution. However, skewed data yield imbalanced load assignments and create stragglers that hinder scalability. Deciding on a static partitioning for a given set of “hot” keys is not sufficient as these keys are not known in advance, and even worse, the data distribution can change unpredictably. Existing algorithms either optimize for a specific distribution or, in order to adapt, assume a centralized partitioner that processes every incoming tuple and observes the whole workload. However, this is not realistic in a distributed environment, where multiple parallel upstream operators exist, as the centralized partitioner itself becomes the bottleneck and limits scalability.
In this work, we propose Dalton: a lightweight, adaptive, yet scalable partitioning operator that relies on reinforcement learning. By memoizing state and dynamically keeping track of recent experience, Dalton: i) adjusts its policy at runtime and quickly adapts to the workload, ii) avoids redundant computations and minimizes the per-tuple partitioning overhead, and iii) efficiently scales out to multiple instances that learn cooperatively and converge to a joint policy. Our experiments indicate that Dalton scales regardless of the input data distribution and sustains 1.3 × - 6.7× higher throughput than existing approaches.
PVLDB Artifact Availability: The source code has been made available at https://github.com/ezapridou/Dalton.
## 1 INTRODUCTION
Stream processing systems cope with data of enormous volume and velocity. From social network analytics to gaming, fraud detection, and stock trading, streaming applications require the real-time processing of high-throughput, in-motion data. Failing to sustain the input rate causes degradation in the quality of service and often jeopardizes the integrity of the entire application. To meet the ever-increasing computational demands, common wisdom suggests parallelization [3, 9, 20, 24, 29, 40, 43].
Figure 1: Impact of distribution changes and partitioner’s parallelism on application’s throughput
The physical limitations of a single machine and the inherently distributed nature of data sources (e.g., geo-distributed sensors in IoT) have sparked a lot of interest in distributed streaming frameworks such as Flink [20], Spark [43], and Kafka Streams [38], by both academia and industry. These systems compile the task at hand into a dataflow graph and follow a data-parallel approach, where different parts of the incoming data are assigned to different workers. Scaling the application under this model requires efficient load balancing – uneven assignments lead to stragglers and resource under-utilization. To make things worse, real data are often highly skewed, stressing the need for efficient parallelization [10, 28]. Therefore, a key research question is how to partition streams in order to achieve balanced execution. While shuffling trivially solves the problem for stateless operators, when state is involved, the optimal partitioning decision defines a complex optimization problem that is data-, resource- and workload-dependent.
As an example, consider a windowed group-by operation. To guarantee group-by semantics, a hash partitioner, which is sensitive to skewed data, is usually applied. To remedy this, previous research has proposed two techniques: i) re-partitioning [13, 15], and ii) key-splitting [30, 31]. Re-partitioning is too heavyweight as it involves state migration and transferring large data volumes over the network. By avoiding both the I/O cost of re-partitioning and the pitfalls of hashing, key-splitting has become the state-of-the-art. Key-splitting works in a Map-Reduce-like fashion. In the “map” stage, tuples are assigned to parallel workers. As each key can be assigned to multiple workers, key-grouping semantics are violated, but we benefit from the available parallelism even when the data distribution is heavily skewed. Then, data is partially aggregated and routed via hashing to the “reducers” for final aggregation. As Katsipoulakis et al. [21] have shown, key-splitting creates a trade-off between the effective parallelism in the first step and the aggregation cost in the second.
To identify the optimal trade-off, we have to answer many important questions such as: which keys to split, how many workers do we need for each key, and which these workers should be. To exacerbate things, the input rate and the underlying data distribution are not stationary but highly volatile and unpredictable: trending events create spikes in the load, and topic drifts change the set of “hot” keys that are responsible for load imbalance. Thus, key-splitting decisions should not be part of an offline, optimize-once process but a continuous and adaptive one.
During the past years, there have been many research efforts that employ key-splitting for stream partitioning, under both the tuple-at-a-time [21, 30–32] and the micro-batching [1, 2] processing models. However, existing techniques suffer from at least one of the following issues: i) they do not adapt to distribution/rate changes, or ii) they cannot scale efficiently, as the partitioner itself becomes the bottleneck. When the partitioner’s input comes from multiple upstream operators, a single partitioning task may not be sufficient to sustain the load. Naively scaling by replicating the partitioner does not resolve the problem, as the locally-optimal decisions of each partitioner are not guaranteed to converge to a good global policy. Moreover, as existing partitioning functions are stateless, they: i) incur multiple redundant computations per tuple, further overloading the partitioner, and ii) miss the opportunity to exploit past experience for quickly converging to an efficient global policy.
Figure 1 illustrates a scenario that captures both deficiencies. An input stream with two parallel data sources produces uniform data, and all tuples go through a centralized partitioner. At $t = 50k$, we double the partitioners, and for the majority of the examined algorithms throughput increases, indicating that the partitioner itself had become the bottleneck. Then, after a while, one of the input streams becomes skewed due to a trending event. On the one hand, we observe that when following a static policy (Hashing, Two-Choices [31]), execution benefits from the second partitioner since individual partitioners follow the same strategy. However, static strategies cannot effectively handle all the different distributions. On the other hand, DAgreedy [32], which follows an adaptive policy, does not benefit from the second partitioner, as each of the two replicas acts independently and the system cannot converge.
This work proposes Dalton: a stream partitioning operator that can be injected into any stream processing system and jointly addresses both the adaptivity and the partitioner-scalability problem. To adapt to the distribution, Dalton relies on reinforcement learning (RL). For each assignment, a reward is instantly provided through a distributed protocol with tunable synchronization overheads.
We propose Dalton: an RL-based stream partitioner that efficiently scales and maximizes throughput regardless of the data distribution. By adapting to the data and minimizing the per-tuple overheads, Dalton outperforms existing approaches by $1.3x - 6.7x$.
As centralized partitioners can become the bottleneck, we propose a distributed learning protocol that leverages locally learned states to converge to a common global policy. Our protocol achieves $1.4x$ to $3.4x$ higher throughput than simply replicating the partitioner.
## 2 PARTITIONING STATEFUL OPERATORS
The stream processing model assumes a dataflow, usually in the form of a directed acyclic graph, where nodes represent operators and edges data streams. We describe our formulation for the tuple-at-a-time processing model but extend it to the micro-batch model in Section 3.4. Each operator has an input and output queue and works at the tuple granularity, i.e., it pulls a tuple from the input queue, processes it individually, and enqueues it to the output.
### Table 1: Notation table
<table>
<tbody>
<tr>
<td>$S^W$</td>
<td>stream of window $w$</td>
</tr>
<tr>
<td>$e_t = (t,k,\bullet)$</td>
<td>tuple with order $t$ and key $k$</td>
</tr>
<tr>
<td>$c_i$, $1 \leq i \leq n$</td>
<td>partial aggregator subtasks</td>
</tr>
<tr>
<td>$P_t : S \rightarrow \{c_1, \ldots, c_n\}$</td>
<td>partitioning function</td>
</tr>
<tr>
<td>$L^{(t)}(c_i, w)$</td>
<td>load of $c_i$ in $w$ at time $t$</td>
</tr>
<tr>
<td>$I^{(t)}(P_t, w)$</td>
<td>load imbalance in $w$ using $P$</td>
</tr>
<tr>
<td>$\Gamma^{(t)}(w)$</td>
<td>aggregation cost in $w$</td>
</tr>
<tr>
<td>$L^{(t)}_w$</td>
<td>load vector of $w$</td>
</tr>
<tr>
<td>$X^{(t)}_w$</td>
<td>fragmentation vector</td>
</tr>
<tr>
<td>$\mathcal{K}$</td>
<td>number of distinct keys</td>
</tr>
</tbody>
</table>
### Streams & Windows
A stream $S$ comprises an infinite sequence of records that obey a partial order. We consider the sliding window model (count- or time-based) and represent the records of a specific window $w$ with $S^W$. We also assume that the order $t$ of a tuple $e$ in the stream is explicitly expressed as an attribute of the tuple, i.e., $e_t = (t, \bullet)$, and that it is used to assign tuples in windows.
### Parallel dataflows
For each operator, distributed stream processing systems have multiple deployed instances that we call sub-tasks. For example, in Figure 2b, there are three subtasks for the parallel window aggregation. When exchanging data with downstream operators, the routing decision often depends on a set of specific attributes that act as partitioning keys. In the example of Figure 2b, tuples that have the same group-by key should be routed to the same subtask. To distinguish keys from the rest of the attributes, we denote tuples as $e_t = (t,k,\bullet)$, where $k$ is the key. In such key-based partitioning schemes, data skew causes significant performance degradation, as it decreases the effective parallelism. On top of that, the “hot” keys may change over time in an unpredictable manner. Re-partitioning requires re-distributing the state of stateful operators and incurs high I/O costs.
**Key-splitting** is a technique that allows dealing with diverse distributions without re-partitioning the state. The idea is to decompose the parallel stateful operator into two layers. In the first layer, we apply a partitioning function of the form \( P_t : S \rightarrow \{c_1, \ldots, c_n\} \), where the \( c_i \) are the partial aggregator subtasks\(^1\). Each subtask \( c_i \), \( i = 1, \ldots, n \), of the first layer assigns its partitioned tuples \( S_{c_i} \) to windows and computes a partial aggregate for each \( S_{c_i}^w \). In the next step, partial aggregates are routed via hashing to the second layer of subtasks for final aggregation. For simplicity, and as this model resembles Map-Reduce, we call the partial aggregators of the first layer combiners and the final aggregators reducers. Figure 2c demonstrates key-splitting for our group-by example. The partitioner \( P \) uses an arbitrary scheme to assign tuples to combiners \( c_1, c_2, c_3 \), where tuples are partially aggregated, and hashing is used to route the partial aggregates to reducers \( r_1, r_2 \), where the final aggregates for each window are computed.
**Challenge 1: Balanced work in combiners.** Key-splitting permits arbitrary partitioning to combiners. However, to maximize the benefit from the available parallelism, we should minimize load imbalance. Let \( L^{(t)}(c_i, w) \) denote the number of tuples that are assigned to \( w \) and have been routed to \( c_i \) before the arrival of the tuple \( e_t = (t, k, \bullet) \), and \( n \) the number of combiners. Load imbalance is defined as:
\[
I^{(t)}(P_t, w) = \max_{1 \leq i \leq n} \left\{ L^{(t)}(c_i, w) \right\} - \frac{1}{n} \sum_{i=1}^{n} L^{(t)}(c_i, w)
\]
(1)
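For reference, Equation 1 amounts to a max-minus-mean over the combiner load vector; a minimal sketch:

```python
def load_imbalance(loads):
    """I(P, w): max combiner load minus the average load (Equation 1)."""
    return max(loads) - sum(loads) / len(loads)

# load_imbalance([40, 10, 10])  -> 20.0  (one straggler combiner)
# load_imbalance([20, 20, 20])  -> 0.0   (perfectly balanced)
```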
**Challenge 2: Minimal aggregation in reducers.** Balancing the load of the combiners is not enough to achieve high end-to-end throughput, as the bottleneck can shift to the aggregation of the reducers. Thus, the second goal a good partitioning scheme should achieve is to minimize:
\[
\Gamma^{(t)}(w) = \max_{1 \leq j \leq m} A_j^{(t)}(w)
\]
(2)
where \( m \) is the number of reducers, and \( A_j^{(t)}(w) \) is the cost of the \( j \)-th parallel reducer in the window \( w \) before the arrival of \( e_t \).
**Challenge 3: Lightweight partitioners.** Addressing both of the aforementioned challenges requires solving a multi-objective optimization problem. However, as this problem is proven to be intractable [2], such an algorithm would require many computations per tuple. To meet the latency requirements of streaming applications, the partitioner should be lightweight and never become the performance bottleneck.
**Challenge 4: Scaling the number of partitioners.** Even if Challenge 3 is resolved and a really efficient partitioner is available, a partitioner that receives data at high rates from multiple parallel upstream operators/sources can still become the bottleneck. Scaling the number of partitioners is not trivial as, by default, partitioners do not communicate with each other, and local decisions may lead to a highly sub-optimal global partitioning. This is especially true when the data distribution from each upstream operator differs.
Assuming that: i) tuples never arrive out-of-order, ii) the content of a tuple does not affect the processing cost, and iii) hashing is used for routing to the reducers, we compile all four challenges into the scalable and adaptive partitioning problem:
**Problem 1 (Scalable & Adaptive Partitioning).** Devise a partitioning scheme that distributes the load to the combiners and satisfies the following requirements:
1. It minimizes both \( I(P_t, w) \) and \( \Gamma \) at the same time.
2. It requires minimal latency per tuple.
3. It quickly adapts to distribution changes.
4. It can scale to multiple parallel partitioners.
**State-of-the-art.** While there is a lot of research on stream partitioning, no existing technique covers all four points. Most algorithms, e.g., Two-Choices [31], make static decisions that offer different imbalance-aggregation trade-offs but do not adapt at runtime [21, 30]. A state-of-the-art tuple-at-a-time key-splitting algorithm is DAGreedy [32]. DAGreedy does adapt, but for each tuple it calculates a score for each candidate combiner, and thus the partitioning overhead increases with the number of parallel workers. Similarly, a state-of-the-art algorithm for the micro-batch model is Prompt [2], which also adjusts its strategy but has the overhead of sorting all keys in a batch based on their frequency. More importantly, both algorithms cannot efficiently scale out. In cases where multiple partitioning instances are deployed, as each of them optimizes only the locally observed distribution, partitioning policies diverge and degrade the overall system’s performance. Even if the partitioners were syncing and periodically exchanging load information, as the algorithms do not maintain state enriched with past experience, decisions between the sync points would again diverge, and convergence would never be achieved.
## 3 LEARNING PARTITIONING POLICIES
The ever-changing nature of data streams makes static heuristics incapable of providing an efficient partitioning policy over the lifespan of a streaming application. Moreover, as we explained, techniques that rely on stateless partitioning functions forget past experience, increase the processing cost per tuple, and fail to scale in distributed environments. Reinforcement learning (RL) naturally fits this problem: it learns actions based on the actual data distribution, and, as we show in Section 4, by keeping track of past experience, it enables a mechanism for scaling the partitioners. However, trivially applying RL results in a vast and impractical state-action space. In Section 3.1, we analyze the complexity of an RL-based solution and present three key technical ideas that decouple inter-dependent components of the problem and render it in a manageable form.
### 3.1 Cost of RL-based Stream Partitioning
We mathematically model the problem as a Markov Decision Process (MDP). Formally, an MDP is defined as a tuple \( (S, A, P_a, R_a) \), where \( S \) is a finite set of states, \( A \) is a set of actions, \( P_a \) is the transition function that expresses the dynamics of the environment, and \( R_a \) is the reward function. More specifically, \( P_a(s, s') = Pr(s_{t+1} = s' \mid s_t = s, a_t = a) \) denotes the probability that an action \( a \) taken at time \( t \) in state \( s \) will lead to state \( s' \) at time \( t+1 \), and \( R_a(s, s') \) denotes the corresponding immediate reward. We consider the stream partitioner as an agent that takes actions and transitions across states with the aim of maximizing its cumulative reward. The environment is non-stationary, and each partitioning decision changes the load distribution in the combiners, thereby affecting future partitioning decisions. Next, we formally define states, actions, and rewards.
**States.** Given an input tuple \( e_t = (t, k, \bullet) \) and a window \( w \), the state of the partitioner should capture the key attribute of the tuple at hand and the current load distribution for the tuples in \( w \) that arrived before \( e_t \), i.e., \( \{ e_{t'} \in S^w : t' < t \} \). To describe the load distribution, we use a load vector and a fragmentation vector.
**Definition 3.1 (Load Vector).** The load vector \( L_w^{(t)} \) contains the number of tuples \( L^{(t)}(c_i, w) \) that each combiner \( c_i \) received in the window \( w \) before tuple \( e_t \) arrives, i.e.,

\[ L_w^{(t)} = [L^{(t)}(c_1, w), \ldots, L^{(t)}(c_n, w)] \]
**Definition 3.2 (Fragmentation Vector).** The fragmentation vector is defined as:

\[ X_w^{(t)} = \left[ \mathbb{1}(k_1, c_1), \ldots, \mathbb{1}(k_1, c_n), \ldots, \mathbb{1}(k_{|\mathcal{K}|}, c_1), \ldots, \mathbb{1}(k_{|\mathcal{K}|}, c_n) \right] \]

where \( |\mathcal{K}| \) is the number of distinct keys in the window \( w \), and \( \mathbb{1}(k_i, c_j) = 1 \) iff combiner \( c_j \) holds at least one tuple with key \( k_i \) and order \( t' < t \) in \( w \). Conceptually, \( X_w^{(t)} \) is a bit-vector that, for each key \( k \), shows which combiners hold at least one tuple corresponding to \( k \) within the window \( w \).
Thus, we represent the state as the triplet \( (k, X_w^{(t)}, L_w^{(t)}) \). Assuming \( L \) tuples within a window before \( e_t \) arrives, and following a “balls into bins” argument for the load, the number of possible states is:

\[ |\mathcal{K}| \times 2^{n|\mathcal{K}|} \times \binom{L+n-1}{n-1} \]
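This count is easy to reproduce; a small sketch that plugs in the toy example discussed below (10 keys, 100 tuples, 8 combiners):

```python
from math import comb

def num_states(num_keys, load, n_combiners):
    """|K| * 2^(n|K|) * C(L + n - 1, n - 1) possible (k, X, L) states."""
    return (num_keys
            * 2 ** (n_combiners * num_keys)
            * comb(load + n_combiners - 1, n_combiners - 1))

print(f"{num_states(10, 100, 8):.1e}")  # ~3e+35, as discussed below
```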
**Actions.** At each step, an action consists of the selection of a combiner for a given tuple \( e_t \). Therefore, the number of available actions \( |A| \) corresponds to the number of combiners \( n \).
**Rewards.** Based on Equations 1 and 2, the cost of an action \( a \) consists of the action’s contribution to: (i) the imbalance in the combiners, and (ii) the reducers’ aggregation cost. This cost translates to the following reward function:

\[ R_w(e_t, a) = -\left( p_1 \cdot CI_w^{(t)}(a) + p_2 \cdot CA_w^{(t)}(k) \right) \]
where \( p_1 \) and \( p_2 \) are adjustable weights that control the contribution of each metric (with \( p_1 + p_2 = 1 \)). We express the first term as:

\[ CI_w^{(t)}(a) = \frac{L^{(t+1)}(c_a, w) - \bar{L}_w^{(t+1)}}{\max\left\{ L^{(t+1)}(c_a, w), \; \bar{L}_w^{(t+1)} \right\}} \]

where \( \bar{L}_w^{(t)} := \frac{1}{n} \sum_{i=1}^{n} L^{(t)}(c_i, w) \) denotes the average load of the combiners in the window \( w \) before the arrival of tuple \( e_t \), and \( c_a \) refers to the chosen combiner. \( CI \) captures the cost of assigning one more record to combiner \( c_a \). The metric is normalized, taking values in the range \([-1, 1]\). Assigning the tuple to an underloaded combiner results in a negative \( CI \), and hence a high reward, encouraging such choices. Conversely, choosing an overloaded combiner is penalized with a low reward.
For the second term of the cost, we assume that the aggregation cost that action \( a \) incurs for the input tuple \( e_t \) is proportional to the fragmentation \( \| X_w^{(t+1)}(k) \| \) of key \( k \), where \( \| \cdot \| \) denotes the number of 1s in the entries of the bit-vector that correspond to key \( k \), i.e., across how many combiners \( k \) is split. Again, we normalize the cost, which is expressed as:

\[ CA_w^{(t)}(k) = \frac{\| X_w^{(t+1)}(k) \|}{n} \]
Solving the above RL problem with a technique such as Q-learning [42] or Sarsa [36] would require tabulating \( |S| \times |A| \) elements. Even for a toy example with 10 distinct keys, 100 tuples already in the window, and 8 combiners, the number of possible states is approximately \( 3 \times 10^{35} \). Furthermore, with each new tuple in this window, as the load increases, the number of possible states grows as well. The complexity is already prohibitive, and in reality we have millions of tuples per window and hundreds of parallel worker threads, so this number explodes further.
An offline learning approach, as in [1, 33], is also not going to work, since such a train-once process violates the third requirement of Problem 1 – we need a partitioner that continuously learns in an online fashion and adapts to the data.
To decrease the number of states, we employ three key ideas:
**Key idea 1: Separation of concerns.** Following prior art, we can decrease \( |\mathcal{K}| \) by employing RL for the partitioning of the most frequent keys and hashing for the rest. As we show later in Theorem 3.4, the threshold we use to make this distinction results in a maximum of \( n \) frequent keys.
**Key idea 2: Load space quantization.** The number of possible values that the load vector can assume is the largest factor in determining the size of the state space \( |S| \). To tame it, we make the load representation more coarse-grained by quantization. For a quantum \( q \), assuming that a combiner \( c_i \) has a current load of \( L(c_i, w) \), we transition to a new load value only when \( q \) more tuples are assigned to \( c_i \). The quantized load of each combiner can then take one of \( \lceil L/q \rceil \) values, which corresponds to a significant reduction of the state space. With these two modifications, for \( q = 10 \) and considering 8 frequent keys, the number of state-action pairs in our example is already decreased to approximately \( 3 \times 10^{24} \).
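The quantization step itself is trivial; the point is that state transitions only observe the coarse value (a sketch, with `q` the quantum):

```python
def quantized_load(load, q):
    """Coarse-grained load value: changes only after q more tuples arrive."""
    return load // q

# With q = 10, loads 0..9 map to 0, 10..19 map to 1, and so on, so a
# combiner contributes a state transition only once every q tuples.
```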
**Key idea 3: Temporal invariability of fragmentation.** The new state to which we transition depends not only on the chosen action but also on the next tuple of the stream. This makes it infeasible to visit all possible states – the truly eligible states for a transition are conditioned on the order in which the keys appear. Let us assume that we are in state \( (k_j, X_w^{(t)}, L_w^{(t)}) \) and all combiners have loads corresponding to a new quantum. If the partitioner does not decide on a further split for the key of the incoming tuple (something it tries to avoid), then, for the given window \( w \), \( X_w \) will not change either. Hence, given these assumptions, the agent’s actions contribute to the selection of the next state only every \( q \leq T \leq n(q - 1) + 1 \) tuples. In between, the next state is solely determined by the keys in the stream. If we increase \( q \) a lot in order to reduce the state space, then \( T \) increases as well, and the RL agent degenerates to a contextual bandit [8, 26].
**Algorithm 1: Dalton**
```plaintext
local : n: number of combiners
input: Incoming tuple $e : (t, k, v)$
1 UpdateFrequency($k$);
2 $f_k \leftarrow EstimateFrequency(k)$;
3 if $f_k \geq \frac{t}{n}$ or (k in Q and not expired) then
4 assign $e$ to $c^* = argmax_i (Q(k, i))$;
5 compute reward $R(k, c^*)$;
6 $Q(k, c^*) = Q(k, c^*) + \gamma [R(k, c^*) - Q(k, c^*)]$;
7 else
8 assign $e$ to $c^* = hash(k)$;
9 UpdateWorkerLoad($c^*$);
```
### 3.2 Reducing State using Contextual Bandits
Contextual bandits can be used to learn a different policy per key – allowing for a key to be split according to its frequency – with considerably reduced memory requirements. Furthermore, by discounting past rewards, contextual bandits can be robust to distribution shifts and quickly adapt their policy in an online manner.
A contextual bandit maintains a \( Q \)-table per key and aims to learn an estimate of the value \( Q(k, a) \) for all the possible assignments of key \( k \) to a combiner \( a \). When presented with a new tuple, the partitioner selects the action (combiner) that maximizes the expected reward. A natural way to estimate \( Q(k, a) \) is by averaging the rewards that have been received when combiner \( a \) was selected for key \( k \). Nevertheless, since streams are unpredictable and the underlying data distribution may change, the reward distribution is non-stationary, and we may want to rely more heavily on recent rewards than on long-past ones. Let us denote with \( Q_t(k, a) \) the estimated average reward for action \( a \) after observing the first \( t - 1 \) rewards. Then, given the \( t \)-th reward \( R_t(k, a) \) for that action, we update the learned value with the following rule:
$$Q_{t+1}(k, a) = Q_t(k, a) + \gamma [R_t(k, a) - Q_t(k, a)]$$
where $\gamma$ is a constant step-size parameter that takes values in the range $[0, 1]$. Each key corresponds to a row in the $Q$-table, and based on Key idea 1, there can be at most $n$ (“hot”) keys. Each row in the $Q$-table has $n$ entries – one for each possible action – which makes the total memory complexity of the algorithm $O(n^2)$.
**Initial Values.** We set the initial values to the minimum possible reward, i.e., $-2$ (Equations 3 and 4). This provides two nice properties that prevent excessive key splitting. First, after the initial assignment of a key $k$ to a combiner $c_i$, subsequent records with key $k$ have an affinity for the same worker; splitting happens only through exploration. Second, even when exploration splits a key and assigns it to a combiner $c_j$, due to the low initial value estimates, the partitioner will be discouraged from sending more records to $c_j$ unless the reward is substantially higher. Without substantially higher rewards, when the tuple that the exploration assigned to $c_j$ expires, the fragmentation of $k$ will be decreased.
**Exploration.** The agent uses an $\epsilon$-greedy policy: with a probability $1 - \epsilon$ it greedily chooses the action with the highest $Q(k, a)$ value, and with a probability $\epsilon$ it explores new assignments by randomly choosing among all actions. This policy allows the partitioner to explore new assignments by splitting or even migrating a key. The probability $\epsilon$ should be low so that most of the time, the agent makes decisions that have been already proven to be beneficial. Our evaluation indicates that a good value is $\epsilon = 0.1$.
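A compact Python sketch of the bandit described above – pessimistic initialization at −2, ε-greedy selection, constant-step update – with the reward computation abstracted away (`reward_fn` is a hypothetical callback implementing Equations 3 and 4):

```python
import random

class BanditPartitioner:
    """Per-key contextual bandit over n combiners (a sketch of Algorithm 1)."""

    def __init__(self, n, gamma=0.1, eps=0.1):
        self.n, self.gamma, self.eps = n, gamma, eps
        self.q = {}  # key -> list of n estimated action values

    def choose(self, key):
        # Pessimistic initial values (-2) discourage excessive key splitting.
        values = self.q.setdefault(key, [-2.0] * self.n)
        if random.random() < self.eps:                      # explore
            return random.randrange(self.n)
        return max(range(self.n), key=values.__getitem__)   # exploit

    def update(self, key, combiner, reward):
        # Constant step size weighs recent rewards more heavily,
        # which suits non-stationary streams.
        q = self.q[key]
        q[combiner] += self.gamma * (reward - q[combiner])

# Usage: c = bandit.choose(k); r = reward_fn(k, c); bandit.update(k, c, r)
```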
**Heavy hitters.** As already mentioned, we employ the contextual bandit agent for partitioning the most frequent keys and hashing for the rest. The intuition behind our definition for heavy hitters is that (i) the bandit should be used only for keys for which splitting can be beneficial and (ii) splitting is beneficial when a key causes imbalance even when it is the only heavy hitter assigned to a specific combiner. This idea leads to the following definition:
**Definition 3.3 (Heavy Hitters).** A heavy hitter is a key $k$ whose frequency $f(k, w)$ within the window $w$ satisfies $f(k, w) \geq \frac{\mathcal{L}}{n}$, where $\mathcal{L}$ is the total load of the current window.
**Theorem 3.4.** There can be at most $n$ heavy hitters in a window, where $n$ is the number of combiners.
**Proof.** Let us consider that there are $x$ heavy hitters $\{k_1, \ldots, k_x\}$. Then, according to Definition 3.3: $\sum_{i=1}^{x} f(k_i, w) \geq \frac{\mathcal{L}}{n} x$. But $\mathcal{L} = \sum_{i=1}^{x} f(k_i, w) + Y$, where $Y$ is the total frequency of the non-heavy hitters. Thus, $x \leq n \cdot \frac{\sum_{i=1}^{x} f(k_i, w)}{\sum_{i=1}^{x} f(k_i, w) + Y} \leq n$.
The problem with this definition is that the total load $\mathcal{L}$ of the current window is not known before the window completes. We solve this problem by using statistics from both the previous and the current window. Details follow in the next subsection.
We call our partitioning operator *Dalton* and present the pseudocode of the bandit algorithm it employs in Algorithm 1. Figure 3 presents an overview of Dalton's workflow. By maximizing the reward function, Dalton learns a policy that minimizes the imbalance and the aggregation cost and quickly adapts to distribution changes. This covers requirements (1) and (3) of Problem 1. Next, we show the necessary enhancements to meet objectives (2) and (4).
### 3.3 Managing Windows
In the above discussion, we show that the computation of $X_w$, $L_w$, $R(c_i, w)$ and $f(k_j, w)$ depends on a window. This window is not necessarily the same for all four quantities. Here, we analyze the requirements for each of them and present the system design we use in order to achieve low latency in windowing operations and meet the second objective of Problem 1.
Let us assume an application with a sliding window of size $W$ and slide $s$. In this case and for a starting point $t_0$, every $s$ "time" steps$^2$ $(t_0, t_0 + s, \ldots)$, each combiner emits a partial aggregate for the last window $((t_0 - W, t_0], (t_0 + s - W, t_0 + s], \ldots)$. As partitioning decisions must reflect the actual processing cost in both combiners and reducers, the estimated cost of an action must account for the load and fragmentation in $((t_0 - W, t_0], (t_0 + s - W, t_0 + s], \ldots)$. Therefore, the window we use for three out of the four quantities, $X_w$, $L_w$, $R(c_i, w)$, is $W$ and it is updated every $s$ steps.
**Reward Computation.** To compute the reward $R$ in a sliding-window fashion, we need sliding-window data structures for $X_w$ and $L_w$. We opt for a design that has minimal update time and avoids costly memory allocations in the critical path. At an abstract level, both $X_w$ and $L_w$ follow a similar design: each has a dedicated memory pool of size $\left\lceil \frac{W}{s} \right\rceil$ that contains one pre-allocated block per slide organized in a circular linked list, and an extra structure that holds aggregated information. Using this design, each incoming tuple requires $O(1)$ update time by solely updating the head of the list. Slide expiration also requires $O(1)$ update time to touch the tail of the list and the aggregate structure.
More specifically, for the fragmentation vector $X_w$ at each slide, we get a block from the corresponding memory pool, and we maintain a map from the keys that appear in this slide to a bit-vector that indicates to which combiners the specific key has been assigned ($k \rightarrow \{0, 1\}^n$). Each time a tuple with key $k_i$ gets assigned to a combiner $c_j$, we retrieve the map that exists in the head of the list/pool, get the bit-vector of $k_i$, and set the $c_j$-th bit to 1.
Assuming $\left\lceil \frac{W}{s} \right\rceil$ maps in the pool, $M_1, M_2, \ldots, M_{\left\lceil W/s \right\rceil}$, where $M_1$ is the head, the aggregate data structure $X^A$ maintains, in an incremental way, the union over all past slides, i.e., $M_2 \cup \cdots \cup M_{\left\lceil W/s \right\rceil}$, and a reference counter per key per combiner that shows in how many of the past slides within the window the key has been assigned to the combiner. Then, each time a slide expires, we do the following:
1. Remove the tail of the list that corresponds to the expired slide and expire the corresponding keys. This consists of reducing the reference counter in $X^A$ that corresponds to each expired assignment and, if the counter becomes 0, clearing the corresponding bit in the bit-vector of $X^A$.
2. Merge the current head into $X^A$, by computing $X^A \cup M_1$ and increasing the corresponding reference counters.
3. Use the expired memory block as the new head.
Assuming $K_{\text{HEAD}}$ and $K_{\text{TAIL}}$ denote the key cardinality in the head and tail of the linked list, maintaining the $X^A$ structure incurs a cost of $O(K_{\text{HEAD}} + K_{\text{TAIL}})$ each time a slide expires, but allows the computation of Equation 4 in $O(1)$ time by simply computing the OR function between two bit-vectors: one retrieved from the head ($M_1(k)$) and one from $X^A(k)$ (Figure 4).
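The sketch below illustrates this design under simplifying assumptions: keys are `long`s, $n \leq 64$ so each bit-vector fits in a `long`, and all names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the sliding-window fragmentation structure X_w:
// one bit-vector per key per slide, plus the aggregate X^A holding a
// bit-vector and per-combiner reference counters over non-head slides.
public class FragmentationWindow {
    private final int n;                                     // combiners
    private final List<Map<Long, Long>> pool = new ArrayList<>();
    private int head = 0;                                    // current slide
    private final Map<Long, int[]> refCounts = new HashMap<>();     // X^A
    private final Map<Long, Long> aggregateBits = new HashMap<>();  // X^A

    public FragmentationWindow(int n, int slidesPerWindow) {
        this.n = n;
        for (int i = 0; i < slidesPerWindow; i++) pool.add(new HashMap<>());
    }

    // O(1) per tuple: mark that key was assigned to combiner c in this slide.
    public void record(long key, int c) {
        pool.get(head).merge(key, 1L << c, (a, b) -> a | b);
    }

    // O(1): X(k) = bits of the head slide OR-ed with the aggregate X^A(k).
    public long fragmentationBits(long key) {
        return pool.get(head).getOrDefault(key, 0L)
             | aggregateBits.getOrDefault(key, 0L);
    }

    // Once per slide: expire the tail, fold the old head into X^A, and
    // reuse the expired block as the new head. Cleanup of empty entries
    // is omitted; assumes at least two slides per window.
    public void advanceSlide() {
        int tail = (head + 1) % pool.size();
        for (Map.Entry<Long, Long> e : pool.get(tail).entrySet()) { // expire
            int[] cnt = refCounts.get(e.getKey());
            long bits = aggregateBits.getOrDefault(e.getKey(), 0L);
            for (int c = 0; c < n; c++) {
                if ((e.getValue() & (1L << c)) != 0 && --cnt[c] == 0) {
                    bits &= ~(1L << c);     // last slide with this pair
                }
            }
            aggregateBits.put(e.getKey(), bits);
        }
        for (Map.Entry<Long, Long> e : pool.get(head).entrySet()) { // merge
            int[] cnt = refCounts.computeIfAbsent(e.getKey(), k -> new int[n]);
            for (int c = 0; c < n; c++) {
                if ((e.getValue() & (1L << c)) != 0) cnt[c]++;
            }
            aggregateBits.merge(e.getKey(), e.getValue(), (a, b) -> a | b);
        }
        pool.get(tail).clear();
        head = tail;
    }
}
```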
In a similar spirit, we compute the load of each combiner $L(c_i, W)$. Concretely, for every slide, we keep a counter that stores the number of tuples assigned to this combiner. Additionally, we maintain a sliding-window sum, corresponding to the total load of the combiner over all past slides, using the Subtract-on-Evict algorithm [37].
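A minimal sketch of the per-combiner Subtract-on-Evict counter; one instance would be kept per combiner, and the names are illustrative.

```java
// Sliding-window sum via Subtract-on-Evict [37]: one counter per slide
// plus a running total, both updated in O(1).
public class SlidingLoad {
    private final long[] perSlide;   // tuples per slide, circular buffer
    private int head = 0;
    private long windowSum = 0;      // total load of the window

    public SlidingLoad(int slidesPerWindow) {
        this.perSlide = new long[slidesPerWindow];
    }

    public void onTuple() {          // O(1) per incoming tuple
        perSlide[head]++;
        windowSum++;
    }

    public void advanceSlide() {     // O(1) per slide: evict the oldest
        head = (head + 1) % perSlide.length;
        windowSum -= perSlide[head];
        perSlide[head] = 0;
    }

    public long load() { return windowSum; }
}
```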
**Statistics Computation.** Although we need the window specification of the application $(W, s)$ for computing the rewards, this is not true for the key frequencies $f(k, w)$. This window does not interfere with the application's semantics and is just used to identify the heavy hitters as time passes and the distribution shifts. For the windowing in statistics updates, we use a tumbling window whose size is defined by the `STATS_WIN` system parameter; this is a tuning knob that affects the partitioner's latency. Then, heavy hitters are computed using the formula of Definition 3.3. We estimate the load $\mathcal{L}$ of the current window by setting it equal to the load observed during the previous `STATS_WIN` window. Once a key is considered a heavy hitter, it is assigned using the bandit policy for the current and the next `STATS_WIN` window. At the end of the next window, if the key has not exceeded the frequency threshold again, it is expired and its entry is deleted from the Q-table. This allows the system to use past observations and continuously learn the assignment policy of keys that remain hot for more than one `STATS_WIN` window, instead of resetting the Q-table at the end of every window.
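A sketch of this estimation follows; the class is illustrative, and the key point is rotating the load estimate at window boundaries.

```java
// Heavy-hitter test from Definition 3.3, with the load L of the current
// window estimated by the load of the previous STATS_WIN window.
public class HeavyHitterTracker {
    private final int n;            // number of combiners
    private long previousLoad = 0;  // L of the last completed STATS_WIN
    private long currentLoad = 0;

    public HeavyHitterTracker(int n) { this.n = n; }

    // f(k, w) >= L / n, rewritten without division.
    public boolean isHeavyHitter(long keyFrequency) {
        return previousLoad > 0 && keyFrequency * n >= previousLoad;
    }

    public void onTuple() { currentLoad++; }

    public void onWindowEnd() {     // rotate the load estimate
        previousLoad = currentLoad;
        currentLoad = 0;
    }
}
```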
Intuitively, when `STATS_WIN` is too small, we "zoom in" too much on the distribution and miss heavy hitters. Then, for the missed heavy hitters, hashing is used instead of the bandit policy, and thus, we allow combiners to become stragglers and stall execution. At the other extreme, if `STATS_WIN` is too large, we approximate the distribution better, but in case of distribution shifts, we force unnecessarily many tuples to go through the bandit and incur extra performance overheads. For example, consider that the distribution changes every $T_1$ sec and $\text{STATS\_WIN} = 2T_1$. Let us also assume that initially we had a set of heavy hitters $K_1$, and when the distribution changed at $T_1$, the new heavy hitters became $K_2$, with $K_1 \cap K_2 = \emptyset$. Now, $|K_1| + |K_2|$ keys go through the bandit, instead of just $|K_2|$.

---

$^2$ "time" just expresses an ordering and refers to both count- and time-based windows.
### 3.4 Dalton for Micro-batches
To increase throughput at the cost of latency, micro-batch systems accumulate incoming tuples and process them in batches. Each operator pulls a batch from its input queue, processes it individually, and enqueues it to the output. Thus, the partitioner is expected to first see all the tuples of a batch, split them into subsets called *data blocks*, and emit each data block to a combiner. Observing all tuples of a batch before taking decisions can lead to more accurate statistics and thus assist the whole partitioning process.
Typically, to perform windowed computations, a partial aggregate is first calculated for every data block, followed by a final aggregation that combines intermediate results (as in Section 2). However, combiners do not reduce data at the window level but at the micro-batch level. Hence, the basic implementation difference is that we must modify the fragmentation vectors $X_w$ to work over micro-batches instead of windows.
### 4 MULTI-AGENT PARTITIONING
In real scenarios, there are multiple parallel input sources, each of which can follow a different data distribution and inject data into the system at a high rate. Passing all input streams through a single partitioner will shift the bottleneck to the partitioner itself (Figure 5a). In this section, we show how we can use Dalton in a distributed environment to coordinate multiple individual partitioners.

**Figure 5:** (a) Dataflow topology with a single partitioner (b) Dataflow topology in the case of multi-agent partitioning. A Q-table server is used to aggregate individual Q-tables.
### 4.1 Learning Distributed Data Streams
The high-level idea of the algorithm is that, periodically, we compute and communicate to all the partitioners a global policy that is beneficial not only at the individual level but for the aggregate throughput of the system. The algorithm relies on two properties of the Q-tables: (i) they maintain information about the local heavy hitters, and (ii) according to the observed rewards, they suggest an optimal policy for the local input distribution. By averaging the individual Q-tables, we compute a global structure that incentivizes taking actions that have collected high rewards from the majority of the partitioners. After each synchronization point, each partitioner takes actions based on this global Q-table, and eventually, the policies of the different partitioners converge to a common one.
To realize the proposed algorithm, the system transparently adds a `QtableReducer` (QR) operator, one sync stream, shown with solid green lines in Figure 5b, and one feedback-loop stream, shown with dashed green lines. The two added streams serve as communication channels between the QtableReducer and the individual partitioners. Every DSYNC time steps, each of the individual partitioners sends a SYNC message to the QtableReducer. This message contains: (i) the local Q-table, (ii) the total number of records processed since the last SYNC message, and (iii) a vector with the top-$n$ most frequent keys. Once the reducer has processed the SYNC messages from all the partitioners, it broadcasts back to the partitioners, via the feedback-loop channel, the global Q-table, extended with an expiration timestamp for each key, and the aggregate load $G_L$.
**Algorithm 2: Cooperative Dalton**

```
local : $n$: number of combiners
input : (i) incoming tuple $e : (t, k, v)$ or, (ii) message from QtableReducer $(\overline{Q}, G_L)$
 1 if input is $e : (t, k, v)$ then
 2     UpdateFrequency($k$);
 3     $f_k \leftarrow$ EstimateFrequency($k$);
 4     if $f_k \geq \frac{L}{n}$ or ($k$ in $Q$ and not expired) then
 5         assign $e$ to $c^* = \arg\max_i Q(k, i)$;
 6         compute reward $R(k, c^*)$;
 7         if state = PREPARE then
 8             $Q(k, c^*) = Q(k, c^*) + \gamma [R(k, c^*) - Q(k, c^*)]$;
 9         else buffer $R(k, c^*)$;
10     else
11         assign $e$ to $c^* = hash(k)$;
12     UpdateWorkerLoad($c^*$);
13     if time from last sync = DSYNC then
14         SendSyncMsg($Q$, $L$, GetTopKeys());
15         state = AWAIT;
16 else if input is $(\overline{Q}, G_L)$ then
17     $Q = \overline{Q}$; $L = G_L$;
18     AggregateBufferedRewards();
19     state = PREPARE;
```
Algorithm 2 presents the pseudocode of a Dalton operator running in a distributed setup with many partitioners. Each of the $P$ partitioners can be in one of two distinct states: PREPARE and AWAIT. While in the PREPARE state, a partitioner is individually learning by taking actions and updating its local Q-table as described in Section 3.2. As soon as it emits the SYNC message, the partitioner enters the AWAIT state, in which it remains until it receives the global Q-table. While in the AWAIT state, partitioners continue to receive tuples, run the bandit algorithm locally, and assign rewards to partitioning decisions. However, instead of updating the local Q-table, the rewards received during the AWAIT phase are just buffered so that they can be merged with the global Q-table once it is received.
When the global Q-table $\overline{Q}$ and the aggregate load $G_L$ are received, we merge the buffered rewards using Equation 5, update the local load estimate to $G_L$, and transition back to the PREPARE state.
Once the QtableReducer has received the synchronization messages from all $P$ partitioners, it calculates the heavy hitters that correspond to the global distribution, as well as the global Q-table. For the heavy hitters, the reducer computes the aggregate load $G_L = \sum_{j=1}^{P} L_j$ that was processed during the PREPARE phase and considers the keys that have a frequency greater than $\frac{G_L}{n}$. Since the reducer receives the $n$ most frequent keys of each partitioner and the number of heavy hitters cannot exceed $n$ (Theorem 3.4), no heavy hitters are missed.
For the global Q-table, the QtableReducer calculates a weighted average over the local Q-tables. As keys are not equally frequent in the distributions of all input streams, the weights reflect the normalized frequencies as received by each partitioner. Therefore, the update formula for a key $k$ is:
$$\overline{Q}(k, c_i) = \frac{\sum_{j=1}^{P} f_j(k)\, Q_j(k, c_i)}{P}, \quad \forall i \in [1, n]$$
where $\overline{Q}(k, c_i)$ is the global/averaged Q-value for key $k$ and combiner $c_i$, $Q_j$ is the local Q-table of partitioner $j$, and $f_j(k)$ is the normalized frequency of $k$ observed by partitioner $j$. Using the frequencies as weights, the contribution of each partitioner to the value of the global Q-table for a key $k$ is proportional to the number of rewards it has received for it.
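A sketch of this merge for a single key, following the formula above; the array layout and names are assumptions, not the actual data structures.

```java
// Frequency-weighted average of the local Q-table rows for one key.
public final class QtableMerge {
    // localRows[j] = partitioner j's Q-row for key k (one entry per combiner);
    // freqs[j]     = the normalized frequency partitioner j observed for k.
    public static double[] mergeKey(double[][] localRows, double[] freqs) {
        int P = localRows.length;       // number of partitioners
        int n = localRows[0].length;    // number of combiners
        double[] global = new double[n];
        for (int i = 0; i < n; i++) {
            double acc = 0.0;
            for (int j = 0; j < P; j++) {
                acc += freqs[j] * localRows[j][i]; // weight by frequency
            }
            global[i] = acc / P;
        }
        return global;
    }
}
```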
Our proposed synchronization mechanism achieves three important properties. First, it does not block execution: partitioners continue to assign tuples while in the AWAIT state. Second, by buffering rewards received in the AWAIT state, all the learned rewards are communicated to the reducer, and thus, we fully exploit acquired experience. Third, we do not allow keys that are frequent only in a local distribution, but not in the global one, to be split and increase the aggregation cost. A partitioner considers a key a heavy hitter only if it exceeds the frequency threshold for the global load $G_L$ or if the key is included in the global Q-table.
In the multi-agent case, we map the synchronization interval DSYNC to the `STATS_WIN` window used to maintain key frequency statistics, ensuring that no heavy hitters are missed by the QtableReducer. The value of DSYNC and, hence, the frequency at which the SYNC events are emitted, affects the efficiency of the distributed mechanism. On the one hand, a short sync period favors learning but adds synchronization and communication overheads. On the other hand, rare syncs permit individual learners to deviate from the common policy. To hit a sweet spot, we propose an adaptive communication protocol that changes DSYNC and, hence, `STATS_WIN`, at runtime.
If DSYNC time steps have passed since the last SYNC message was sent and a partitioner is still in the AWAIT state, the reducer cannot keep up with the synchronization rate, and by the time the updated global state is received, it is already stale. In that case, by setting a field in the next SYNC message, the partitioner requests to double the DSYNC interval. When the reducer receives the SYNC messages, it first checks whether any node has requested to double DSYNC; if so, it fulfills the request and broadcasts the new value along with the next global state. At the same time, the reducer monitors the amount of time it is idle, and if this is longer than the time for processing the Q-tables, it decreases DSYNC.
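A sketch of the two sides of this adaptive protocol; halving on idleness is one plausible choice for "decreases DSYNC", and all names are illustrative.

```java
// Adaptive DSYNC: partitioners request doubling when the global state
// arrives late; the reducer shrinks the interval when it is mostly idle.
public class AdaptiveSync {
    private long dsync;                   // current sync interval

    public AdaptiveSync(long initialDsync) { this.dsync = initialDsync; }

    // Partitioner side: set the "double" flag in the next SYNC message if
    // we are still AWAITing when the next sync would already be due.
    public boolean shouldRequestDoubling(boolean stillAwaiting,
                                         long timeSinceLastSync) {
        return stillAwaiting && timeSinceLastSync >= dsync;
    }

    // Reducer side: honor doubling requests; otherwise shrink the interval
    // if idle time exceeds the time spent aggregating Q-tables.
    public long adjust(boolean anyDoublingRequest,
                       long idleTime, long aggregationTime) {
        if (anyDoublingRequest) {
            dsync *= 2;
        } else if (idleTime > aggregationTime) {
            dsync = Math.max(1, dsync / 2);  // assumed halving policy
        }
        return dsync;                 // broadcast with the next global state
    }
}
```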
**Discussion.** We use multi-agent Dalton for optimizing a single query. However, it can be trivially extended to multiple concurrent queries over the same stream. If these queries are executed independently of each other, the only difference is that the QtableReducer should not be an operator of the query but be implemented in the system's coordinator. Nevertheless, when multiple aggregates are processed over the same stream, we can do much more than having a joint optimizer. Common practice suggests a global plan with shared streams and operators [19]. In such a work-sharing system, Dalton can be yet another operator of the global plan, and the implementation remains exactly as described in Section 4.1.
### 4.2 Optimizing for Non-Heavy Hitters

In the tuple-at-a-time model, when we have a single partitioner, as in Figure 6a, non-heavy hitters are hashed and directly forwarded to the output, avoiding the extra latency of the final aggregation step. We call this scheme "key-forwarding". However, this cannot trivially happen in the distributed, multi-agent case. As different partitioners may observe different distributions, they cannot safely instruct a combiner to forward a key directly to the output. In Figure 6b, key $k_1$ (shown in red) is "hot" according to the global distribution. Before synchronization happens, partitioner $p_1$ hashes $k_1$ and marks it as a non-heavy hitter. Nevertheless, as $p_2$ has already identified $k_1$ as "hot" and split it, we should not forward it to the output but rather to the reducers. For this reason, in the default multi-agent implementation, we disable the "key-forwarding" feature. Non-heavy hitters are still hashed, to avoid overwhelming the bandit learner, but are always aggregated at the reducers for correctness. In such cases, as the burden on the reducers is increased, they should be scaled out appropriately.
For the special case where synchronization occurs at least once per slide, we propose an optimization that enables "key-forwarding" in the multi-agent case. Synchronizing before the window completes allows us to repair wrong forwarding decisions before emitting a result. In the example of Figure 6b, partitioner $p_1$ receives the global Q-table before the end of the slide, and the global Q-table marks $k_1$ as "hot". As this happens before the end of the slide, the window is not completed yet and the combiners have not emitted their output. Thus, $p_1$ instructs the combiner to disable forwarding for $k_1$, and all intermediate results for $k_1$ are aggregated. A question that naturally arises is what happens if synchronization does not occur in time (e.g., if the message from the QtableReducer is delayed). To prevent such issues, partitioners also disable forwarding if they have not received a global Q-table before the window completes.
### 5 EXPERIMENTAL EVALUATION
This section experimentally evaluates the scalability and adaptivity of Dalton when we vary the data distribution and the degree of parallelism. We also perform a sensitivity analysis that shows how the tuning knobs of Dalton affect performance.
**Platform.** We use 5 two-socket Intel Xeon E5-2660 servers at 2.20 GHz, with 8 cores (×2 threads) per socket and 128 GB of DRAM. Tuple-based algorithms are implemented in Flink v1.12 and micro-batch ones in Storm Trident v2.4.0, using Java 11.0.9.1. We dedicate one server to the JobManager/Nimbus and use the remaining four for the TaskManagers/Supervisors of Flink and Storm, respectively. In all the experiments, we use the tuple-based implementation unless otherwise specified.
**Methodology.** Data is pre-loaded in main memory and is continuously consumed in a circular manner. We allow the system to warm up and measure the sustainable input throughput only after the system has stabilized. This input rate achieves maximum utilization while ensuring that there is no backpressure.
**Algorithms.** We compare Dalton against:
1. 1-choice partitioners assign all tuples with the same key to the same worker. From this group, we consider Hashing and Group Affinity with Imbalance Minimization (cAM) [21].
2. N-choice partitioners apply key-splitting following a static policy for all keys. We consider Shuffling, Two-choices [31], and Cardinality Imbalance Minimization (CM) [21].
3. Hybrid partitioners split the most frequent keys and hash the rest. We consider DAGreedy [32] for the tuple-at-a-time processing model and Prompt [24] for the micro-batch model. To isolate the partitioning algorithm from the actual implementation, we implement our optimization for the non-heavy hitters (Section 4.2) for DAGreedy as well.
For Dalton, we set the step-size parameter $\gamma = 0.1$, the STATS_WIN interval equal to one slide and the cost model parameters $p_1 = p_2 = 0.5$ based on our experimental evidence. For the frequency statistics, we experiment with a common hashmap, a count-min sketch [11], and a hybrid policy that dynamically selects one of the two, at runtime, based on the statistics of the previous STATS_WIN interval.
**Table 2:** Information for each dataset.

| Dataset | # of keys | Frequency of top-1 key |
|---|---|---|
| T4SA | ~450k | 2.69% |
| Elections | ~200k | 7.2% |
| Voters | 100k | up to 38.45% |
| Synthetic | 100k-1M | up to 38.45% |
**Data.** To investigate the impact of different distributions, we experiment with real and synthetic data. We consider two Twitter datasets, T4SA [41] and Elections [14], and the Voters dataset, which represents the voter registry of North Carolina. For Twitter, we use the hashtag as the key, and for Voters the post-code. Table 2 shows information for each dataset. For the synthetic data, we investigate uniform and Zipf distributions with various exponents.
**Applications.** The majority of the experiments are based on Word Count, as it represents a typical windowed aggregation example. As partitioning should be more lightweight than the application itself, in Word Count we do not assume tuples directly in a key-value form; parsing and key extraction are part of the application. In addition, to stress our reward model, we use Correlation Clustering, a common data mining task. We use the VOTE [12] algorithm for the combiners and the GREEDY algorithm [16] for the reducers. Thus, this application has quadratic complexity in the combiners and a much heavier final aggregation than typical group-by queries. The quadratic complexity acts as an adversarial example to our linear reward function. Unless mentioned otherwise, we use sliding windows with a size of 60s and a slide of 1s and 20s for Word Count and Correlation Clustering, respectively. We use the Twitter datasets for Word Count and Voters for Correlation Clustering.
### 5.1 Scalability with the Number of Combiners
Figures 7 and 8 show how Word Count scales for different datasets for the tuple-at-a-time and the micro-batch processing model, respectively. For the algorithms that require the final aggregation step, we use 1, 2, 4, or 8 reducers for parallelism of 8, 16, 32, and 64, respectively, and devote the rest of the resources to combiners. The T4SA dataset is close to uniform, Elections is skewed, whereas the synthetic one is configured to present an even higher degree of skewness. We observe that hash-based algorithms scale well for uniform data but do not exploit parallelism for highly skewed workloads; adding more resources does not result in higher throughput. In contrast, algorithms that use key-splitting spread the load across combiners and solve the imbalance problem. Nevertheless, they cannot scale in the uniform case, as they cause over-splitting and pay a high aggregation cost at the reducers. Existing techniques cannot scale in both uniform and skewed distributions. This is a major problem, as the partitioning algorithm is selected before launching a task and, hence, before knowing the data distribution, and, moreover, the distribution changes at runtime. In the micro-batch model, the combiners compute partial aggregates per batch and not per window. Hence, even the hash-based approaches require a final aggregation step, which results in a smaller performance difference between hash-based and key-splitting algorithms.
**Takeaway.** Dalton scales almost linearly regardless of the distribution. In the case of uniform data, it applies minimal splitting and behaves almost like hashing, while in the Zipf case, it discovers a policy that outperforms existing approaches by 1.5× to 6.7× for the tuple-at-a-time and 1.6× to 2.1× for the micro-batch model.
### 5.2 Adaptivity to Distribution Changes
Next, we showcase the ability of each algorithm to adapt to dynamic workloads. We consider two types of distribution changes: i) the distribution alternates between uniform and Zipf, simulating the sporadic occurrence of trending/hot events; ii) random changes between different Zipf distributions with different degrees of skewness and different sets of heavy hitters.
The first case is illustrated in Figures 9a and 9c for the tuple-at-a-time model and the Word Count and Correlation Clustering tasks, and in Figure 9b for the micro-batch model and the Word Count task. When transitioning to a Zipf distribution, performance drops for all algorithms. However, Dalton absorbs the change better and, while the distribution is skewed, it outperforms the other algorithms by 1.3× to 6× and 1.1× to 1.8× for Word Count in the tuple-at-a-time and the micro-batch model, respectively, and by 1.1× to 1.8× for Correlation Clustering in the tuple-at-a-time model. Note that only Dalton and DAGreedy can adapt, while Dalton outperforms DAGreedy for skewed workloads.
For the second case, illustrated in Figure 9d, as distribution changes happen frequently and transitions are restricted to Zipf distributions, the transition points are not clearly visible; an averaging effect is produced. However, by learning the appropriate policy and quickly adapting to the changes, Dalton achieves 1.1× to 1.3× higher throughput.
**Takeaway.** By continuously learning, Dalton adapts its policy following distribution shifts. It achieves the performance of hashing for uniform workloads and outperforms existing techniques for skewed ones in tasks with completely different computation traits.
### 5.3 Overhead of Partitioner
Next, we assess how Dalton's tuning knobs affect the overhead it introduces. Figures 10a and 10b show the cost of maintaining the frequency statistics as a function of the `STATS_WIN` parameter. The cost is the aggregate time required for updating the statistics during the processing of a window with 100M elements. In Figure 10a, where the distribution is uniform, `STATS_WIN` has no impact when the Count-Min sketch is used. On the contrary, when using a hash-map, higher `STATS_WIN` values translate to more keys in the map and, consequently, more cache misses that deteriorate performance. Therefore, there is no clear winner between the exact computation with a hash-map and the approximate Count-Min sketch, which motivates the hybrid policy that selects between them at runtime.
We also compare Dalton variants that differ in how they treat non-heavy hitters: one that passes all keys through the bandit, one that distinguishes heavy from non-heavy hitters but does not forward the latter, and the full version with key-forwarding enabled (Section 4.2).
Figure 10c shows: i) the latency that Dalton introduces as a function of the number of keys considered for splitting, and ii) the corresponding end-to-end application throughput. Increasing the number of heavy hitters up to 4 leads to lower load imbalance and, hence, higher throughput. For more heavy hitters, the partitioning latency is increased, affecting throughput. This justifies our decision to partition only the heavy hitters using the learner and showcases the effectiveness of our defined threshold for heavy hitters. For this experiment, we use a Zipf distribution with $s = 1.0$ to allow for more than 600 distinct keys per slide. For this distribution and according to our Definition 3.3 for heavy hitters, Dalton would consider 4 heavy hitters and, thus, achieve the maximum throughput.
**Takeaway.** Choosing the right data structure for the frequency statistics can reduce the window latency by up to 24 sec, while using a hybrid approach that splits only the most frequent keys, together with key-forwarding, results in a speedup of up to 2.7×. This is of significant importance in a streaming system with low latency requirements and high input rates.
### 5.4 Scaling the Partitioners
We experiment with setups with multiple input sources and partitioners. We use the Word Count task and a window with a size of 60s and a slide of 20s, to allow for high throughput that showcases the benefits of having multiple partitioners.
Figure 13 shows the performance of different algorithms for two setups: i) one source produces a uniform and the other a Zipf distribution, and ii) both sources produce a Zipf distribution, but each with different heavy hitters. We use 2 partitioners and, for all Zipf distributions, $s = 1.5$. Since, in our infrastructure, partitioning is never the bottleneck with 2 Dalton instances, scaling further does not yield any improvement. When at least one distribution is uniform, hash-based algorithms behave better and outperform DAGreedy, while when both are Zipf, the contrary happens. In both cases, by appropriately coordinating the learners, Dalton converges to the best global policy and outperforms existing techniques by 1.4× to 3.4×.
**Sync frequency.** Figure 14a shows the performance of Dalton depending on the synchronization frequency DSYNC of the partitioners. We use two partitioners, each consuming data from a different source. We test two scenarios: i) one source produces data with a uniform and the other with a Zipfian distribution ($s=1.5$), and ii) both produce data with the same distribution. The second scenario is equivalent to producing data from a uniform and a Zipfian distribution in an alternating fashion. When the partitioners never sync, the throughput is low since they optimize only locally; being agnostic to the real load of the combiners, they give inaccurate rewards to the bandit agent. More frequent synchronization improves performance. However, when DSYNC drops below 10s, the synchronization overhead dominates, causing performance degradation.
**Figure 14:** Synchronization frequency (DSYNC) experiments using two partitioners. (a) Impact of DSYNC on throughput for different distributions; (b) Dalton's policy for dynamically updating DSYNC, under varying QtableReducer latency.
Figure 14b shows how the protocol we propose for dynamically adjusting the synchronization interval works. The top of the figure shows the throughput with the adaptive protocol and with fixed intervals of 20s and 10s. The bottom of the figure depicts the changes in the value of DSYNC under the adaptive protocol. The initial value of DSYNC is 20s. Initially, the partitioners observe that they receive fast responses from the QtableReducer and propose more frequent synchronization, until DSYNC converges to 10s. This step happens during the warmup of the system and, hence, is not depicted in the figure. At 5000s, we artificially double the time the QtableReducer needs to aggregate the Q-tables. Hence, the partitioners propose less frequent synchronization, increasing DSYNC to 20s. At 10000s, we make the aggregation time four times higher than the initial one, which brings DSYNC to 40s. At 15000s, we remove all imposed delays, and DSYNC returns to 10s.
Figure 15 shows the load imbalance (Equation 1) of the most imbalanced combiner and the aggregation cost (Equation 2) imposed by the most frequent key. In the case of multiple partitioners, half observe a uniform distribution and half a Zipf-1.5. In the case of a single partitioner, the input is produced by alternating data from a uniform and a Zipf-1.5 distribution. Learning converges to a global policy and a stable cost in all cases. Crucially, while two partitioners show a slightly increased imbalance compared to one, this overhead does not grow with the number of partitioners. Moreover, not having to pay for synchronization, the single partitioner converges faster. However, beyond one partitioner, the rate of convergence is not affected by the number of partitioners; as many partitioners as necessary can be used without significantly affecting learning.
**Takeaway.** Our protocol successfully captures the global distribution and, leveraging cooperative learning, outperforms existing techniques by 1.4× to 3.4×. Moreover, Dalton automatically tunes the synchronization frequency so that the communication with the QtableReducer does not stall execution.
### 6 RELATED WORK
**Partitioning for streams.** A traditional stream partitioning approach is to dynamically re-partition in case of imbalance [35]. However, re-partitioning comes hand-in-hand with the heavyweight task of state migration, which Dalton avoids altogether. A classic key-splitting approach is the Two-choices algorithm [31] and its extension [30]. These algorithms address load imbalance but are agnostic to the aggregation cost. Moreover, they make static decisions and fail to adapt to distribution changes. [21] proposes a set of static heuristics that consider both load imbalance and aggregation cost. However, our evaluation shows that these heuristics do not cover all workloads and do not always adapt to distribution shifts.
To adapt to the data, more recent approaches use dynamic strategies based on the key frequencies. [13, 15] use a routing table for heavy hitters and hashing for the rest of the keys. However, they employ state migration, which we eliminate. DAGreedy [32] also hashes infrequent keys and greedily assigns frequent ones based on a cost model. Prompt [24] is a heuristic-based partitioner for the micro-batch model. Dalton outperforms DAGreedy and Prompt by finding better partitioning assignments and avoiding over-splitting, and can efficiently scale. In pull-based systems, late merging can be used instead of upfront partitioning [44]. We focus on the push-based model, adopted by most current systems [20, 40, 43]. A related topic to partitioning is elasticity and re-configuration [17, 18, 34]. Such techniques can also deal with stragglers and increase the application's throughput. Although many existing approaches study the problem of elastically adjusting the number of workers [2, 6], e.g., combiners and reducers, no existing work focuses on the specific problem of scaling the partitioning operators.
**Partitioning for Map-Reduce.** Partitioning has been widely studied for Map-Reduce-based processing [7, 22, 23, 25]. While conceptually similar, these approaches either require offline preprocessing of the data and, thus, are not suitable for the streaming model, or optimize solely for the map or the reduce phase.
**RL for load balancing.** RL for load balancing and task scheduling is widely used in Cloud Computing [4, 5, 27, 39]. However, these applications do not consider balancing among operators with a windowed state. PartLy [1] uses deep RL for partitioning in the micro-batch setup. However, it assumes prior knowledge of a fixed distribution, which violates stream processing requirements.
### 7 CONCLUSION
The performance of stream processing systems highly depends on the efficiency of partitioning the load among parallel workers. Resource underutilization and key over-splitting introduce overheads that degrade throughput. Moreover, as streams are unpredictable and distributed in nature, to ensure high, effective parallelism, systems should quickly adapt to the distribution at hand and be able to scale not only the processing workers but also the partitioners. This work presents Dalton, an RL-based operator that learns partitioning policies at runtime with minimal overhead, meets these desiderata, and outperforms state-of-the-art approaches by up to 6.7×.
Vectorization-Aware Loop Unrolling with Seed Forwarding
Rodrigo C. O. Rocha
University of Edinburgh, UK
r.rocha@ed.ac.uk
Vasileios Porpodas
Intel Corporation, USA
vasileios.porpodas@intel.com
Pavlos Petoumenos
University of Manchester, UK
pavlos.petoumenos@manchester.ac.uk
Luís F. W. Góes
PUC Minas, Brazil
lfwgoes@pucminas.br
Zheng Wang
University of Leeds, UK
z.wang5@leeds.ac.uk
Murray Cole
University of Edinburgh, UK
mic@inf.ed.ac.uk
Hugh Leather
University of Edinburgh, UK
hleather@inf.ed.ac.uk
Abstract
Loop unrolling is a widely adopted loop transformation, commonly used for enabling subsequent optimizations. Straight-line-code vectorization (SLP) is an optimization that benefits from unrolling. SLP converts isomorphic instruction sequences into vector code. Since unrolling generates repeated isomorphic instruction sequences, it enables SLP to vectorize more code. However, most production compilers apply these optimizations independently and in an uncoordinated manner. Unrolling is commonly tuned to avoid code bloat, not to maximize the potential for vectorization, leading to missed vectorization opportunities.
We are proposing VALU, a novel loop unrolling heuristic that takes vectorization into account when making unrolling decisions. Our heuristic is powered by an analysis that estimates the potential benefit of SLP vectorization for the unrolled version of the loop. Our heuristic then selects the unrolling factor that maximizes the utilization of the vector units. VALU also forwards the vectorizable code to SLP, allowing it to bypass its greedy search for vectorizable seed instructions, exposing more vectorization opportunities.
Our evaluation on a production compiler shows that VALU uncovers many vectorization opportunities that were missed by the default loop unroller and vectorizers. This results in more vectorized code and significant performance speedups for 17 of the kernels of the TSVC benchmark suite, reaching up to a 2× speedup over the already highly optimized -O3. Our evaluation on full benchmarks from FreeBench and MiBench shows that VALU achieves a geo-mean speedup of 1.06×.
## 1 Introduction
Modern high-performance processors include short SIMD vector units to support higher computational throughput. Making effective use of the vector units is critical for extracting maximum performance from these processors.
There are two general classes of vectorizers. Traditional loop-based vectorizers [2, 3] detect instructions that can be vectorized across loop iterations. Superword-Level Parallelism (SLP) vectorizers [23, 43], on the other hand, are not limited by the loop structure. They identify isomorphic groups of instructions that can be vectorized within any straight-line code sequence, whether in a loop body or outside loops altogether.
Loop unrolling is commonly applied before the SLP vectorization pass. Unrolling the loop body generates straight-line code with repeating computational and memory access patterns. This makes finding vectorizable instructions much more likely. The motivation for this work comes from the realization that, in state-of-the-art compilers, unrolling and SLP vectorization are completely independent and uncoordinated. Unrolling is guided by its own heuristic, mainly considering how unrolling affects code size. As a result, this heuristic makes good unrolling decisions with regards to vectorization only incidentally.
In this work, we propose Vectorization-Aware Loop Unrolling (VALU), a novel unrolling approach that offers a
strong coupling with SLP vectorization. Our approach is two-fold. First, VALU uses a novel analysis, named Potential SLP, that performs vectorization and profitability analyses that would be performed by SLP as if the loop had been unrolled (without unrolling it yet). If vectorization is deemed profitable, the loop is then actually unrolled by a factor that maximizes utilization of the vector units on the target architecture. Second, VALU has a seed forwarding mechanism that keeps track of unrolled copies of vectorizable seed instructions identified in the original context and forwards them directly to the SLP vectorizer. VALU knows by definition that unrolled instructions are isomorphic, while the SLP vectorizer needs to discover which group of instructions in the unrolled loop will lead to isomorphic use-def graphs, without an expensive search. By forwarding this information, we can bypass SLP’s greedy seed collection, improving vectorization.
Our approach uncovers many vectorization opportunities that were completely missed by LLVM's loop unroller. Unlike traditional unrolling, VALU only unrolls loops when enough code will be vectorized away. Therefore, it can afford to make aggressive unrolling decisions when that is estimated to pay off. When evaluated on the TSVC [6] benchmark suite, VALU improves SLP vectorization by up to 6× and by 30% on average, enabling SLP to outperform the loop vectorizer for 26 kernels of the TSVC suite. VALU also improves performance by up to 2×, with a geometric mean of 5%, compared to the highest optimization setting (-O3). We have also evaluated VALU on two full benchmark suites, FreeBench and MiBench, where it achieves a geo-mean speedup of 6%.
To summarize, our main contribution is providing a strong coupling between loop unrolling and the SLP vectorizer, with a two-way communication channel between the two passes.
- We enable much better vectorization by analyzing instructions in the rolled context.
- We choose better unroll factors by knowing how vectorization will be applied.
- We find better vectorization seeds before loop unrolling and forward them directly to the SLP vectorizer.
## 2 Background
### 2.1 Loop Unrolling
Loop unrolling creates multiple copies of the loop body, in order to perform multiple iterations at once, adjusting the loop control accordingly to preserve its original semantics. The number of copies is called the **unrolling factor** [9, 16, 30]. The immediate benefit comes from reducing the loop control overhead. By converting loops into straight-line code, loop unrolling also enables or improves subsequent optimizations.
Excessive unrolling may impair performance, mainly due to increased register pressure and instruction cache misses [10, 46]. For this reason, most unrolling heuristics will not unroll a loop above a certain factor, if the estimated size of the unrolled loop body exceeds an empirically set threshold.
### 2.2 SLP Vectorization
Superword-Level Parallelism (SLP) is a straight-line-code vectorizer that was first introduced by Larsen and Amarasinghe [23]. SLP tries to find isomorphic instruction sequences and vectorize them if profitable. Some variants of this algorithm have been implemented in production compilers, with Bottom-Up SLP [43] being widely adopted due to its low runtime overhead and its good coverage.
Figure 1 shows a diagram of the bottom-up SLP algorithm [43]. It first identifies instructions, called **seed instructions**, that are likely to form vectorizable sequences, such as **store** instructions or reduction trees (step 1). Starting from a group of seeds (step 3), the algorithm follows their use-def chains towards their operands to grow the SLP graph (step 4). Once this process encounters instructions that cannot form a vectorizable group (e.g., due to non-matching opcodes), it forms a non-vectorizable group and stops further exploring this path. Non-vectorizable groups indicate that scalar-to-vector data movement is required.
Next, the algorithm estimates the profitability of vectorizing the instructions in the SLP graph (step 5). The total profit is that of converting groups of scalar instructions into vectors, minus the overhead of gathering the inputs of the vector instructions. If profitable, SLP replaces each group of scalar instructions in the graph with their equivalent vector version (step 6). Otherwise, the code remains unmodified. The process then continues with the next seed group until all seeds have been explored (step 2).
## 3 Motivating Example
In this section, we present an example to demonstrate that existing unrolling heuristics are ineffective in exposing vectorization opportunities for SLP. Instead, an ideal loop unroller would be able to identify exactly which loops are profitable to be vectorized by SLP after unrolling and what is the unrolling factor that uncovers enough code to maximize the utilization of the target vector units.
Figure 2a shows a loop with a small loop body, just two statements long. Loop unrolling uses its heuristics to determine the unrolling factor, comparing the expected code size increase against a threshold. In this particular example, LLVM unrolls it only by a factor of two, because the cost of unrolling it further exceeds the threshold. Although the unrolled loop can still be vectorized, a factor of two underutilizes the vector units of the target architecture (Figure 2b).
```c
float  Af[N], Bf[N], Cf[N], Df[N], Ef[N];
double Ad[N], Bd[N], Cd[N], Dd[N], Ed[N];
for (int k = 0; k < N; k++) {
    Af[k] = Bf[k]+Cf[k] + Df[k]+Ef[k];
    Ad[k] = Bd[k]*Cd[k] + Dd[k]*Ed[k];
}
```
(a) Source code of a loop that is unrolled twice by the default loop unroller.
(b) After unrolling the loop by a factor of 2, the SLP vectorizer will generate this sub-optimal vectorized code, underutilizing the vector units available in the target architecture.
For slightly bigger loops or slightly lower unroll thresholds, the default loop unroller may completely bail out on unrolling and prevent SLP from vectorizing the loop altogether.
Simply raising the unroll threshold to improve vectorization is not a reasonable strategy. Loops can be vectorizable regardless of their size, so some vectorization opportunities would be missed under any fixed unrolling threshold. At the same time, high thresholds would unroll scalar loops by very large factors, hurting performance. SLP vectorization cannot rely on the default loop unroller, because its heuristics may decide not to unroll loops that are profitable for vectorization.
The end result shown in Figure 2c also differs from that produced by LLVM’s loop vectorizer. The loop vectorizer selects a single vector length, based on the largest data type, for the whole loop body, so that all instructions in the loop can be vectorized with the same vector length. The ideal loop unroller should choose the best unrolling factor to maximize performance. Usually, the version with mixed vector lengths tends to be faster as it better utilizes the vector units [40].
## 4 Vectorization-Aware Loop Unrolling
In this section, we describe our vectorization-aware loop unrolling (VALU). The core idea is to perform an analysis on the original loop that looks for code that could be vectorized by SLP once the loop gets unrolled. After unrolling, VALU forwards to SLP the instructions that are profitable for vectorization, bypassing SLP’s greedy seed collection.
### 4.1 Potential SLP Graph
In order to identify if loop unrolling would be beneficial for vectorization, VALU performs an analysis inspired by the SLP algorithm. Traditional SLP analysis builds an SLP graph that represents the combined use-def graphs of the groups of scalar instructions that are considered for vectorization. VALU uses a different data-structure, called Potential SLP graph. This is built from one use-def graph of the scalar instructions in the rolled loop. However, Potential SLP graph reproduces the state of an equivalent SLP graph that would be built if the loop was unrolled by a specific unrolling factor. For example, Figure 3c shows the Potential SLP graph obtained by VALU when applied to the loop from Figure 3a, which contains the use-def graph shown in Figure 3b. The Potential SLP graph is able to estimate the same profitability cost as the one computed by the SLP graph in Figure 3e, which is built from the already unrolled loop shown in Figure 3d. This is a key aspect of how VALU is able to precisely unroll loops that are profitable for SLP vectorization.
Figure 4 shows a diagram of the algorithm for the VALU heuristic. VALU starts by scanning the loop body and collecting seed instructions. At the moment, we only consider store instructions and reduction trees, but other instructions can also be used as seeds. Contrary to SLP that only collects vectorizable store instructions, VALU collects all store instructions, as detailed in Section 4.7. After collecting these seed instructions, we calculate the best vectorization factor (VF) based on the data type of the seed instructions. This factor is required for building the Potential SLP graph. VALU selects a vectorization factor that maximizes the utilization of vector units in the target architecture. This can be computed based on the bit-size of the instruction’s data type.
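As an illustration of this step, the vectorization factor can be derived from the vector register width and the element bit-width. The sketch below is in Java for illustration only (the actual implementation lives inside LLVM), and the register width is an assumed parameter rather than a queried target property.

```java
// Choose the VF that fills the target's vector registers for a given
// element bit-width, rounded down to a power of two.
public final class VectorizationFactor {
    public static int compute(int vectorRegisterBits, int elementBits) {
        int vf = vectorRegisterBits / elementBits;
        return Integer.highestOneBit(Math.max(vf, 1));
    }

    public static void main(String[] args) {
        // e.g. 256-bit registers: 8 floats or 4 doubles per vector.
        System.out.println(compute(256, 32)); // 8
        System.out.println(compute(256, 64)); // 4
    }
}
```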
Starting from a group of seeds, VALU follows their use-def chains to grow the Potential SLP graph. Whenever it encounters an instruction that would not be vectorizable by the SLP pass after unrolling, it forms a non-vectorizable node. In Figure 3c, the green nodes represent all vectorizable nodes, while the red node for $C[D[i]]$ is an example of a non-vectorizable node, due to its indirect memory addressing. This process repeats until we have reached non-vectorizable nodes or load instructions. This completes the Potential SLP graph.
While finding isomorphic code is an expensive task for SLP, VALU does not suffer from this same problem since the unrolled copies of the loop will inevitably contain isomorphic code. For this reason, most nodes in the Potential SLP graph are trivially vectorizable, such as those formed by arithmetic, logical, or casting instructions. Memory operations and function calls, on the other hand, require some special treatment. In particular, VALU needs to analyze if the memory instructions can be widened, i.e., whether or not their unrolled copies will form groups with vectorizable access patterns. Section 4.3 describes this analysis in more detail.
Since each Potential SLP graph has its own vectorization factor, we may end up with many profitable Potential SLP graphs in the same loop, each with a different vectorization factor. This introduces a conflict, as the vectorization factor corresponds to the desired unrolling factor of the enclosing loop. We need a way for choosing a single unrolling factor from multiple vectorization factors. The solution is simple: we select the least common multiple among the vectorization factors, since this is the only way to ensure that all of them will get vectorized in the future by SLP, while also fully utilizing the vector units of the target architecture. Because all vectorization factors are powers of two, this means that, in practice, we can simply select the maximum among them for the unrolling factor.
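A tiny sketch of this selection; since each VF is a power of two, the least common multiple collapses to the maximum. Names are illustrative.

```java
// Pick the loop's unrolling factor from the per-graph vectorization
// factors: lcm of powers of two == max.
public final class UnrollFactor {
    public static int fromVectorizationFactors(int[] vfs) {
        int unroll = 1;
        for (int vf : vfs) {
            unroll = Math.max(unroll, vf);
        }
        return unroll;
    }
    // e.g. fromVectorizationFactors(new int[]{8, 4}) == 8: unrolling by 8
    // lets both the float and the double graph vectorize fully.
}
```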
### 4.2 Profitability of Potential SLP Graph
As it was mentioned in Section 4.1, a necessary step is deciding whether the Potential SLP graph is profitable, i.e.,
whether the unrolled scalar code will be considered profitable by the SLP vectorizer. This is done with the help of the compiler's target-specific cost model. The cost of each node is calculated as the difference $\text{VectorCost} - \text{ScalarCost}$, with negative cost values implying that the vector code performs better than the equivalent scalar code. The $\text{ScalarCost}$ of a node in the Potential SLP graph is the cost of its scalar instruction multiplied by the number of copies that will be produced after the loop is unrolled VF times. The $\text{VectorCost}$ is estimated assuming that all unrolled copies of the scalar instruction will be packed into a VF-wide vector. We also account for any additional costs related to inserting/extracting data to/from the potential vector instructions. For example, a vectorizable instruction in our Potential SLP graph may have uses outside the graph. In this case, we would have to extract the data from the vector (possibly with the help of some additional instructions) and feed it to its uses.
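A sketch of the per-node cost comparison under these assumptions; the cost inputs would come from the target cost model, and all names are illustrative.

```java
// Per-node profitability test for the Potential SLP graph:
// negative result means the vector form is estimated to be cheaper.
public final class NodeCost {
    public static int cost(int scalarInstrCost, int vf,
                           int vectorInstrCost, int extractCost,
                           int externalUses) {
        int scalarCost = scalarInstrCost * vf;       // vf unrolled copies
        int vectorCost = vectorInstrCost
                       + extractCost * externalUses; // data escaping the graph
        return vectorCost - scalarCost;              // < 0 => profitable
    }
}
```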
4.3 Widening Memory Instructions
While arithmetic, logical, and casting instructions are trivially vectorizable by simply widening the data type, memory instructions are more challenging. The best performing vector memory instructions are the ones accessing consecutive memory addresses. Therefore, we consider a memory instruction in the Potential SLP graph as vectorizable only if its unrolled copies point to consecutive memory addresses. If the addresses are not consecutive but instead follow a strided pattern with small constant strides, these memory instructions may also be vectorized, but this is not currently handled well by the SLP pass, so we consider them non-vectorizable. Modern processors do provide support for non-consecutive memory access patterns, but these are usually more costly than their consecutive counterparts; therefore, we need to account for this when widening [4].
This memory access analysis is performed symbolically using chains of recurrences [5, 13], implemented by LLVM’s scalar evolution framework (SCEV). Chains of recurrences (CR) are a formalism used to represent closed-form functions at regular intervals [32]. In compilers, it is largely used to represent induction variables and memory access patterns, allowing the compiler to reason about loops and memory operations in a systematic way. We use LLVM’s SCEV analysis to perform the memory access analysis, which determines which memory instructions in the Potential SLP graph can be vectorized.
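As an illustration, the check below assumes the access has already been reduced to an affine recurrence of the form base + i * stride; the types are illustrative stand-ins for SCEV expressions, not LLVM’s classes.

```
#include <stdbool.h>

/* Sketch: for an affine address {base, +, stride} evaluated at loop
 * iteration i, the VF unrolled copies (iterations i..i+VF-1) touch
 * consecutive memory exactly when the per-iteration stride equals
 * the element size in bytes. */
typedef struct {
    long base;    /* start address of the recurrence */
    long stride;  /* bytes added per loop iteration */
} AffineAddr;

static bool widenable(AffineAddr a, unsigned elem_size) {
    return a.stride == (long)elem_size;  /* adjacent unrolled copies */
}
```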
4.4 Dependence Analysis
The SLP pass relies on dependence analysis to check that the code semantics are not violated by vectorization. LLVM’s SLP implements this as part of a scheduling step, which tests whether the groups of instructions to be vectorized can be moved to a single point in the code without violating any dependencies. During the construction of the SLP graph, SLP tests whether the instructions are schedulable, and will only form a vectorizable group if they are. If not, the group node is labeled as non-vectorizable.
4.5 Partial Vectorization
VALU handles partial vectorization seamlessly. The Potential SLP graph grows until a load instruction or a non-vectorizable node is found. As long as the cost model estimates that it is profitable to vectorize a Potential SLP graph, it will be considered for vectorization, regardless of whether the Potential SLP graph is fully vectorizable or not. Figure 3 shows such an example, where VALU and SLP coordinate to partially vectorize a loop that contains indirect memory accesses. As we show in Section 5, this is an important advantage over the loop vectorizer.
4.6 Code Size Concerns
Although VALU will temporarily increase the size of the code and potentially increase the register pressure after unrolling, we rely on the SLP vectorization to bring the code of the unrolled loops close to their initial sizes. However, we cannot always avoid code size increase.
First, partially unrolling a loop may create extra code for maintaining the program’s semantics. For example, if the trip count is not divisible by the unrolling factor or the trip count is not statically known, we need to create a cloned loop to perform the remainder iterations after the unrolled loop [44].
Significant code size increase can also result from partially vectorizable loops. When a fully vectorizable loop is unrolled,
all unrolled copies will be grouped together in a vector form, canceling out the effects in code size. However, when an unrolled loop is only partially vectorizable, all copies of the non-vectorizable code will remain scalar. This is illustrated in Figure 5. After the loop gets unrolled and vectorized, the resulting loop will still contain multiple copies of the non-vectorizable code.
There is a way to mitigate this code increase if part of the non-vectorizable code is completely independent of the vectorizable code in the loop. We can perform loop distribution and only unroll the loop that contains the vectorizable code, as shown in Figure 5. This loop may still contain non-vectorizable code that interacts directly with the vectorizable part of the loop, but the impact on code size increase will be smaller. When this is not possible, we provide a threshold that specifies the minimum proportion of vectorizable code in a loop to consider unrolling it. Loops with little vectorizable code are ignored.
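The hypothetical loops below (not the paper’s Figure 5) illustrate the idea: the scalar recurrence is independent of the vectorizable work, so distribution lets only the second loop be unrolled.

```
extern int f(int);  /* assumed non-vectorizable scalar update */

/* Before distribution: unrolling would duplicate the recurrence too. */
int fused(int *a, const int *b, const int *c, int n, int s) {
    for (int i = 0; i < n; i++) {
        s = f(s);            /* non-vectorizable, independent of a[] */
        a[i] = b[i] + c[i];  /* vectorizable */
    }
    return s;
}

/* After distribution: only the second loop is unrolled and later
 * vectorized by SLP, so the non-vectorizable code is not replicated. */
int distributed(int *a, const int *b, const int *c, int n, int s) {
    for (int i = 0; i < n; i++)
        s = f(s);
    for (int i = 0; i < n; i++)
        a[i] = b[i] + c[i];
    return s;
}
```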
4.7 Forwarding Seeds to SLP
Straight-line code vectorization is a graph isomorphism problem and as such, an optimal solution has exponential time complexity. SLP Vectorizers [43] in production compilers are designed around heuristic-based algorithms that limit the exploration to instructions that have a good chance of success. They collect seed instructions (e.g., stores to consecutive memory addresses) and perform a localized exploration on the use-def chains rooted at these seeds. The collection of seed instructions, however, is both computationally expensive and is itself guided by heuristics whenever multiple grouping alternatives are available. This can lead to missed vectorization opportunities if the seed collection does not form a seed group with the instructions generated by unrolling. VALU can help by forwarding the seed instructions that drive its unrolling decision to SLP, effectively bypassing SLP’s seed collection for these instructions, and increasing the probability of success.
VALU collects the seeds during its Potential SLP graph formation. The Potential SLP graph is built from a single seed instruction. The unrolled copies of this single seed instruction will then become the seed instructions to form the first group node of an SLP graph. Instead of expecting SLP’s seed collection to find these same instructions and group them correctly, VALU can assist the SLP vectorizer. To achieve that, VALU keeps track of the unrolled copies of the profitable seed instructions while performing the unrolling and shares them with the SLP vectorizer. This guarantees that SLP will be applied on unrolled copies of instructions that are trivially isomorphic and profitable for vectorization. This is preferred to relying on SLP’s greedy seed collection, which may miss these vectorization opportunities in the unrolled code.
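A sketch of the bookkeeping only, with illustrative types; the paper does not describe the pass’s data structures at this level of detail.

```
/* Sketch: while unrolling, record the copies of each profitable seed
 * so they can be handed to SLP as a ready-made group. `Instruction`
 * and the fixed-size container are illustrative stand-ins. */
typedef struct Instruction Instruction;

typedef struct {
    Instruction *copies[64];  /* the VF unrolled copies of one seed */
    unsigned num_copies;
} SeedGroup;

static void record_copy(SeedGroup *g, Instruction *clone) {
    if (g->num_copies < 64)
        g->copies[g->num_copies++] = clone;  /* tracked during unrolling */
}
/* After unrolling, each SeedGroup is forwarded to SLP, which uses the
 * copies directly as its first vectorizable group, bypassing its own
 * greedy seed collection for these instructions. */
```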
There are two cases where seed forwarding is extra helpful: non-vectorizable stores and reduction computations.
4.7.1 Non-Vectorizable Stores
Figure 6 shows a loop with a store instruction which is non-vectorizable, due to its indirect addressing, but its value operand is part of a profitable SLP graph for vectorization. Since unrolling generates copies of the loop body, VALU is aware that although the store is non-vectorizable, it is possible that the unrolled copies of its value operand will result in isomorphic use-def graphs that are profitable for SLP vectorization. For this reason, if the store is non-vectorizable, VALU builds the Potential SLP graph starting from its value operand, as shown in Figure 6b. If this Potential SLP graph is profitable for vectorization, VALU forwards these as seed instructions for SLP.
Without seed forwarding from VALU, SLP performs seed collection on the already unrolled loop. For complexity reasons, LLVM’s SLP does not track these non-vectorizable store instructions; as such, it fails to collect their value operands as seeds and will not vectorize the code. The loop vectorizer cannot handle this loop either, as it requires partial vectorization.
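A hypothetical loop in the spirit of Figure 6 (not the paper’s exact code): the store through idx[] is non-vectorizable, but the use-def graph of its value operand is isomorphic across unrolled copies, so VALU seeds SLP from that operand.

```
/* Illustrative only: indirect store with a vectorizable value operand. */
void scatter(float *out, const int *idx,
             const float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        out[idx[i]] = a[i] * b[i] + a[i];  /* store non-vectorizable;
                                              its value computation is */
}
```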
4.7.2 Reduction Computations
VALU seed forwarding can also improve SLP vectorization of reductions. Figure 7b shows the use-def graph for the reduction from the loop shown in Figure 7a.
Currently, the SLP seed collection is performed by following the use-def chains, starting from the φ-node, grouping the first set of nodes that differ from the reduction operator. SLP considers all instructions with the same opcode of the reduction operator as part of the reduction computation. In the example shown in Figure 7, the SLP vectorizer collects all multiplication instructions as seeds and proceeds to form the SLP graph. A major problem arises with loop unrolling (Figure 7c), which generates copies of the loop body and makes it harder to identify the reduction and its immediate
operands. SLP may greedily select a group of additions as seeds, which may be a non-profitable group. However, it is trivial for VALU to identify the seeds highlighted in Figure 7c and forward them to the SLP vectorizer.
```
int sum = 0;
for (int i = 0; i < SIZE; i++) {
  sum += ((DAG#1)*(DAG#2)) + ((DAG#3)*(DAG#4));
}
```
(a) Reduction loop before unrolling. The DAGs represent subexpressions that may be different from one another.
(b) Use-def graph with reduction before unrolling.
(c) Use-def graph with reduction after unrolling. The DAGs represent subexpressions that may be different from one another.
Figure 7. Horizontal reduction before and after unrolling. We highlight the seeds for isomorphic graphs.
5 Experimental Results
5.1 Experimental Setup
Our evaluation platform is a Linux 4.4.27, glibc-2.22 based system with an Intel® Core i7-4770 CPU and 16 GiB of RAM. We implemented VALU as a standalone pass in LLVM 8, placed just before the SLP vectorizer in the compilation pipeline. We compiled all benchmarks using clang with the following flags: -O3 -ffast-math -march=native -mtune=native -mllvm -slp-vectorize-hor. These options enable the default loop unroller (DU) as well as both SLP and the Loop Vectorizer (LV).
We evaluate our approach on three benchmark suites\(^2\): TSVC [6], FreeBench [18], and MiBench [15]. First, we provide a detailed analysis on several of the TSVC kernels, which were specifically designed for evaluating vectorizing compilers. Then, we provide performance results on both FreeBench and MiBench, which include full benchmark programs from a wide range of application domains.
SLP, being a straight-line-code vectorizer, is not expected to find many opportunities for vectorization in the TSVC kernels, which is exactly what makes it a great suite for evaluating the effectiveness of VALU. Since the TSVC suite contains a large number of kernels (151), we only show the kernels with a performance difference of at least 2% compared to the baseline; in total, 52 kernels are hidden from the plots. Regardless, geometric means and averages refer to all 151 TSVC kernels. For our performance results, we ran each workload 25 times and show the arithmetic average of the speedup across all runs, as well as the 95% confidence interval of the speedup as a min-max bar.
5.2 Overall Performance
The performance speedup of enabling VALU over -O3 is shown in Figure 8a. VALU significantly improves the LLVM baseline with a speedup of up to 2×, and a geometric mean of 1.05× (5% improvement) across the whole benchmark suite. This is a promising result, given the heavily optimized baseline and that for most kernels there is little room for improvement when applying SLP.
As we discuss later in Section 5.3, many of the significant speedups shown in Figure 8a are due to partial vectorization enabled by VALU, such as the kernel S255. However, the few regressions observed, more specifically the kernels S258 and S292, also represent two loops that get unrolled by VALU and later partially vectorized by SLP. Both VALU and the SLP vectorizer rely on the compiler’s built-in cost model when checking for profitability, which can cause performance regressions when the cost model contains inaccuracies. The rest of the results show the expected behavior: better costs lead to better performance.
Figure 8b isolates the effect of more intelligent unrolling on SLP vectorization. It shows the speedup of VALU over LLVM’s default loop unroller with SLP vectorization enabled but loop vectorization disabled. In other words, the baseline is using the additional -fno-vectorize and -fslp-vectorize flags, and we show the speedup due to enabling VALU over this setting. Since VALU is well coordinated with the requirements of SLP, it is expected that more code will get vectorized compared to the default loop unroller. This figure supports our argument that the default loop unrolling heuristics are inappropriate for preparing code for the SLP vectorizer. VALU uncovers vectorization opportunities that result in speedups of up to 6× compared to the default loop unroller, with a geometric mean of 1.29× (29% improvement) across all 151 kernels in the TSVC benchmarks.
Figure 8c compares SLP against loop vectorization. The baseline is -O3 with loop vectorization but without SLP (-fno-slp-vectorize). The figure shows the speedup over this baseline with the loop vectorizer disabled, SLP enabled, and either the default loop unroller or VALU enabled. The figure highlights two key points that were motivated in Section 3: (i) VALU enables SLP to handle loops where the loop vectorizer fails, and (ii) VALU helps to close the performance gap between SLP and the loop vectorizer. A good coordination between the loop unroller and the SLP vectorizer is essential for SLP to reach, or even exceed, the performance of the loop vectorizer.
Although SLP combined with VALU can cover many of the same loops covered by the loop vectorizer, there are still multiple cases where the loop vectorizer generates faster code than SLP, even when combined with the VALU unroller. In most of them, this is due to missing features in LLVM’s SLP implementation.
In the following sections, we discuss the key strengths of VALU as well as how LLVM’s SLP implementation could be improved. Finally, we report the compilation overhead of our approach.
5.3 Overall Analysis of the Performance Results
As expected, the loop vectorizer performs very well on this loop-only benchmark suite. However, there are two classes of loops where VALU+SLP outperforms the loop vectorizer: (1) loops that contain loop-independent dependences; and (2) loops that can only be partially vectorized. Because SLP operates on groups of use-def graphs separately, it is able to handle loop-independent dependences out of the box, leaving the problem of placing the vectorized instructions to the scheduler (see Section 4.4). Similarly, because SLP grows its graph until the point where it is no longer vectorizable, partial vectorization is intrinsic to it. As long as the SLP graph is considered profitable, it will be vectorized.
We can also divide the loops where VALU+SLP misses performance opportunities into two classes: (1) reduction computations; and (2) loops with control flow that requires predication. Overall, LLVM’s loop vectorizer supports more idioms than its SLP implementation, resulting in missed opportunities for VALU+SLP. We discuss all these cases in detail in the subsequent subsections.
5.3.1 Loop-Independent Dependences
Loop-independent dependences are dependences between different instructions within the same iteration of a loop. These add complex data dependencies that require instruction reordering before vectorization; kernel S241 (Figure 9) is such a case, and the VALU+SLP vectorized version results in gains higher than 2\(\times\) over -O3.
5.3.2 Partially Vectorizable Loops
One benefit of VALU+SLP over LLVM’s LV is that it can partially vectorize loops containing non-vectorizable code. The loop in Figure 10, taken from the kernel S4114, is such a case. It contains an indirect memory access \( c[\text{LEN}-k+1-2] \) that cannot be vectorized. While the loop vectorizer bails out completely, VALU+SLP vectorizes it partially, improving the performance of this loop by about 50%.
Specifically, if VALU unrolls the loop, SLP can partially vectorize the code and leave the indirect memory access. This means that the scalar loads \( c[\text{LEN}-k+1-2] \) must be inserted into a vector, but this overhead is taken into account by our Potential SLP analysis and is found to be profitable.
Other kernels that also include indirect addressing are S4112 and S4117, which also result in significant speedups. In addition to indirect memory accesses, there are many other loops that are partially vectorized by VALU+SLP that LLVM is unable to handle, such as S2251, S244, S255, and S291.
5.3.3 Seed Forwarding
VALU’s seed forwarding mechanism is an effective way of overcoming major limitations in existing vectorizers. Figure 11 shows a loop that is poorly vectorized by SLP without the assistance of VALU’s seed forwarding, as the computation being stored in adjacent addresses is not fully isomorphic. However, VALU groups the interleaved stores that are adjacent in memory into a seed group and forwards it to the SLP vectorizer, which can then vectorize the loop.
Figure 9. Kernel S241 with complex data dependencies that require instruction reordering before vectorization. VALU+SLP vectorized version results in gains higher than 2\(\times\) over -O3.
Figure 10. Kernel S4114 with indirect addressing. VALU+SLP version achieves about 1.5\(\times\) speedup over -O3.
Figure 11. Kernel S127. Loop shows an induction variable with multiple increments. Example where forwarding seeds makes life easier for the SLP vectorizer.
Figure 12. Kernel S272. This loop has a conditional branch. LLVM’s loop vectorizer is able to vectorize this loop using predication, which is not yet supported by the SLP implementation.
5.3.4 Reduction Computations
The loop vectorizer in LLVM is able to generate efficient code for reductions, which accounts for all the exceptionally well performing cases of LV. Although VALU is able to identify reductions, including max- or min-reductions, which are lowered into select-based reductions, LLVM’s SLP implementation has limited support for them. The two most serious limitations are that it cannot handle product-based reductions and that it reduces the vector lanes inside the loop instead of outside it. The former makes it impossible to vectorize cases that the loop vectorizer handles, while the latter reduces the benefits of vectorization.
The list of kernels with reductions in Figure 8c includes: S13110, S1111, S131, S312, S313, S314, S316, S317, S319, S3113, S352, vdotr, and vsumr. Kernels S3111 and S352 contain a reduction where the inner loop has already been unrolled, so the loop vectorizer is unable to handle them. For the other kernels with reductions, however, the loop vectorizer is able to generate very efficient code.
5.3.5 Predicated Vectorization
The loop vectorizer is also able to effectively handle loops that contain conditional branches, such as the loop in Figure 12, taken from the kernel S272. In these cases, it generates a vectorized code that uses masks to predicate the execution for particular vector lanes.
Similar cases of predication, with varied levels of complexity, can be found in the kernels: S1161, S124, S1279, S253, S271, S2710, S2711, S2712, S272, S273, S274, S441, S443, and
vif. For all of them, we are limited by the implementation of SLP in LLVM, which does not support predicated SLP vectorization, despite proposed techniques to achieve this [45]. In such cases, our unrolling technique has no effect, so we only consider single-block loops in our heuristic.
5.4 Compilation Time
We measured the wall clock time for compiling the full TSVC benchmark suite using O3+VALU and normalizing it to O3. Enabling VALU leads to a modest overall compilation overhead of 16% over O3, considering the whole compilation pipeline. Most of this overhead is due to the fact that after loop unrolling, subsequent optimizations, including the SLP vectorizer, and the backend will have more code to process.
Interestingly, if we compare VALU+SLP with the loop vectorizer (LV), VALU+SLP results in about 8% faster compilation. This shows that the compilation overhead of VALU+SLP is within acceptable bounds. The difference in compilation time comes from several sources, including the time spent during vectorization itself, but also the fact that loop unrolling can still be applied after the loop vectorizer.
5.5 Performance on Full Benchmarks
The kind of code accelerated by VALU is not found only in benchmark suites designed to test vectorizers. We tested VALU on the benchmarks of the FreeBench and MiBench suites, on top of the baseline -O3, which already includes both vectorizers and the default loop unrolling. As shown in Figure 13, VALU achieves a geometric mean speedup of 6%. Five benchmarks improve their performance by more than 10%, with stringsearch getting 45% faster.
Figure 13. Speedup of O3+VALU over O3 on full benchmarks.
6 Related Work
6.1 Loop Unrolling
Loop unrolling is a well-studied code transformation technique, implemented in most compilers. There is a wide range of studies on loop unrolling [9, 30]. Traditionally, this was applied only to FOR-loops at the source level [1]. Later, more general techniques have been proposed to perform loop unrolling [16, 44], including nested and remainder loops.
Unroll-and-jam is a loop unrolling technique for outer loops, unrelated to vectorization. With unroll-and-jam, the compiler unrolls outer loops and then fuses the unrolled copies of the inner loops [7, 8, 34]. Similarly, Ferrer et al. [14] show how to unroll loops that already contain OpenMP task parallelism, fusing the tasks after unrolling to reduce unnecessary multi-threading overheads.
VALU is the first loop-unrolling technique, to the best of our knowledge, that performs vectorization-aware unrolling. Unlike prior unrolling work that aims at balancing code size increase with improving the applicability of generic optimizations, VALU is able to identify loops that are valid and profitable to be vectorized.
There has also been a significant amount of work on iterative optimization and other approaches for tuning the unrolling factor [20, 21, 24, 46]. However, even if these approaches manage to find the best unrolling factor to uncover SLP vectorization, which is usually infeasible on a per-loop basis, they are still insufficient to vectorize those loops that require VALU’s seed forwarding. As described in Section 4.7, there are cases where SLP may be unable to properly identify the seed instructions needed to vectorize the unrolled loop.
6.2 Loop and Function Auto-Vectorization
Auto-vectorization techniques have traditionally focused on vectorizing loops [49]. The basic implementation conceptually strip-mines the loop by the vector factor and widens each scalar instruction in the body to work on multiple data elements. The effectiveness of loop vectorizing compilers has been studied by Maleki et al. [26]. Many fundamental problems of loop vectorization have been addressed by early work on the Parallel Fortran Converter [2, 3, 11, 22, 48]. Since then, numerous improvements to the basic algorithm have been proposed in the literature and production compilers [4, 12, 31, 32, 42]. For example, Stock et al. [47] uses machine learning to train a profitability model for the loop vectorizer.
Whole function vectorization has been proposed by Karrenberg et al. [19, 41]. This is particularly important for mapping programming models like OpenCL onto vector units. A different approach is presented by Masten et al. [27], which discusses how function/kernel vectorization can be presented as a loop-vectorization problem. Finally, Moll et al. [29] present a novel control-flow linearization algorithm for use in function/kernel vectorizers.
6.3 SLP Auto-Vectorization
A complementary technique to the loop vectorizer has been introduced by Larsen and Amarasinghe [23], the SLP vectorizer, which focuses on straight-line code. Since its original work, several improvements have been proposed for the straight-line-code (SLP-style) vectorization [17, 25, 28, 33, 45].
Combining loop-vectorization with SLP was proposed in loop-aware SLP [43] and implemented in GCC. This work combines SLP-style parallelism with the loop vectorizer, which allows it to vectorize both across iterations and within a single iteration. Zhou et al. [51] improve this technique by extending the exploration performed by the algorithm, improving the effectiveness of the mixed inter and intra-loop vectorization. Both approaches rely on SLP-style parallelism that must already be exposed in the loop body, which means that VALU would be complementary to them. This is different from our work³.
7 Conclusion
This paper presented Vectorization-Aware Loop Unrolling (VALU), a novel compiler heuristic for identifying loop unrolling opportunities that enable straight-line-code vectorization. VALU does so by identifying whether loop unrolling will be profitable for the SLP vectorizer and which loop unroll factor can maximize the utilization of the target architecture’s vector units. VALU determines the unroll factor by employing Potential SLP, a novel vectorization and profitability analysis applied to the original rolled loop as if it were unrolled. We implemented VALU in LLVM and evaluated it on the TSVC vectorization testing suite. Our experimental results show a large SLP vectorization improvement compared to LLVM’s default loop unrolling heuristic, and very significant performance improvements over -O3.
³GCC fails to vectorize many of the loops solved by VALU: https://godbolt.org/z/mfE3Hb
IMS Transaction Manager
Your Enterprise Transaction Manager
April 2012
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
**z/OS Middleware**
- **Middleware, also called Subsystem in the z/OS environment**
- A layer between the operating system and an end user or end-user applications
- Often includes an *application programming interface* (API)
- **Typical z/OS middleware**
- Database systems
- Web servers
- Message queuing and routing functions
- Transaction managers
- Java virtual machines
- …
z/OS Middleware – To Allow Evolution while Protecting Investment
(Diagram: evolution over time from "dumb" terminals, to client/server GUI front-ends on personal computers, to browser-based e-business across the Internet and the enterprise network. Web and application servers front-end the same core business systems, applications, and databases, protecting the application investment.)
Transaction Management - Definitions
- **What’s a Transaction?**
- “an indivisible unit of work, comprised of several operations, all or none of which must be performed in order to preserve data integrity” (source: JavaWorld, July 2000)
- A request and execution of a set of programs, performing business functions and accessing and/or updating shared databases on behalf of a user.
- **Properties of a transaction**
- Atomicity: This implies indivisibility; any indivisible operation (one which will either complete fully or not at all) is said to be atomic.
- Consistency: A transaction must transition persistent data from one consistent state to another. If a failure occurs during processing, the data must be restored to the state it was in prior to the transaction.
- Isolation: Transactions should not affect each other. A transaction in progress, not yet committed or rolled back, must be isolated from other transactions. Although several transactions may run concurrently, it should appear to each that all the others completed before or after it; all such concurrent transactions must effectively end in sequential order.
- Durability: Once a transaction has successfully committed, state changes committed by that transaction must be durable and persistent, despite any failures that occur afterwards.
- **What’s a Transaction Monitor / Manager**
- A program or subsystem that manages or oversees the sequence of events that are part of a transaction
- Makes sure the ACID properties of a transaction are maintained (see the sketch after this list)
- Includes functions such as interfacing to databases and networks and transaction commit/rollback coordination
- Provides an API so applications can exploit the services of the transaction monitor
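As a minimal illustration of these properties, the sketch below uses hypothetical tx_begin/tx_commit/tx_rollback and debit/credit calls, not any real transaction manager API: either both updates commit, or neither does.

```
extern void tx_begin(void);
extern void tx_commit(void);
extern void tx_rollback(void);
extern int  debit(long account, long amount);   /* 0 on success */
extern int  credit(long account, long amount);  /* 0 on success */

int transfer(long from, long to, long amount) {
    tx_begin();
    if (debit(from, amount) != 0 || credit(to, amount) != 0) {
        tx_rollback();  /* atomicity: undo any partial update */
        return -1;
    }
    tx_commit();        /* durability: both updates now persist */
    return 0;
}
```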
Transaction management – A key z/OS strength! ...
- A key strength of the z/OS platform is support for high-volume, high-performance transaction management using transaction managers
- Scalable
- Optimized for mixed workload
- Highly available
- IBM’s z/OS-based transaction managers
- CICS - Customer Information Control System
- IMS TM - Information Management System Transaction Manager
- WebSphere Application Server for z/OS
Transaction management – A key z/OS strength! …
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
A z/OS middleware that inherits all the strengths of zEnterprise
A Messaging & Transaction Manager
- Based on a messaging and queuing paradigm
- Asynchronous data flow
- A real benefit in case of a surge of traffic, or when users are unavailable to receive their transaction answers.
- High-volume, rapid response transaction management for application programs accessing IMS and/or DB2 database, MQ queues
- Managing the application programs — dispatching work, loading application programs, providing locking services
- “Universal” Application Connectivity
- Manages input and output messages from network (3270s, APPC, TCP/IP, WebSphere MQ, etc.)
A Batch Manager
- Standalone z/OS batch support
- Batch processing region centrally managed by the IMS control region
- Managing the batch-oriented programs — providing checkpoint/restart services
A Database Manager
- Central point of control and access for the IMS databases based on a hierarchical database model
- Used by companies needing high transaction rates
- Now provide a “Universal” Database Connectivity based on JDBC / DRDA
- Lots of new features in that space! Stay tuned
IMS – High level View
- **IMS Transaction**
- No presentation layer
- Very simple design
- Get Input Message
- RM calls
- ISRT Output Message
- “execute” and “forget”
- **IMS Batch**
- BMPs Msg Driven or non Msg Driven
- Standalone (no picture here)
- **Access to Resource Managers (RM)**
- IMS database – Hierarchical data model
- DB2 database – Relational data model
- MQ Queues
- Web Services using WOLA – WebSphere Optimized Local Adapter
- **IMS Presentation Layer (MFS)**
- Description of input and output messages and device map
- Not used in client/server implementations
IMS Transaction Manager and Database Manager for z/OS
Long term product plans
▪ **Extend the lead in availability, scalability and performance**
– Continue to ensure IMS capacity limits are well beyond customer needs
– Continue to evolve IMS definition and configuration processes to be more dynamic and not require IMS system outages
– Expand Active-Active Environment and IMS Replication capabilities
▪ **Reduce cost of ownership**
– Reduce MIPS usage by IMS to help reduce cost
– Simplify management of IMS systems as well as IMS application development to do more with less staff
– Advance autonomics to make the system more self-managing and self-tuning
▪ **Application simplification and enablement**
– Increase support for application and database access to IMS through standard APIs: SQL, Web Services, Java EE, .NET
– Improve ease of use for application development with graphical assist and centralized IMS metadata support
– Enhance and simplify integration of IMS assets with SOA, other Web solutions, decision support solutions and other IBM products
▪ **Enable high-volume transaction processing for next wave of applications**
– Continue investment in IMS TM including: IMS Connect, Open Transaction Manager Access (OTMA), IMS TM RA and SOAP Gateway
State of the IMS Business 2011
- **IMS TM/DB runs CORE business applications**
- Most companies already run IMS for these applications!
- ATM networks, core banking, bill of materials applications, auto/airline maintenance, insurance policy/claims.
- Supporting millions of internet users
- Handling thousands of transactions per second
- Ensuring 24x7 service availability
- **New Customers**
- Mergers and Acquisitions
- New applications built on IMS TM
- e.g., TARGET2-Securities (T2S) project for the EU
- Consolidation of Transaction Managers
- Strong potential in emerging GEOs
- 2 POCs being driven now in Russia for IMS TM/DB
- **Most growth is additional workload from existing customers**
- IMS MIPS have doubled over last 5 years.
- Over 50% of IMS customers grew transaction workload in 2010.
- New applications and workloads onto IMS
---
**Overall IMS Customers**
- 65% IMS TM/DB
- 32% IMS DB only
- 3% IMS TM only
**Top 50 IMS Customers**
- 43 run IMS TM/DB
- 3 are IMS TM only
- 3 are DBCTL
- Over 50% run with SMQ
- 27 are Fastpath
IMS Strengths
- **Quality**
- IMS has best customer satisfaction in IBM SWG
- PE (PTF in Error) rate halved over past 5 years.
- Field Apar Rate improved consistently Version to Version
- **Reliability**
- Many customers go years without an unplanned outage
- In cases of hard downs (power outages etc) IMS recovers gracefully
- Numerous features for high availability
- Including Sysplex support, Shared Message Queues, Data Sharing
- Data integrity problems very rare
- **Performance/Scalability**
- Lab benchmark with single system IMS 12, z196
- 46,000 trans/sec Fastpath application with database update and 30,000 simulated network clients!
- Customers running >7500 trans/sec, 200M+ trans/day
- DL/I databases are extremely efficient, using less DASD space and providing faster access than relational databases.
- Continuous improvements in MIPS consumption, offload capabilities
- **Modern**
- IMS today is “open” as a server and as a client, through industry standard interfaces.
- Direct access to IMS transactions and data from distributed systems
- Integrated with standard tooling, BI solutions, Web 2.0
- Rich support for Java, SQL, .NET
- Sophisticated Web Services implementation with support for top down WSDL definition
IMS Strategy
- **Modernize Application Interoperation/Integration**
- Standard Tools/Interfaces to Speed Deployment
- **Streamline Installation/Management**
- Simplify Interfaces, Ease Operations
- Heighten Availability, Increase Productivity
- **Enable Efficient Growth**
- Alleviate Bottlenecks
- Reduce costs
- Optimize performance and resilience
Why customers use IMS TM today?
- **Hosting business-critical high-volume transactional or batch-oriented applications**
- With 24/7 possible availability of application environment
- With goal-oriented workload management
- With security
- **Protecting investment in applications and ensuring upward compatibility for over 40 years**
- Integrated message queuing, transaction processing and data base management
- Business still relies on existing application constantly updated to adapt to new business needs
- No need to recompile applications when changing middleware, z/OS or HW
- **Running on the most scalable and most robust IT infrastructure**
- IMS component architecture in conjunction with z/OS features
- **Optimizing CPU and storage consumption when using IMS hierarchical data model**
- 64 bit Data-In-Memory solution with asynchronous I/Os on physical data on disk (DEDB)
- Partitioning solution to parallelize I/Os without application changes (DEDB, HALDB)
- **Integrated access to DB2 relational databases and MQ queues on z/OS**
- Guaranteed integrity (Two Phase Commit)
- Transactional and batch support (BMP) with dynamic backout capabilities
- Easy to use batch checkpoint/restart mechanisms
- Coordinated recovery solution to reduce impact of locked resources after an unplanned outage (FDBR)
The Modern « Application Container » Label
- **« Application Container » requirements** *
- Simple programming model
- Transactional management – ACID properties
- Optimized management of data and network connection
- Solution for in-memory data
- Support application interoperability
- Support for event-management
- **« System Infrastructure » Requirement** *
- Elastic scalability
- Optimized management of system resources (memory, processes, pools, …)
- Optimized workload management
* Summarized from Gartner documentation
IMS as Modern « Application Container »
- Running on z/OS and System z, the Optimized “System Infrastructure”
- Simple programming model
- Get message, send message
- Multi-segment support allowing large messages
- “Execute” and “forget”
- Transactional management optimized for over 40 years
- Universal interoperability with network and applications
- Solution for in-memory data with DEDB 64 bit addressing
- Supported by IMS DL/I calls – simple API
- Supported by JDBC today, COBOL SQL in the future
- Support any language including Java (transactional and batch)
- Support application interoperability between IMS applications
- Prog-to-prog inside IMSPlex environment
- MSC between IMS environments
- Support application interoperability outside of IMS environment
- SOA standard support
- IMS as a server or as a client
- Synchronous and asynchronous capabilities
- Support for event-management
- Event could be sent by MQ message or by using IMS API (ISRT ALTPCB)
- Changed data can be captured and sent using InfoSphere solutions
# IMS TM in Perspective
## Native Quality of Services
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Recognized Business Logic Container</td>
<td>IMS TM since 40+ years – Investment protection</td>
</tr>
<tr>
<td>Optimized integration with a database manager to optimize throughput with low resource consumption</td>
<td>IMS TM & IMS DB as single subsystem for transaction and database management</td>
</tr>
<tr>
<td>High transactional throughput</td>
<td>IMS TM since 40+ years</td>
</tr>
<tr>
<td>Batch support</td>
<td>Online batch with BMPs / Standalone IMS Batch</td>
</tr>
</tbody>
</table>
## Application Development
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Multi-language AD support</td>
<td>COBOL, PLI, C, … JAVA</td>
</tr>
<tr>
<td>THE enhanced development platform</td>
<td>Rational Developer for zEnterprise</td>
</tr>
<tr>
<td>Asset analysis</td>
<td>Using Rational Asset Analyser</td>
</tr>
</tbody>
</table>
## Access to external resource managers (in addition to IMS Databases) on same z/OS platform
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Access to DB2 data under Two-Phase Commit protocol</td>
<td>IMS transactions, BMPs – using SQL or Java JDBC</td>
</tr>
<tr>
<td>Access to Master Data directly when hosted in DB2 for z/OS</td>
<td>MDM Server “Query” Connect</td>
</tr>
<tr>
<td>Access to WebSphere MQ under Two-Phase Commit protocol</td>
<td>IMS transactions and BMPs – using MQ API (explicit)</td>
</tr>
<tr>
<td>Access to Web Services</td>
<td>IMS transactions and BMPs – using WOLA API</td>
</tr>
</tbody>
</table>
## Business Integration
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Universal access to IMS Queue</td>
<td>Open Transaction Manager Access / No change in IMS applications</td>
</tr>
<tr>
<td>Access from any MQ Server</td>
<td>MQ IMS Bridge – MQ Trigger Monitor</td>
</tr>
<tr>
<td>Access from any WAS server</td>
<td>IMS TM Resource Adapter for JCA, MQ IMS Bridge for JMS, IMS SOAP Gateway for web service</td>
</tr>
<tr>
<td>IBM Enterprise Service Bus & BPM Integration with IMS applications</td>
<td>IMS support in the 3 IBM ESBs: Datapower, WESB, WMB Support inbound or outbound integration</td>
</tr>
<tr>
<td>Fast integration in Web 2.0 applications</td>
<td>IMS Mashup solutions</td>
</tr>
<tr>
<td>Optimized WAS for z/OS & IMS Integration</td>
<td>WOLA – Inbound and outbound</td>
</tr>
</tbody>
</table>
## Decision Support
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Access to Business Rules</td>
<td>IMS TM & WODM integration</td>
</tr>
<tr>
<td>Generation of Business Events</td>
<td>IMS TM & WODM integration</td>
</tr>
</tbody>
</table>
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
IMS Architecture – Proven & Innovative Technology
- **Multi-address space architecture with one single point of control**
- Control region controls up to 1000 « service » address spaces.
- DBRC centralizes all backup and recovery information.
- **Tight integration of messaging, TM and DB activities**
- Sharing IMS system components (logging, pool management, …)
- Transactional workload as well as batch workload
- Optimized access to IMS DEDBs – high volume – high performance – low CPU
- **Optimized parallel processing inside an IMS environment**
- Multi-threading and multi-tasking
- Rich scheduling capabilities including Serial mode, Pseudo-WFI, WFI
- Transaction / Processing Class / MPP
- z/OS Resource allocation based on z/OS WLM definitions
- **Optimized workload balancing in an IMS Shared Queue environment**
- « Pull » instead of « Push »
- Routing at different level: network entry (see VGR or sysplex distributor), IMS connect, or IMS Shared Q
- **Transparent connectivity between IMS systems geographically dispersed**
- MSC (Multiple Systems Coupling) using VTAM or TCP/IP networks
- Asynchronous IMS-IMS TCP/IP support
IMS Architecture – Proven & Innovative Technology …
- **Tight Integration with z/OS**
- Continuous application availability, thanks to a robust inter-system coupling solution, aka parallel sysplex
- Continuous IT operations for system or maintenance upgrades
- Elastic scalability thanks to adequate resource allocation of computing resources based on workload priority
- Mixed workload support (transactional & batch, from assembler to java, …) and best of breed workload management
- “Bulletproof” system recoverability without data loss (except in case of bug)
- Focus on outage prevention
- Optimized parallel computing with efficient latch/lock management
- Exploitation of z/OS capabilities e.g., the use of extended format data sets and striping to improve logging bandwidth
- 64-bit support
- **Security**
- Based on z/OS Security Server
- User authentication
- User authorization at transaction / program / database level
IMS Architecture – Proven & Innovative Technology …
- **System Updates - easy to skip releases of IMS**
- Supported migration paths from 9 to 11, 10 to 12
- Customers can also make bigger version jumps, without fallback capabilities
- From 5 to 10, or 6 to 10, …
- **System Updates – without impacting investment in business logic**
- Application is not required to be modified or even re-compiled or re-bound
- Even when the physical structure of a database is changed, e.g., from Full Function to HALDB
- Or when new capabilities are leveraged, e.g. Shared Queues or Data Sharing
- Or even when the communications interface changes
- **Numerous continuous availability features**
- On one site, on 2 sites, on 3 sites geographically dispersed
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
Universal Interoperability with network and applications
- **Support for routinely large number of concurrent accesses from terminals and/or applications**
- **Evolution from terminal to « client/server » to « browser / processes & services » without application change**
- Presentation layer outside of IMS application
- Application interface based on input and output message descriptions
- OTMA as universal protocol to access IMS TM – many OTMA clients
- IBM: IMS Connect, MQ bridge, DB2 Stored procedure
- Non IBM: TIBCO, …
- High performance TCP/IP access thru IMS Connect
- Enhanced by IMS Connect Extension functionalities
- Parallel processing of incoming requests – multiple ICON address spaces
- Highly available configuration
- The basis for many integration solutions
- **Integration between IMS applications**
- “Prog to Prog” inside a single IMS, or inside an IMS Shared Queue environment, or across a TCP/IP link with IMS Connect
- MSC between any IMS environment (locally or geographically dispersed)
Universal Interoperability with network and applications ...
- **Integration of IMS applications with other service providers**
- SOA Integration
- WOLA IMS Support
- Support for lightweight web application with mashups
- **Flexible and high performance connectivity**
- VTAM generic resource capability
- Across the different LU types
- TCP/IP IP spraying and load balancing support, e.g., with Sysplex Distributor
- IMS Connect can be configured to access multiple IMS systems in the same or different LPARs or multiple IMS Connects can access a single IMS
SOA Connectivity with IMS TM - Inbound to IMS
(Diagram: inbound access paths to IMS. WebSphere servers (WAS, WESB, WTX, WMB, BPM) come in through the IMS TM Resource Adapter (JCA); web service consumers through the WebSphere SOAP Gateway or WebSphere DataPower; Web 2.0 clients through IBM Mashup Center / WebSphere sMash; RYO clients through the Connect API (Java, C); MQ clients (JMS or MQ API, WMB & DataPower) through the MQ IMS Bridge or MQ Trigger Monitor. All paths reach the IMS business logic and data access (IMS DB & XML DB, DB2) through IMS Connect and OTMA, with WOLA as the optimized local path from WAS.)
Legend: WAS – WebSphere Application Server; WOLA – WebSphere z/OS Optimized Local Adapters; WESB – WebSphere Enterprise Service Bus; WTX – WebSphere Transformation Extender; WMB – WebSphere Message Broker; BPM – IBM Business Process Manager (BPM) Advanced.
SOA Connectivity with IMS TM - Outbound from IMS
- Asynchronous and synchronous capabilities
Legend: WAS – WebSphere Application Server; WOLA – WebSphere z/OS Optimized Local Adapters; WBE – WebSphere Business Events; WBM – WebSphere Business Monitor; WMB – WebSphere Message Broker; RYO Server – .Net, BizTalk, Oracle SP, SAP, PayPal services, and any application server, etc.
# Integration and Connectivity Features Summary
<table>
<thead>
<tr>
<th>Integration and Connectivity Features</th>
<th>IMS</th>
</tr>
</thead>
<tbody>
<tr>
<td>SNA support - LU0, LU1, LU2, LU6.1, LU6.2</td>
<td>All LU types including SLUP</td>
</tr>
<tr>
<td>TCP/IP native support</td>
<td>Y, with IMS Connect (ICON) as a high-performance gateway; IMS Connect API for easy TCP/IP client development</td>
</tr>
<tr>
<td>WebSphere MQ support</td>
<td>WMQ Bridge and Trigger Monitor</td>
</tr>
<tr>
<td>SOAP support</td>
<td>Y with IMS SOAP Gateway on z/OS or distributed</td>
</tr>
<tr>
<td>XML messages – transport level & data store level</td>
<td>Transport supported by IMS Connect; storage in IMS databases</td>
</tr>
<tr>
<td>Java Connector Architecture (JCA, J2C)</td>
<td>Y using IMS TM Resource Adapter & ICON</td>
</tr>
<tr>
<td>JMS</td>
<td>Y, based on MQ & IMS support</td>
</tr>
<tr>
<td>Web Services Provider (inbound)</td>
<td>Y</td>
</tr>
<tr>
<td>Web Services Consumer (outbound)</td>
<td>Y, synchronously or asynchronously</td>
</tr>
<tr>
<td>Restful Services support on top of HTTP</td>
<td>Y</td>
</tr>
<tr>
<td>Web 2.0 (Atom) support</td>
<td>Y</td>
</tr>
<tr>
<td>Business Events Processing</td>
<td>Y, with IMS application modification</td>
</tr>
<tr>
<td>IBM ESB - WebSphere Message Broker support</td>
<td>Y, inbound to IMS with IMS Connect or MQ; outbound with MQ</td>
</tr>
<tr>
<td>IBM ESB - Data Power appliance</td>
<td>Y, inbound to IMS with IMS Connect or MQ; outbound with MQ</td>
</tr>
<tr>
<td>Service Flow</td>
<td>With BPM</td>
</tr>
<tr>
<td>IDE Tool</td>
<td>RDz + IMS Explorer for Dev</td>
</tr>
</tbody>
</table>
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
IMS Application Support – Design & Development
- **Supports many languages including Java**
- Assembler (yes, still used), Cobol, PL/I, C/C++, REXX and Java
- Allows interoperability between Cobol or PL/I and Java in MPP/BMP/IFP regions
- e.g., Cobol calling Java or Java calling Cobol
- Specific processing regions for Java transactions (JMP) and Java batch (JBP) based on the z/OS optimized JVM
- **Support for a simple programming model for IMS application**
- No presentation layer imbedded in IMS logic
- Very simple design: Get Input Message, Access resource Managers, ISRT Output Message
- “Execute” and “Forget” - No affinity with the middleware or OS (as best practice)
- IMS call for application logging service inside the centralized IMS log
- **Based on a simple IMS API for IMS TM**
- GU IOPCB call to get the input message and ISRT IOPCB call to send the output message (see the sketch after this list)
- ISRT ALTPCB call to send a message to an alternate destination, i.e., another IMS transaction, terminal, remote program, EJB, web service, …
- Additional API for IMS DB Access
- GHU, GU, GHN, GN, GNP, ISRT, REPL, DLET calls
- Other supported API
- JDBC to access IMS databases
- Exec SQL or JDBC to access DB2 databases
- MQI to access MQ queues
- WOLA API to access EJB or web service
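The loop implied by these calls is small enough to sketch. Below is a minimal illustration in Java of the Get Input Message / run business logic / Insert Output Message pattern; `ImsIo`, `getUniqueMessage`, and `insertMessage` are hypothetical stand-ins for the real GU IOPCB / ISRT IOPCB calls (stubbed with an in-memory queue so the sketch runs on its own), not an actual IMS API.

```java
// Hypothetical sketch of the GU IOPCB / ISRT IOPCB message loop.
// ImsIo stands in for the real IMS message-queue interface.
import java.util.ArrayDeque;
import java.util.Queue;

public final class ImsTransactionSketch {

    record InputMsg(String text) {}
    record OutputMsg(String text) {}

    // Stand-in for the IMS message queue; a real program would call IMS.
    static final class ImsIo {
        private final Queue<InputMsg> queue = new ArrayDeque<>();
        ImsIo() { queue.add(new InputMsg("ORDER 42")); }
        InputMsg getUniqueMessage() { return queue.poll(); }  // GU IOPCB
        void insertMessage(OutputMsg m) {                     // ISRT IOPCB
            System.out.println("reply: " + m.text());
        }
    }

    public static void main(String[] args) {
        ImsIo io = new ImsIo();
        InputMsg in;
        // Get Input Message, access resource managers, Insert Output Message
        while ((in = io.getUniqueMessage()) != null) {
            io.insertMessage(new OutputMsg("OK " + in.text()));
        }
        // "Execute and forget": no state is kept across messages
    }
}
```

A COBOL or PL/I MPP expresses the same loop with GU and ISRT calls against the IOPCB.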
**IMS Application Support – Design & Development**
- **Supported by enhanced IBM Enterprise Modernization tools**
- Collaborative design and lifecycle management with Rational Team Concert (RTCz)
- Development with Rational Developer for zEnterprise (RDz)
- Tools provide code snippets to assist programmers in coding the IMS calls
- Application asset understanding with Rational Asset Analyzer
- **Solutions for IMS Application Development Environment on z/OS**
- Running development and unit test on an x86 workstation with Rational Development and Test for System z (RD&T) – renamed in April 2012; previously RDzUT
- Running z/OS on an x86 PC running Linux
- Virtualization of multiple IMS environments into one IMS on z/OS
- The Standardware COPE solution lets IMS development teams virtualize their IMS test environments, with potential savings in test resources, process time, and set-up skills, and without associated application program changes
IMS Application Support – Design & Development …
- **Support from testing and problem determination tools**
- IMS based: BTS Tool – testing in batch or BMP mode instead of online
- z/OS based: Debug tool, File Manager, Fault analyzer
- Look at “IMS Explorer for Dev” extension for IMS TM in the future
- **Easily integrated into the Services and Processes Oriented world**
- IMS as service provider – IMS Inbound solutions
- IMS as service requestor - Able to call out to a service using native DLI calls – IMS Outbound solutions
- Generation of business events
- **Supports integration into SOA development models**
- Bottom up: reuse business logic already implemented in existing IT application systems
- Meet-in-the-middle: create an integration layer to accommodate new business needs with existing services – support for all IBM ESBs: DataPower, WESB, WMB
- Top-down: write new services based on IMS transactions – Tooling to be provided to facilitate IMS application development from WSDL definition
IMS Application Support – Best Practice for App. Structure
- **3 Layers** (sketched after this list)
- One with IMS TM calls
- One to analyze input message and decide what services to call (no IMS knowledge)
- One with all the services (no IMS knowledge)
- Unchanged application when changing system infrastructure and middleware – application investment protection
- No need to recompile an IMS application to move from one IMS version to another
- But recompiling may take better advantage of HW & z/OS enhancements
- Online change for application components
- Dynamic resource definition
- Easy implementation of a new version or maintenance of an application
- Could be isolated in some processing regions
- No need to stop IMS processing
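A minimal Java sketch of this layering, with illustrative names; only layer 1 would contain IMS TM calls, so layers 2 and 3 can be reused unchanged on other middleware.

```java
// Sketch of the three-layer structure; all names are illustrative.
public final class ThreeLayerSketch {

    // Layer 3: pure services, no IMS knowledge
    interface QuoteService { String quote(String symbol); }

    // Layer 2: analyzes the input message and routes it, no IMS knowledge
    static String dispatch(String inputMessage, QuoteService quotes) {
        String[] parts = inputMessage.trim().split("\\s+");
        return switch (parts[0]) {
            case "QUOTE" -> quotes.quote(parts[1]);
            default      -> "UNKNOWN COMMAND";
        };
    }

    // Layer 1: the only place IMS TM calls would appear
    public static void main(String[] args) {
        QuoteService quotes = symbol -> symbol + "=101.5"; // stub service
        String input = "QUOTE IBM";                  // would arrive via GU IOPCB
        System.out.println(dispatch(input, quotes)); // would leave via ISRT IOPCB
    }
}
```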
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
External Resource Manager Access – On same z/OS
- **Access to the 2 IBM z/OS DBMS, IMS DB and DB2**
- Efficient data management capabilities
- Support for “Data Sharing” in a z/OS sysplex environment
- A CF cache structure can be used to store data, reducing the need for disk read I/Os
- Support for “Data-In-Memory” - 64 bit support for IMS DEDB and DB2
- **Access to “Master Data” through the MDM Server “Query” Connect**
- InfoSphere MDM Server offers a high-performance, highly scalable foundation for accessing master data, with several deployment options (server and/or data can be distributed or on z/OS)
- When the data is in DB2 for z/OS, a COBOL adapter enables COBOL programs to access Master Data Management Server services through the MDM Server central transaction server (for update requests) and through the MDM Server “Query” Connect (for read-only requests)
- **Access to Messaging Systems**
- IMS has an embedded queuing mechanism based on the IMS API.
- IMS applications can also use the MQ API to access the local MQ queue manager. Queues can be defined as local or remote inside this queue manager.
- **Access to services with WOLA - WebSphere z/OS Optimized Local Adapters**
- IMS applications can use the WOLA API to call applications in WAS on z/OS using cross-memory communication
- Available for transactional workload and batch workload
- **Access to local “Business Rules” with WebSphere Operational Decision Management (WODM)**
- ILOG Rules for COBOL can be used to develop rules as COBOL subroutine to be included in the IMS transaction UOW
- ILOG zRules will be able to call Java-based rules from a COBOL IMS application without complicated development
# IMS DB in Perspective
## Native Quality of Services
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>High Capacity</td>
<td>HALDB & DEDB</td>
</tr>
<tr>
<td>High Availability</td>
<td>IMS Data Sharing</td>
</tr>
<tr>
<td>Performance without CPU extra cost</td>
<td>1/2 the MIPS and 1/2 the DASD of relational</td>
</tr>
</tbody>
</table>
## Application Development
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Multi-language AD support</td>
<td>COBOL, PL/I, C, …, Java</td>
</tr>
<tr>
<td>XML Support</td>
<td>Decomposed or Intact</td>
</tr>
<tr>
<td>Java SQL support (JDBC)</td>
<td>IMS Java</td>
</tr>
<tr>
<td>Access from CICS and IMS applications, from Batch</td>
<td>IMS since early days</td>
</tr>
<tr>
<td>Open Access and Data Integration</td>
<td>DRDA Universal Driver with IMS 11 Open Database</td>
</tr>
</tbody>
</table>
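To make the "Java SQL support (JDBC)" row concrete, here is a hedged sketch of querying an IMS database over JDBC. The URL shape follows the IMS Universal JDBC driver (type-4, through IMS Connect), but the host, port, PSB, PCB, and segment names are assumptions for illustration.

```java
// Hedged sketch: SQL access to an IMS database via JDBC.
// Connection details and names below are illustrative assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class ImsJdbcSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:ims://zoshost:5555/MYPSB";  // via IMS Connect
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT LASTNAME FROM PCB01.CUSTOMER")) { // segment as table
            while (rs.next()) {
                System.out.println(rs.getString("LASTNAME"));
            }
        }
    }
}
```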
## Data Management
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Basic free utilities for reorganization and recovery</td>
<td>Included in IMS Core product</td>
</tr>
<tr>
<td>Advanced Space Management Capabilities</td>
<td>DFSMS family</td>
</tr>
<tr>
<td>Health Check</td>
<td>Pointer validation & repair</td>
</tr>
<tr>
<td>Backup and Recovery Advanced Solutions</td>
<td>IMS Tools</td>
</tr>
<tr>
<td>Reorganization for better performance</td>
<td>IMS Tools</td>
</tr>
</tbody>
</table>
## Enterprise Data Governance
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compression and Encryption</td>
<td>IMS Tools – Guardium Tools</td>
</tr>
<tr>
<td>Audit for every access</td>
<td>IMS Tools – Guardium Tools</td>
</tr>
<tr>
<td>Data Masking</td>
<td>OPTIM Family</td>
</tr>
<tr>
<td>Creation of Test databases</td>
<td>OPTIM Family</td>
</tr>
</tbody>
</table>
## Information Integration & Data Synchronization
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fast integration in Web 2.0 applications</td>
<td>IMS 11 Open database</td>
</tr>
<tr>
<td>Data Federation</td>
<td>InfoSphere Classic Federation</td>
</tr>
<tr>
<td>Replication to IMS – Towards Active / Active solution</td>
<td>InfoSphere IMS Replication</td>
</tr>
<tr>
<td>Replication to Relational</td>
<td>InfoSphere Classic Replication Server & Classic CDC</td>
</tr>
<tr>
<td>Publication of DB Changes</td>
<td>InfoSphere Classic Data Event Publisher</td>
</tr>
</tbody>
</table>
## Operational Business Analytics & Reporting
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Data Federation</td>
<td>InfoSphere Classic Federation</td>
</tr>
<tr>
<td>Reporting and predictive analytics</td>
<td>COGNOS & SPSS</td>
</tr>
</tbody>
</table>
External Resource Manager Access – On a different environment (z/OS or distributed)
- **Access to DB2 LUW via DB2 z/OS as a gateway**
- Transactionality preserved
- **Access to Services**
- IMS provides numerous solutions for accessing services from IMS applications
- Asynchronously or synchronously, using SOAP, JCA, JMS, or the WOLA API
- Asynch: IMS API (ISRT ALTPCB), MQ API, and also the APPC API or TCP/IP calls with IMS Connect
- Synch (not in 2PC scope): IMS API (ICAL), MQ API; also APPC/IMS (2PC scope) and the WOLA API (2PC scope soon)
- **Access to remote “Business Rules”**
- ILOG Rules Execution Server provides services that can be called by IMS application as described above.
- **Access to Messaging Systems**
- IMS applications can use the MQ API to access the local MQ queue manager, which then communicates with any MQ manager.
- Remote queue managers can communicate with IMS TM using either the MQ OTMA bridge or the MQ Trigger Monitor mechanism.
- **Access to Event Manager**
- An event message can be created by the IMS application based on data included in the IOPCB, on database content, or on application logic
- The event message is sent via IMS callout solutions using the IMS API and the IMS SOAP Gateway business event support, or using the MQ API
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
IMS Operation and System Management
- **Centralization of messages for the whole IMS environment**
- One log for IMS system, TM and DB activity
- Tool to simplify log visualization for analysis and debugging purposes
- **Automated operator interface based on simple IMS calls to submit commands, receive command output and monitor messages**
- AO application programs and exit routines
- **Capability to implement a SPOC (Single Point of Control) for several IMS environments**
- Provides a simple front-end interface for an IMSplex
- Allows commands to be routed to one or more IMS systems and retrieves results
- Based on the IMS Common Service Layer (CSL)
- Keeps track of resources and provides an efficient mechanism for inter-address-space communications
- **Dynamic resource definition**
- For VTAM terminals, applications and databases
- **Enhanced solutions from different vendors – IBM & ASG, BMC, CA**
- IMS monitoring, IMS system management, …
Agenda
- z/OS Transaction Management
- IMS Transaction Manager Positioning
- Robust and efficient product architecture
- Universal Interoperability with network and applications
- IMS Application Support
- External Resource Manager Access
- IMS Operation and System management
Conclusion
The Message
- **IMS continues to be a premier server with architected standard interfaces**
- New products and tools from a variety of vendors provide access to IMS transactions and data
- **Our goal is to leverage IMS as an integral part of the enterprise in the evolving business world through**
- Addition of support for complementary standards surrounding IMS connectivity, data representation, and application development
- **And to allow you to realize the promise of building the IT for the Future**
- Simplify the business environment
- Respond to market changes quicker and at less cost
As the world’s largest business software company, IBM is helping organizations of all sizes tackle their most important business needs.
*IBM solutions are built on a core set of software capabilities.*
<table>
<thead>
<tr>
<th>Need</th>
<th>Capabilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Turn information into insights</td>
<td>- Business Analytics<br> - Data Management<br> - Big Data<br> - Data Warehousing<br> - Enterprise Content Management<br> - Information Integration and Governance</td>
</tr>
<tr>
<td>Deepen engagement with customers, partners and employees</td>
<td>- Social Collaboration<br> - Unified Communications<br> - Web Experience<br> - Commerce<br> - Enterprise Marketing Management<br> - Smarter City Operations</td>
</tr>
<tr>
<td>Enable the agile business</td>
<td>- Business Process Management<br> - Connectivity, Integration and SOA<br> - Application Infrastructure</td>
</tr>
<tr>
<td>Deliver enterprise mobility</td>
<td>- Mobile Development and Connectivity<br> - Mobile Management and Security</td>
</tr>
<tr>
<td>Accelerate product and service innovation</td>
<td>- Application Lifecycle Management<br> - Complex and Embedded Systems<br> - Enterprise Modernization</td>
</tr>
<tr>
<td>Optimize IT and business infrastructure</td>
<td>- Cloud and IT Optimization<br> - Asset and Facilities Management<br> - Enterprise Endpoint Management</td>
</tr>
<tr>
<td>Manage risk, security and compliance</td>
<td>- Identity and Access Management<br> - Data Protection<br> - Application Security<br> - Infrastructure Protection<br> - Security Intelligence and Compliance Analytics</td>
</tr>
</tbody>
</table>
IMS Middleware Positioning with IBM software capabilities
- **Turn Information into Insights**
- DB2 and IMS Operational Data
- Data Privacy
- Auditing
- Federation & Publication & Replication
- Master Data Management
- **Enable the Agile Business**
- CICS Transactions
- IMS Transactions
- WMQ
- WAS
- ESBs
- Business Processes
- Service Repository
- Business Rules & Event
- **Deepen engagement with customers, partners and employees**
- Portal Access
- Virtualized Apps in Cloud
- Collaborative Dev
- **Deliver enterprise mobility**
- Apps Connectivity
- Data Connectivity
- People-centric Processes
- **Accelerate Product and Service Innovation**
- Apps Dev
- Asset Analyze & Clean & Simplify
- Collaborative Dev
- IMS Explorer for Dev
- Compilers
- Business Processes
- **Optimize IT and Business Infrastructure**
- Availability
- Flexibility
- Scalability
- Operational Effectiveness
- Capacity Planning
- Chargeback
- z/OS & zEnterprise
- Parallel Sysplex
- E2E Workload Manager
- E2E Workload Scheduler
- E2E System management
- E2E Application discovery
- **Manage Risk, Security, and Compliance**
- Reliability
- Make Visible
- Control
- Automate
- De-duplication
- Auditing
- Data Privacy
- System z & z/OS Security Server
- Crypto solutions
[Figure labels: IMS DB, Universal Driver, Business Analytics, DB2 DW + IDAA + IMS DB, Recent Features]
References
- ibm.com/ims
- Redbooks
- IMS 12 – SG24-7972
- Powering SOA Solutions with IMS - SG24-7662
- Enabling z/OS Applications for SOA - SG24-7669
A processing system for distributed multi-tier applications is provided. The system includes a server component that executes a replica of a client-side application, where a client component executes the client-side application. The client component captures events from the client-side application and transmits the events to the replica to validate the computational integrity of the application.
[Drawing sheet, FIG. 8 (flowchart): 1. generate client replica; 2. monitor client events; 3. send message indicating client activity; 4. execute client events via replica; 5. compare client message with message generated by replica. FIG. 9.]
AUTOMATICALLY SECURING DISTRIBUTED APPLICATIONS
BACKGROUND
[0001] Web applications are becoming increasingly distributed, marked by the emergence of popular AJAX (Asynchronous JavaScript and XML) applications such as Hotmail, Google Maps, Facebook, and many others. A typical multi-tier AJAX application consists of a server component, implemented in Java J2EE or Microsoft .NET for example, and a client-side component executing in the browser. The resulting application is more performant and responsive, since computation is moved closer to the client, thus avoiding unnecessary network round trips. Unlike a computation performed entirely on the server, however, when a portion of the code is moved to the client, the overall computation can no longer be trusted.
[0002] Indeed, a malicious client can easily manipulate data that resides in, and code that runs within, the browser using one of many readily available data tampering or debugging tools. For example, consider a JavaScript-based shopping cart within a typical e-commerce retail site such as Amazon.com that allows the user to add items, adjust their quantities, add coupons, compute the shopping cart totals, and so forth. When run on the client, this application can be compromised in a variety of ways. For instance, coupon validation checks can be dodged, allowing the user to reduce the total. Even simpler, the total computation can be compromised to set the total to an arbitrary, potentially even negative, amount.
[0003] Due to the possibility of these attacks, almost every action in a typical shopping cart application today requires a round trip to the server, the latency of which can be quite noticeable, especially on mobile or long-distance connections. For non-malicious users, who constitute the majority, this unnecessary precaution leads to a much less responsive user experience. Moreover, the developer of the distributed application currently is responsible for splitting the application in a manner that places all security-sensitive operations on the server. While some language-based approaches have recently been proposed to address this problem, these techniques still require a great deal of developer involvement, making them difficult to use for existing large-scale projects.
SUMMARY
[0004] The following presents a simplified summary in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview, nor is it intended to identify key/critical elements or to delineate the scope of the various aspects described herein. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0005] A distributed execution system is provided that employs replicated application execution to automatically preserve the integrity of distributed computations between client and server applications. The system replicates a copy of client-side computations on a trusted server tier and captures user events such as keyboard or other command inputs (e.g., text inputs from a cell-phone client application). The captured user-initiated events are transferred to an abstract replica of the client (operated at the server) for execution, where the system observes results of the computation, both as computed on the client side and on the server side, utilizing the replica of the client-side code. Any discrepancy between server-side execution via the replica and client execution results that are sent via messages is flagged as a potential violation of computational integrity. Most existing approaches for ensuring integrity of client computation involve the client sending a proof of certain properties that its execution state holds. The server efficiently validates these proofs, convincing itself of the integrity of the client execution. For instance, the client could periodically send over its stack traces to the server, and the server could check the traces for any properties it desires. These techniques only provide a partial enforcement of the integrity of client execution. The distributed execution system provides a more complete solution, where integrity is guaranteed under a reasonable set of design assumptions.
[0006] To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings. These aspects are indicative of various ways which can be practiced, all of which are intended to be covered herein. Other advantages and novel features may become apparent from the following detailed description when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 is a schematic block diagram illustrating a system for validating security of remote applications.
[0008] FIG. 2 is a block diagram that illustrates an example tier-split application.
[0009] FIG. 3 illustrates an example security validation system.
[0010] FIG. 4 illustrates example event transfer diagrams.
[0011] FIG. 5 illustrates audit logs for a security checker.
[0012] FIG. 6 illustrates an example threat model for a security validation system.
[0013] FIG. 7 illustrates miscellaneous considerations for a security validation system.
[0014] FIG. 8 illustrates an exemplary process for verifying security of remote applications.
[0015] FIG. 9 is a schematic block diagram illustrating a suitable operating environment.
[0016] FIG. 10 is a schematic block diagram of a sample-computing environment.
DETAILED DESCRIPTION
[0017] Systems and methods are provided for validating security of remote applications. In one aspect, a distributed processing system for remote applications is provided. The system includes a server component that executes an abstract replica of a client-side application, where a client component executes the client-side application. It is noted that the replica only has to mimic the relevant details, but can omit many others such as the actual graphical rendering of the client-side user interface on the server, for example. The client component captures events from the client-side application and transmits the events to the replica to validate security of the client-side application. The events can be generated by a user or an application component. Security can be validated by comparing execution messages or observed states between the replica and the client side application.
[0018] As used in this application, the terms “component,” “application,” “event,” “replica,” and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in
execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
[0019] Referring initially to FIG. 1, a system 100 is illustrated for validating security of remote applications. The system 100 provides security for applications that are split between a client tier, operated by a client component 110 that executes a remote client application, and a server tier 114. The server tier 114 employs a server component 120 that operates in conjunction with the client component 110 to service the overall application that has been segmented between tiers. As the client component 110 is executing the client application, events 130 are monitored and transmitted back to an abstract replica 140 that mimics operation of the client application at the server tier 114. The events 130 are typically user generated and can come from a plurality of sources such as keyboards, mice, voice commands, touch screen commands, biometric operations, and so forth, and are generally employed to control or direct the client application. It is noted that although the events 130 are typically generated by a user, they can also be machine- or component-generated. As the client application executes on the client component 110, a message 150 is transmitted to a checker 160. The message 150 is constructed from actions of the client component 110 as it executes the client application and responds to the events 130.
[0020] Concurrently, as the events 130 are transmitted to the replica 140, the replica executes as if it were the client application. The replica generates a subsequent message and submits the message to the checker 160. The checker 160 then compares the message generated by the replica 140 and the message 150 generated by the client component 110. If the messages are the same (or within some predetermined threshold), then the checker can notify the client and the server that security is valid. If the respective messages are different, the checker can notify the client and the server that a security error has been detected. If an error is detected, several actions can occur. Error notifications can cause the client and the server to shut down. In another aspect, a re-boot message could be transmitted to the client and the application could be restarted, where further checks could be employed by the checker to determine if security is valid. In yet another aspect, the client component 110 could be notified that a previous message checked invalid and that a previous section or portion of the application would need to be re-executed. As can be appreciated, a plurality of differing actions could occur upon error detection.
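The checker's comparison step can be sketched as follows; this is a minimal illustration in Java, and the class names, strict-equality criterion, and verdict handling are assumptions rather than the patented implementation.

```java
// Sketch of the checker from [0020]: compare the RPC produced by the real
// client with the one produced by its server-side replica. Illustrative only.
import java.util.Objects;

public final class CheckerSketch {

    record Rpc(String method, String payload) {}

    enum Verdict { VALID, SECURITY_ERROR }

    static Verdict check(Rpc fromClient, Rpc fromReplica) {
        // any discrepancy is flagged as a potential integrity violation
        return Objects.equals(fromClient, fromReplica)
                ? Verdict.VALID
                : Verdict.SECURITY_ERROR;
    }

    public static void main(String[] args) {
        Rpc m      = new Rpc("checkout", "total=99.90");  // from the client
        Rpc mPrime = new Rpc("checkout", "total=99.90");  // from the replica
        System.out.println(check(m, mPrime));             // VALID
        Rpc forged = new Rpc("checkout", "total=-1.00");  // tampered client
        System.out.println(check(forged, mPrime));        // SECURITY_ERROR
    }
}
```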
[0021] When a portion of application code is moved to the client, a malicious user can easily subvert the client side of the computation and potentially jeopardize sensitive server states. The system 100 employs replicated execution to automatically preserve the integrity of a distributed computation. The system 100 replicates an abstract replica of the client-side computation on the trusted server tier 114. Client-side events 130 are transferred to the replica 140 of the client for execution. The system 100 observes results of the computation, both as computed on the client-side and on the server side using the replica 140 of the client-side code. Any discrepancy is flagged as a potential violation of computational integrity. It is noted that checking may occur online, e.g., concurrently when the application is executed, or after the fact, as part of security auditing. In general, substantially any segmented application is supported for security verification and validation by the system 100.
[0022] A distributed Web application can be highly responsive because of client-side execution, but the results of this execution do not have to be trusted because they are replayed on the server via the abstract replica 140. Thus, the integrity of the overall distributed computation is the same as if the application had been run entirely on the server 120. The system 100 can even lead to better performance since the application is replicated on the server, which typically runs faster than the client. Remote procedure calls (RPCs) from the client can be anticipated and delivered to the client browser ahead of time, leading to low-latency RPCs and further enhancements in responsiveness. The system 100 capitalizes on a recent trend towards distributing compilers such as GWT, Links, Hilda, Swift, and Volta, for example. Distributing compilers allow both the client and the server portions of the distributed application to be developed concurrently. As will be described in more detail below, the system 100 can be integrated with the Volta compiler, a distributing compiler that tier-splits .NET applications and translates them into JavaScript as needed. Integration with Volta significantly simplifies the process of code replication since the distributed application is given to the Volta compiler at the time of compilation. The system 100 also integrates into the RPC infrastructure of Volta, making the process of communication between remote system components on different tiers convenient. It is to be appreciated that Volta and the other example applications described herein are but one example of a distributed application and of the means of creating one; substantially any application that can be segmented between remote computing systems, and any way of creating such an application, is within the scope of the claimed subject matter.
[0023] Referring now to FIG. 2, an example tier-split application 200 is illustrated. An application 200 is split into a server-side component S 210 and a client-side component C 220. The client-side component C can be translated into JavaScript C' to be run within a browser. While the system approach described above with respect to FIG. 1 can be used for general AJAX-based Web applications (or others), integrating with Volta, for example, provides a number of clear advantages. As illustrated in FIG. 2, the Volta compiler is a distributing compiler that takes a .NET application as input and tier-splits it into a client and a server component by replacing appropriate cross-tier method calls by AJAX RPCs. Data is serialized before being sent to the server and deserialized on the server when received. The client-side component is translated into JavaScript for execution within a standard browser.
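The following Java sketch shows the general shape of the rewrite a distributing compiler performs, replacing a cross-tier method call with a serializing RPC stub. It is not Volta's actual output; the interface, stub, and stubbed transport are assumptions for illustration.

```java
// Illustration of tier-splitting: a cross-tier call becomes an RPC stub.
public final class TierSplitSketch {

    interface CartServer { double total(String cartId); }  // lives on the server

    // Client-side stub the compiler would substitute for the direct call
    static final class CartServerStub implements CartServer {
        @Override public double total(String cartId) {
            String reply = httpPost("/rpc/CartServer/total", cartId); // serialize + send
            return Double.parseDouble(reply);                         // deserialize
        }
        private static String httpPost(String path, String body) {
            return "42.00";  // stubbed transport so the sketch is self-contained
        }
    }

    public static void main(String[] args) {
        CartServer cart = new CartServerStub();   // was: a direct in-process object
        System.out.println(cart.total("cart-7")); // now an AJAX-style RPC
    }
}
```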
[0024] Volta generally requires the developer to declaratively define which portion of the application runs on the server 210 and which part on the client 220, with the help of class-level annotations. Tier-splitting is performed subsequently as a .NET byte-code rewriting pass that reads the placement annotations, introducing RPCs as needed. To implement the system, the Volta tier-splitter can be augmented to perform the additional rewriting steps described below. Base Volta libraries can also be augmented to provide support for browser emulation. As noted previously, Volta provides one possible implementation of a tier-split application, but other types of implementations are possible.
Turning to FIG. 3, an example security validation system 300 is illustrated. The system 300 enhances the Volta application described above in the following manner: capture user events (the system 300 captures user events on a client C' 310 within a browser); transmit events to the server for replay at 320 (events are transmitted to the client's replica C at 330 for replay); and compare server and client results at 340. A server component S 350 is augmented with a checker 340 that compares the arriving RPCs m 360 and m' 370, received from the client C' 310 and the server-based client replica C 330, respectively, monitoring for discrepancies.
In general, the system 300 relies on re-execution to produce the correct result within C 330 based on user events that it receives, effectively ignoring malicious data changes that occur on the client 310. If the malicious changes result in different RPCs issued to the server 350, which constitutes the observable state, the checker 340 will flag a potential exploit and terminate that client's connection.
Fragment (A)
// a custom button handler
this.button.Click += delegate {
    var name = this.userName.Value;
    var pass = this.passWord.Value;
    Login l = new Login();
    l._attempts(name, pass);  // call target reconstructed; the source fragment is garbled here
};
Fragment (B)
// our rewriter adds the following handler
this.button.Click += delegate {
    // capture the event
    HtmlEventArgs evt = this.Window.Event;
    // read target object ID
    var id = evt._ObjectID;
    // event type: keyboard, click, etc.
    var type = evt.Type;
    // extra event-specific data
    var data = serializeData(evt);
    // enqueue event for transfer
    _ClientManager.enqueueEvent(type, data, id);
};
In general, the system 300 can be implemented as an optional addition to the Volta tier-splitting process that takes the original application and produces S 350 and C 330, then optionally translating C 330 into C' 310 that runs in JavaScript. It is noted that event capture can be performed with the help of a cooperating JavaScript interpreter or by introducing additional browser support. In the absence of such, event capture can be implemented differently. It is to be appreciated that the event capture examples shown and described herein are but one example implementation, and various others are possible within the scope of the claimed subject matter. Integrating with the Volta tier-splitter allows the system to be implemented as several simple IL-to-IL byte-code rewriting passes. From the standpoint of the developer, enabling the system on an existing Volta application is as straightforward as ticking a checkbox in a Volta project configuration.
Prior to being translated to JavaScript, the client binary C 330 generated by the tier-splitter is rewritten to capture client-side user events. In the system 300, events 320 are classified into two types: primitive events and custom events. Primitive events include each key press and mouse click event, regardless of whether the application actually has registered any handlers for them. Custom events are those that the application has registered explicit handlers for. A typical handler for a button click event is shown in code Fragment (A) above. The events 320 are intercepted on the client 310 and relayed to C 330 for replay.
Tracking primitive events 320 helps maintain the state of elements such as text areas and radio buttons, for example. For instance, each keystroke a user types into an HTML form can produce a separate keyboard event that is intercepted by the system and transferred to the replica 330. Note that not all JavaScript events that occur on the client have to be processed, as doing so would involve listening to all MouseMove events, for example, which occur every time the user repositions the mouse pointer. This may be prohibitively expensive.
Primitive events 320 can be intercepted by registering a handler for each on the HTML BODY element. Since in the HTML event model, all events bubble up (or propagate) to the top-level document BODY element, it is a convenient point to intercept them. To intercept custom events 320, the system registers an extra handler shown in pseudo-code in code Fragment (B) above for each event of interest.
System-generated event handlers queue details about the event into an application-specific queue. In addition to the event type (key press, key release, and so forth), the serialized event details include the key code for keyboard-related events, mouse button information for mouse events, and so forth. Finally, the unique identifier corresponding to the object which raised the event can also be sent over.
Referring now to FIG. 4, example event transfer diagrams are illustrated. To reduce the number of round trips to the server, which is likely to become a bottleneck on high-latency connections, events are relayed to the server in batches. Diagrams 410 and 420 show two scenarios of how events may be batched on the client and transmitted to the server. There is a natural trade-off between eager and lazy event transfer for example. As diagram 410 demonstrates, sending events eagerly results in excess network usage, which may be costly on a mobile connection, for instance, but can ensure speedy replication on the server. On the other hand, batching events longer as in the diagram 420 results in minimal network usage, but can delay the integrity checking and resulting server updates and responses. To resolve this trade-off between responsiveness and network usage, a simple middle-path strategy can be adopted. For efficiency, events can be batched until a queue reaches the maximum size of a
network packet, in which case they are sent. Otherwise, when there is an RPC, events in the queue are flushed to the server.
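A minimal Java sketch of this middle-path flushing policy; the packet-size threshold and the transport are illustrative assumptions.

```java
// Sketch of the batching policy: queue events until the batch reaches a
// packet-sized threshold, and also flush whenever an RPC goes out.
import java.util.ArrayList;
import java.util.List;

public final class EventBatcherSketch {
    private static final int MAX_BATCH_BYTES = 1400;  // roughly one packet

    private final List<String> queue = new ArrayList<>();
    private int queuedBytes = 0;

    void enqueue(String serializedEvent) {
        queue.add(serializedEvent);
        queuedBytes += serializedEvent.length();
        if (queuedBytes >= MAX_BATCH_BYTES) {
            flush();                  // eager send once a packet is full
        }
    }

    void onRpc() { flush(); }         // piggyback pending events on RPCs

    private void flush() {
        if (queue.isEmpty()) return;
        System.out.println("sending batch of " + queue.size() + " events");
        queue.clear();
        queuedBytes = 0;
    }

    public static void main(String[] args) {
        EventBatcherSketch b = new EventBatcherSketch();
        b.enqueue("keypress:a");
        b.onRpc();  // flushes the pending event along with the RPC
    }
}
```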
[0033] Referring to FIG. 5, audit logs for a security checker are illustrated. The system 300 described above modifies the server binary S to receive and properly handle events arriving from the client and relay them to the client replica C for replay. Events are de-serialized from the wire before being delivered to C. The system intercepts RPCs that are received from the JavaScript client and the replica and records them into audit logs 510 and 520. By default, the system waits until it receives and compares RPCs m and m'. Only when they are equivalent does the runtime relay the RPC call to the application server code. The return response from the server is again intercepted as a string at the HTTP level. Copies of the response are relayed to both the client replica C and the actual client C' over the network. Note that lock-step execution is not the only option. Alternatively, the system could allow the server-side client replica C to move ahead, by relaying m' to the server and sending back the actual response. When m arrives, the server can confirm its equivalence with m'. This is a likely scenario with over-provisioned servers and relatively slow clients.
[0034] An alternative approach consists of keeping audit logs for messages arriving from C and C' and performing periodic cross-checking. Moreover, if RPCs are large, sending the entire RPCs is unnecessary: to save bandwidth, simply compute Message Authentication Codes (MACs) and send them over. Since there could be multiple clients connected to the same server, each client replica C is executed in its own AppDomain, a lightweight process-like abstraction in the .NET runtime. At runtime, the system maintains a separate AppDomain associated with each user session, and looks it up when a batch of events is received from the client. An advantage of using separate AppDomains is memory isolation: each uses its own heap, loads its own copy of dynamically linked libraries, and maintains its own copy of global data structures. Moreover, cross-AppDomain communications are cheaper than inter-process communication in general, as they do not require a process context switch, and AppDomains can share DLLs.
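The MAC option above can be sketched with the standard javax.crypto API; the payload, key handling, and comparison shown are illustrative assumptions.

```java
// Sketch of the bandwidth-saving option: send an HMAC of each RPC rather
// than the full payload; key distribution is out of scope here.
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class RpcMacSketch {
    static String mac(byte[] key, String rpcPayload) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = hmac.doFinal(rpcPayload.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(tag);
    }

    public static void main(String[] args) throws Exception {
        byte[] sessionKey = "demo-session-key".getBytes(StandardCharsets.UTF_8);
        String m      = mac(sessionKey, "checkout(total=99.90)");  // client side
        String mPrime = mac(sessionKey, "checkout(total=99.90)");  // replica side
        System.out.println(m.equals(mPrime));  // true: the RPCs agree
    }
}
```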
[0035] Proceeding to FIG. 6, an example security threat model 600 is illustrated. At 610, data manipulation threats are considered. The most obvious kind of attack against a distributed Web application involves manipulation of data that is sent to the server. As in the shopping cart example, where the cart total could be forged easily, any piece of data that is transferred to the server can be easily manipulated within the browser using one of many readily available data tampering and debugging tools. Moreover, the integrity of data may also be compromised on the wire by a man-in-the-middle attack. Not only can a malicious client change existing data before it is sent over to the server, it can also choose to manufacture new messages. Considering the interface the server exposes as a set of commands, the client may choose to "drive" the server by invoking them out of order, potentially violating internal application logic.
[0036] Protection scheme for data manipulation: As mentioned above, the system uses re-execution to produce the correct result within the replica C based on user events that it receives, effectively ignoring malicious data changes that occur on the client. If the malicious changes result in discrepancies in the RPCs, this can cause the system to flag a potential exploit.
[0037] At 620, code manipulation is considered. The code sent over to the client can be easily edited within the browser to produce a variety of undesired effects. For instance, consistency or input validation checks can easily be removed, which is why these checks have traditionally been relegated to the server, thus making even the benign users incur a round-trip overhead. In a game application, for example, the user may manipulate the code to make it possible to circumvent the rules of the game. Often, these changes are as simple as replacing the conditional of an if statement with true. In a language as dynamic as JavaScript, code changes may affect not only the current application, but others running within the same interpreter. A prime example of this is the prototype hijacking vulnerability, where a malicious widget in a mash-up overrides the Array constructor, thus allowing it to snoop on any of the other widgets.
[0038] Protection scheme for code manipulation: Note that the system does not try to prevent code tampering in general; indeed, adding a semicolon that does not change the program semantics cannot be detected. However, the system prevents code modifications that result in different RPCs being issued by the client.
[0039] At 630, script injections and JavaScript worms are considered. While the threats above deal with the case of a malicious user, the system can actually help detect situations when benign users are affected by a malicious environment. Two examples of such a situation are injection attacks such as cross-site scripting and JavaScript worms, both of which allow potentially malicious actions to be executed on behalf of an innocent user. As an example, consider an auction site such as eBay.com where users are either buyers or sellers. A malicious seller may embed JavaScript in the item description page so that when the item description page is viewed, a bid would be placed automatically on behalf of the viewer. Another common case is a worm on a social networking site, such as the Samy worm on MySpace.com. When a particular page was viewed, a hidden embedded malicious script would add the viewer as Samy's MySpace friend.
[0040] Protection scheme for script injections and worms: Referring to FIG. 3 above, the replica C executes in the .NET CLR, not JavaScript, thus rendering injected JavaScript code non-executable when run within C. Thus, in the example above, the client-side component C' produces an RPC that will not even be issued by C, causing the system to observe a discrepancy.
[0041] At 640, basic security assumptions are considered. One basic assumption is that code executing on the server tier is believed to be uncompromised and trusted, whereas the client tier may be compromised. In one aspect, the event stream received from the client is a faithful representation of events that are generated by the user. If the application is running alongside malicious code in the browser that either suppresses, changes, or generates new events, there is little the system can do towards ensuring the integrity of this computation. Currently, user events are captured by instrumenting the client code, but their trustworthiness can be enhanced by modifications to existing browsers that can ensure a path from the user's keyboard and mouse to the server runtime that cannot be tampered with using JavaScript. This may be easily implemented using an extension technology such as ActiveX controls for Internet Explorer or plug-ins for Firefox, for example.
[0042] In another aspect, program execution is considered deterministic. Allowing non-determinism will lead to differences in the execution of C and C' that are not captured by the system, thus resulting in false positives. Fortunately, there is a way to "virtualize" sources of randomness, as discussed below. For instance, if a random number generator is used, the client can block its execution until it gets the random number from the server. Similarly, for a computation that accesses local time, the server component can block until the time measurement arrives from the client.
Referring now to FIG. 7, miscellaneous security considerations 700 are described. At 710, secure event capture is provided. According to one assumption, the system can faithfully capture and transfer events on the client side to the replica. Since a malicious client may attempt to suppress or change the user event stream, it is best to implement this support at the level of either the browser or the JavaScript interpreter. This situation is not unlike what happens in the case of a remote display session. Once authenticated, the remote display client can communicate with the server. If the machine running the client software has a key logger or a piece of malware installed that manipulates events destined for the remote computer, clearly the remote session will be affected by the malicious event stream.
At 720, non-determinism is considered. The reliance on deterministic execution can be removed through additional instrumentation. The following sources of non-determinism are the most common in Web applications and are discussed in turn below:
Using the Random family of functions. JavaScript exposes a random number generator through the function Math.Random. Unless additional measures are taken, the values returned by calls to this function on the client and the replica can disagree. A uniform approach to treating randomness is to perform the computation on one, "canonical" tier. In this case, instrument the client-side code C' to send the result of the call to Math.Random in the event stream, and further instrument the replica C to block until the outcome of the random call is received. When received, the result of the call is substituted in place (see the sketch after this list).
Reading and measuring time. Access to time is provided through the Date object in JavaScript. Similarly to the approach described above, access to time routines can be instrumented and the replica can be blocked until the time measured on the client is delivered to continue the computation.
Accessing third-party servers. A systematic approach to deal with accessing third-party servers is to require that these accesses be tunneled through the server. For servers in a different domain, this may be necessary anyway, because of the same origin policy in JavaScript. This allows for easy centralized access to outside data for both the replica and the client-side code. Since calls to external services are performed once, this also deals with the issue of non-idempotent calls with side-effects.
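A Java sketch of the canonical-tier treatment of randomness described in the first item of this list; the queue stands in for random values carried in the client's event stream, and all names are illustrative. The same blocking pattern applies to time measurements.

```java
// Sketch of "canonical tier" randomness: the client generates the value and
// ships it in the event stream; the replica blocks until it arrives.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public final class RandomVirtualizationSketch {
    // stands in for random values carried in the client's event stream
    private final BlockingQueue<Double> fromClient = new LinkedBlockingQueue<>();

    void onClientEvent(double observedRandom) {  // client-side interception
        fromClient.add(observedRandom);
    }

    double replicaRandom() throws InterruptedException {
        return fromClient.take();  // replica blocks for the client's value
    }

    public static void main(String[] args) throws InterruptedException {
        RandomVirtualizationSketch v = new RandomVirtualizationSketch();
        v.onClientEvent(0.7316);               // client ran Math.Random
        System.out.println(v.replicaRandom()); // replica reuses the same value
    }
}
```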
In fact, a set of small changes to the JavaScript interpreter solves the issue of event delivery and also addresses the issues of non-determinism defined above. In particular, instrumenting the Math.Random and Date routines as well as the event handlers in the interpreter provides a systematic way to treat these issues and ensures that malicious JavaScript code co-existing within the same page (which is part of the attack model) is unable to gain access to this data. This effectively makes a portion of the browser or the JavaScript interpreter part of the trusted computing base. Since event capture is performed outside of JavaScript, it can also be ensured that the overhead of this instrumentation is low. To ensure that event streams are not tampered with, standard techniques such as Message Authentication Codes can be employed. It is noted that the claimed subject matter provides the ability to virtualize client-side code execution by:
- Capturing user events
- Third-party interactions
- Compensating for non-determinism and timing.
These features and others facilitate JavaScript or .NET runtime design.
At 730, performance and scalability are considered. Other system optimizations include actively "pushing" results to the client. An advantage of the system described above is that, once computed, RPC results can be actively pushed to the client: by the time an RPC is issued on the client, its result will already be available, leading to low-latency RPCs. This demonstrates that the system not only makes the application more secure; in many cases it can also make it more responsive. The deployment strategy for the system meshes nicely with the traditional load-balancing approach to deployment of large-scale Web applications. In particular, a load balancer could be used to repeatedly direct the same user to the server where both its replica and the corresponding server threads run. Currently, this functionality is implemented in the checker, which looks up the appropriate AppDomain for a user session. Moreover, to save memory, both the server thread and the replica can be serialized on high server load for long-running sessions and then brought back from disk.
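A minimal sketch of the result-pushing optimization, in Java; the cache keying and the stubbed remote call are illustrative assumptions.

```java
// Sketch of pushed RPC results: the replica runs ahead, the server pushes
// the precomputed answer, and the client's stub finds it locally.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class RpcPushSketch {
    private final Map<String, String> pushed = new ConcurrentHashMap<>();

    // server pushes a precomputed result keyed by the anticipated RPC
    void onPush(String rpcKey, String result) { pushed.put(rpcKey, result); }

    // client-side RPC stub: use the pushed answer if present, else go remote
    String call(String rpcKey) {
        String local = pushed.remove(rpcKey);
        return (local != null) ? local : remoteCall(rpcKey);
    }

    private String remoteCall(String rpcKey) { return "slow:" + rpcKey; }

    public static void main(String[] args) {
        RpcPushSketch rpc = new RpcPushSketch();
        rpc.onPush("cart.total", "99.90");           // replica ran ahead
        System.out.println(rpc.call("cart.total"));  // answered locally: 99.90
    }
}
```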
FIG. 8 illustrates an exemplary process 800 for providing security in remote client and server applications. While, for purposes of simplicity of explanation, the process is shown and described as a series or number of acts, it is to be understood and appreciated that the subject processes are not limited by the order of acts, as some acts may, in accordance with the subject processes, occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the subject processes described herein.
Proceeding to 810, a client application replica is generated that is executable on a server that is remote from the client component or machine. As noted above, a tier-splitting application can be employed to generate the remote client application and the replica. At 820, client events are monitored and processed by the client component and by the replica. As noted previously, these can include keyboard activities, mouse activities, or substantially any input that alters the state of the remote client application. After the input events have been monitored, a message is generated that indicates how the client responded to the respective events. At 830, the message indicating client activity is transmitted to the server application. Concurrently with the client, the replica also processes the received events and generates its own execution message at 840. Proceeding to 850, the replica message and the client-generated message of 830 are compared. If the messages match, execution of the remote application can continue in a substantially unimpeded manner. If a discrepancy is detected between the messages at 850, error events can be generated. As noted previously, various responses to errors can be set up, including retries, reboots, or
prevention of further remote client activity until the source of the security violation is detected. Remote troubleshooting and guidance can be optionally generated and delivered to the user in order to help them determine the source of the respective security violation or other detected error. Alternatively, means of error recovery can be provided.
[0055] In order to provide a context for the various aspects of the disclosed subject matter, FIGS. 9 and 10 as well as the following discussion are intended to provide a brief, general description of a suitable environment in which the various aspects of the disclosed subject matter may be implemented. While the subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a computer and/or computers, those skilled in the art will recognize that the invention also may be implemented in combination with other program modules. Generally, program modules include routines, programs, components, data structures, etc. that perform particular tasks and/or implements particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods may be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, mini-computing devices, main-frame computers, as well as personal computers, hand-held computing devices (e.g., personal digital assistant (PDA), phone, watch . . . ), microprocessor-based or programmable consumer or industrial electronics, and the like. The illustrated aspects may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all aspects of the invention can be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[0056] With reference to FIG. 9, an exemplary environment 910 for implementing various aspects described herein includes a computer 912. The computer 912 includes a processing unit 914, a system memory 916, and a system bus 918. The system bus 918 couples system components including, but not limited to, the system memory 916 to the processing unit 914. The processing unit 914 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 914.
[0057] The system bus 918 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, 64-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
[0058] The system memory 916 includes volatile memory 920 and nonvolatile memory 922. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 912, such as during start-up, is stored in nonvolatile memory 922. By way of illustration, and not limitation, nonvolatile memory 922 can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 920 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
[0059] Computer 912 also includes removable/non-removable, volatile/non-volatile computer storage media. FIG. 9 illustrates, for example a disk storage 924. Disk storage 924 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 924 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 924 to the system bus 918, a removable or non-removable interface is typically used such as interface 926.
[0060] It is to be appreciated that FIG. 9 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 910. Such software includes an operating system 928. Operating system 928, which can be stored on disk storage 924, acts to control and allocate resources of the computer system 912. System applications 930 take advantage of the management of resources by operating system 928 through program modules 932 and program data 934 stored either in system memory 916 or on disk storage 924. It is to be appreciated that various components described herein can be implemented with various operating systems or combinations of operating systems.
[0061] A user enters commands or information into the computer 912 through input device(s) 936. Input devices 936 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 914 through the system bus 918 via interface port(s) 938. Interface port(s) 938 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 940 use some of the same type of ports as input device(s) 936. Thus, for example, a USB port may be used to provide input to computer 912 and to output information from computer 912 to an output device 940. Output adapter 942 is provided to illustrate that there are some output devices 940 like monitors, speakers, and printers, among other output devices 940 that require special adapters. The output adapters 942 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 940 and the system bus 918. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 944.
[0062] Computer 912 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 944. The remote computer(s) 944 can be a personal computer, a server, a router, a network PC, a work station, a microprocessor-based appliance, a peer device or other common network node and the
like, and typically includes many or all of the elements described relative to computer 912. For purposes of brevity, only a memory storage device 946 is illustrated with remote computer(s) 944. Remote computer(s) 944 is logically connected to computer 912 through a network interface 948 and then physically connected via communication connection 950. Network interface 948 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
[0063] Communication connection(s) 950 refers to the hardware/software employed to connect the network interface 948 to the bus 918. While communication connection 950 is shown for illustrative clarity inside computer 912, it can also be external to computer 912. The hardware/software necessary for connection to the network interface 948 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
[0064] FIG. 10 is a schematic block diagram of a sample-computing environment 1000 that can be employed. The system 1000 includes one or more client(s) 1010. The client(s) 1010 can be hardware and/or software (e.g., threads, processes, computing devices). The system 1000 also includes one or more server(s) 1030. The server(s) 1030 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1030 can house threads to perform transformations by employing the components described herein, for example. One possible communication between a client 1010 and a server 1030 may be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 1000 includes a communication framework 1050 that can be employed to facilitate communications between the client(s) 1010 and the server(s) 1030. The client(s) 1010 are operably connected to one or more client data store(s) 1060 that can be employed to store information local to the client(s) 1010. Similarly, the server(s) 1030 are operably connected to one or more server data store(s) 1040 that can be employed to store information local to the servers 1030.
[0065] What has been described above includes various exemplary aspects. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing these aspects, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the aspects described herein are intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
What is claimed is:
1. A system for securing distributed applications, comprising:
a server component that executes an abstract replica of a client-side application; and
a client component that executes the client-side application, the client component captures events from the client-side application and transmits the events to the replica to validate the integrity of application execution.
2. The system of claim 1, the events are generated by a user or an application component.
3. The system of claim 2, the integrity is facilitated by comparing messages or observable states computed by the replica and received from the client-side application.
4. The system of claim 3, further comprising a checker to compare the messages computed by the replica and received from the client-side application.
5. The system of claim 3, further comprising a component to generate an error event if a discrepancy is detected between messages.
6. The system of claim 1, further comprising tier-splitting components to automatically generate the client-side application and the replica.
7. The system of claim 1, the replica provides minimal functionality of the client-side application in order to compute a desired integrity checking result from an event stream.
8. The system of claim 1, the client-side application is an asynchronous interpreted programming language based web application.
9. The system of claim 8, the client-side application is communicated with via one or more remote procedure calls.
10. The system of claim 9, the client-side application is translated into an interpreted programming language for execution within a web browser.
11. The system of claim 1, further comprising an event handler inserted into the client-side application through code rewriting.
12. The system of claim 1, further comprising an event handler inserted into the client-side application through runtime modifications.
13. The system of claim 1, further comprising a component to batch events to reduce performance overhead.
14. The system of claim 1, further comprising an audit log that is generated to compare messages or observable states between the client-side application and the replica.
15. The system of claim 14, the audit log is examined online in real time, examined at a later time, or provides a sample for further analysis.
16. The system of claim 15, further comprising a method authentication component that is transmitted in lieu of a complete message inside of a remote procedure call.
17. A method to validate security of a remote application, comprising:
generating a replica of a remote client application;
executing the replica in conjunction with the remote client application;
monitoring events associated with the remote client application; and
comparing results between the replica and the remote client application to validate security of the application.
18. The method of claim 17, further comprising automatically instrumenting the remote client application to monitor user events and communications with third-party components.
19. The method of claim 17, further comprising generating a message to indicate an execution pattern for the remote client application.
20. A system for virtualizing script/bytecode execution across tiers, comprising:
means for instrumenting user events;
means for controlling third party interactions; and
means for discovering sources of non-determinism.
Towards Efficient and Verified Virtual Machines for Dynamic Languages
Martin Desharnais
National Cyber Defence Research Institute (CODE)
Universität der Bundeswehr München
Germany
martin.desharnais@unibw.de
Stefan Brunthaler
National Cyber Defence Research Institute (CODE)
Universität der Bundeswehr München
Germany
brunthaler@unibw.de
Abstract
The prevalence of dynamic languages is not commensurate with the security guarantees provided by their execution mechanisms. Consider, for example, the ubiquitous case of JavaScript: it runs everywhere and its complex just-in-time compilers produce code that is fast and, unfortunately, sometimes incorrect.
We present an Isabelle/HOL formalization of an alternative execution model—optimizing interpreters—and mechanically verify its correctness. Specifically, we formalize advanced speculative optimizations similar to those used in just-in-time compilers and prove semantics preservation. As a result, our formalization provides a path towards unifying vital performance requirements with desirable security guarantees.
CCS Concepts: • Software and its engineering → Correctness; Software verification; • Security and privacy → Software and application security.
Keywords: formalization and verification, Isabelle, semantics, dynamic typing, speculative optimizations, interpreters, just-in-time compilers, inline caching, unboxing
1 Motivation
Every day, every person with a computer or smartphone executes enormous amounts of JavaScript—knowingly or not.
Confident that the machinery executing JavaScript works correctly, we use it day in and day out. A closer look at the correctness of JavaScript virtual machines shows that this confidence is unwarranted. Through abuse of implementation errors, attackers hijack victim devices through arbitrary code execution. Recently, Google’s Project Zero published a complete series on so-called "JITSploitation" [18–20].
This should not come as a surprise, particularly as prior research has already looked at the prevalence of implementation errors in compilers [51]. Their comparison of the LLVM, GCC, and CompCert compilers provides strong evidence of the power of formalization and verification to reduce implementation errors.
To establish confidence in the JavaScript computing machinery, one would have to replicate the CompCert [30] effort for a JavaScript virtual machine. Prior research has shown that this approach is non-trivial [36]. Just-in-time compilers rely on self-modification and speculative optimizations to speed up programs. Both of these optimization techniques are at odds with the CompCert approach.
An alternative strategy to overcome these obstacles would be to sidestep just-in-time compilation and focus on interpreters instead. The expected advantages are ease of implementation, no self-modification, and no dynamic generation of native-machine code. Together, these advantages would also simplify the formalization and verification process.
But what kind of impact would such a strategy have on performance? Conventional wisdom states that interpreters are slow, and that performance requires just-in-time compilation. Prior research in interpreter optimization, however, reports remarkable and important speedups [8, 9, 13, 45, 50].
In this paper, we build on prior results in interpreter optimization to formalize and mechanically verify speculative optimizations: inline caching and unboxing. This formalization gives way to virtual machine interpreters that are both efficient and correct. Since our technique applies to all dynamic programming languages, it enables the construction of efficient and correct virtual machine interpreters for many popular languages, such as Lua, Perl, Python, and Ruby.
While these verifiably correct interpreters will not match the peak performance of their highly tuned just-in-time compiled counterparts, they offer acceptable performance for a
Figure 1. Resolving dynamic types and its impact on control flow. Squiggly lines indicate branches taken, dashed lines branches not taken. Arrows and horizontal lines indicate function entry and exit, i.e., calls and returns, respectively.
2 Background
Overhead of Dynamic Typing Figure 1 shows, on the left-hand side, the slightly simplified implementation of the add operation in JavaScriptCore, WebKit’s JavaScript implementation, which is the open source version of Apple’s Safari web browser. The dynamically-typed add operation resolves concrete type assignments according to the expected frequency. First, JavaScriptCore delegates to C++’s addition operator when both operands, \(v1\) and \(v2\), are numeric (lines 2 and 3 in Figure 1). Second, JavaScriptCore performs string concatenation, including coercion of the second operand, when the first operand is a string (lines 5–10 in Figure 1). Third, JavaScriptCore delegates implementation to \(\text{jsAddSlowCase}\) in all other cases (line 13), which is deemed “pretty uncommon” in the actual and original source code comment on line 12.
Figure 1 shows, on the right-hand side, the control flow including required branches for different operand type assignments. When both operands have type integer (\(\text{Int}\) in Figure 1, right-hand side, left column), control-flow takes the first branch and returns. When the first operand is a string (\(\text{String}\) in Figure 1, right-hand side, middle column), control flow requires at least one branch for testing against integers, plus a second branch if the second argument is not a string and needs to be coerced before returning the concatenated string. In all other cases, indicated by \(\text{Object}\) type assignments in the right column of Figure 1, the operation execution is delegated to yet another function, \(\text{jsAddSlowCase}\), which requires two branches to determine less likely type assignments.
Our formalization will be part of the next public AFP release, expected at the beginning of 2021. Before this release, use revision cb82935ea66a of the AFP development repository available at https://foss.heptapod.net/isaafp/afp-devel.
From a performance perspective, the implementation of the add operator in Figure 1 indicates the performance penalties when type assignment expectations are not met. Consider a frequently executed, tight loop with a single string concatenation:
```js
result = "";
for (i = 0; i < 100000; i++) {
  result += i;
}
```
In this example, the add operation will incur four branches to concatenate strings for *all iterations*. These branches are, however, redundant, as the type assignments of the operands for that specific occurrence of the add operation are invariant. If the string-operands case were ranked first, then none of these branches would be required, with the downside that integer operands would then suffer from the surplus type checks.
The effect of suboptimal static type-encoding in operation implementations of dynamic languages, as illustrated by the example above, has been known for decades. In 1982, Baden analyzed Smalltalk code and discovered what he termed a "dynamic locality of type usage" [3]. In their landmark paper from 1984, Deutsch and Schiffman described what was to become one of, if not the most, important optimization techniques to address this problem: inline caching [12]. In its original form, inline caching means that the virtual machine directly overwrites the target address of a call instruction in memory. So instead of calling the default routine that checks the types of all parameters, e.g., the type-generic function shown in Figure 1, one would overwrite the address in the call instruction to point to a type-dependent function, prefixed with so-called guards, i.e., type checks to ensure that the expected types were passed. As a result, a subsequent execution of the same instruction will "short-circuit" the type checks and merely guard against expected types.
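A minimal C sketch of this idea, under the simplifying assumption that the patchable call target is modeled as a function pointer rather than self-modifying machine code; all names (Value, add_generic, add_int_int) are illustrative:

```c
#include <stdint.h>

typedef struct { int tag; int64_t i; } Value;   /* illustrative boxed value */
enum { TAG_INT, TAG_STR };

Value add_generic(Value a, Value b);

/* The per-call-site cache slot: initially the type-generic routine. */
static Value (*add_cache)(Value, Value) = add_generic;

/* Type-specific fast path, prefixed with guards (type checks). */
static Value add_int_int(Value a, Value b) {
    if (a.tag != TAG_INT || b.tag != TAG_INT) {
        add_cache = add_generic;          /* guard failed: undo the caching */
        return add_generic(a, b);
    }
    return (Value){ TAG_INT, a.i + b.i };
}

Value add_generic(Value a, Value b) {
    if (a.tag == TAG_INT && b.tag == TAG_INT) {
        add_cache = add_int_int;          /* cache the resolved fast path */
        return (Value){ TAG_INT, a.i + b.i };
    }
    /* string concatenation and slow cases elided */
    return (Value){ TAG_STR, 0 };
}
```

A call site then invokes add_cache(a, b); after the first execution with integer operands, subsequent calls short-circuit the full type dispatch and only pay for the guards.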
### Overhead of Boxed Data Objects
Boxed objects wrap primitive data types, such as numbers or characters. These primitive data usually can be manipulated using efficient native-machine operations and data representations. "Boxing" primitive data involves replacing the data item with a reference to an object representing the primitive data item. The resulting boxed object can, therefore, not be directly manipulated: assume that two numbers are in boxed object representation, then a simple machine addition, would add their addresses, instead of their numeric values. To manipulate boxed objects, their wrapped primitive data need to be "unboxed" first.
Boxing and unboxing require surplus computation: to access the wrapped data, the computer must resolve the data references in the boxed objects. Additional operations, most often related to automatic memory management, must be taken into consideration as well. In Python, for example, each push operation that puts data onto the operand stack needs to adjust the object's reference count. Native-machine data, on the other hand, need not be reference counted, as they exist on their own in binary representation and need no automatic memory management.
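The following C sketch contrasts the two representations; the BoxedInt layout and its reference-count field are illustrative stand-ins, loosely modeled on CPython's object header:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative boxed integer: the number lives behind a pointer and
 * carries reference-counting metadata. */
typedef struct { long refcount; int64_t value; } BoxedInt;

BoxedInt *box_int(int64_t v) {
    BoxedInt *b = malloc(sizeof *b);   /* heap allocation; error handling elided */
    b->refcount = 1;
    b->value = v;
    return b;
}

/* Boxed addition: two pointer dereferences plus a fresh allocation. */
BoxedInt *add_boxed(const BoxedInt *a, const BoxedInt *b) {
    return box_int(a->value + b->value);   /* unbox, add, re-box */
}

/* Unboxed addition: a single native-machine instruction, no heap traffic. */
int64_t add_unboxed(int64_t a, int64_t b) { return a + b; }
```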
With unboxing, data locality is improved, as the indirection via the boxed object wrapper is eliminated. Automatic memory management operations are reduced, as these operations are only required to manage boxed objects. Overall memory consumption can be reduced because fewer objects are required. Automatic memory management techniques can be adjusted to take this into account. This effect is most pronounced with immediate memory management techniques such as reference counting.
On the other hand, boxed objects can be easily stored in the heap, and all other operations can refer to them in a uniform way using references or addresses. Boxed objects, furthermore, simplify the implementation of custom object and type systems.
### 3 Overview of the Formalization
Our formalization has three parts, each concerned with a separate programming language.
**Dyn** (Section 4) is a standard stack-based interpreter for dynamic languages; it provides a baseline for optimizations. The features provided by Dyn are intentionally kept minimal but include the most representative features found in existing virtual machine interpreters for dynamic languages: operand stack manipulation, dynamic memory manipulation, built-in operations, conditional jumps, and (possibly recursive) function calls.
**Inca** (Section 5) extends Dyn with a speculative optimization known as inline caching. This type-based optimization is embedded directly in the semantics and, thus, performed automatically at run time. If the encountered types of an inlined operation match our speculation, the optimization is said to be a *hit*. Otherwise, the optimization is said to be a *miss* and must be rolled back. To ensure the soundness of this speculative optimization, we define a relation between unoptimized Dyn and optimized Inca programs, and prove that it is a bisimilarity, meaning that the compiled program has the same behavior as the unoptimized one and vice versa. In addition, we provide a simple compilation scheme and prove its soundness and completeness.
**Ubx** (Section 6) extends Inca with operations to manipulate unboxed, native-machine data. This optimization is also type-based but proceeds in two stages. First, an optimization pass rewrites the program *ahead of time* by substituting some type-generic instructions with type-specific alternatives that directly manipulate unboxed data. Second, the semantics is extended to perform the minimum number of checks at run time to ensure that the type-specific, optimized instructions rewritten in the first stage operate on the expected types, and to roll the optimization back if needed. Again, the soundness of this optimization is based on a bisimilarity relation, this time...
between unoptimized INCA programs and optimized Ubx programs. We provide an exemplary compilation scheme, too, based on a simple static analysis, and prove its soundness. We finish by discussing the incompleteness of this compilation scheme and some possible way forward.
We strove to keep the languages highly general by abstracting over a variety of implementation considerations. The most important abstraction is concerned with built-in operations. Instead of fixing a small set of these operations (such as arithmetic and Boolean operations) and optimizing them, we instead define an algebra of operations. For any operation of the algebra’s carrier set, we can (i) determine the operation’s arity, and (ii) evaluate the operation on the given arguments. The semantics of all three languages, therefore, needs only to ensure that operations receive the correct number of arguments, and manipulate their results accordingly. By construction, this technique ensures that our formalization supports all operations, and we can mostly avoid arguing “without loss of generality.”
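As a rough C analogue of this abstraction, an operation of the algebra can be characterized by nothing more than its arity and an evaluation function; the names below are illustrative, not part of the formalization:

```c
#include <stddef.h>

typedef struct { int tag; long i; } Value;   /* stand-in for 'dyn values */

/* A member of the carrier set of the operation algebra: the interpreter
 * only ever checks the argument count and calls eval; everything else
 * about the operation stays abstract. */
typedef struct {
    size_t arity;                        /* Arity op */
    Value (*eval)(const Value *args);    /* Op op xs, defined iff |xs| = arity */
} Operation;
```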
To optimize these abstract operations, we progressively introduce more ways to manipulate the operations and check for speculative optimization opportunities. Thereby we are forced to state our formalization’s assumptions.
Notation In the following paper, different typefaces or colors are used to identify different concepts. A blue color, prefixed with a backtick, is used for abstract types, and a green color for their abstract operations, which we will call parameters from now on to distinguish them from the built-in operations of the discussed languages. In contrast, a monospace typeface is used for concrete functions, defined either in this formalization or in the Isabelle/HOL standard library.
4 Dyn: Stack-Based Interpreter for Dynamically Typed Languages
The Dyn language corresponds to a simple, stack-based bytecode interpreter to execute a dynamically typed programming language. Figure 2 shows the syntax and dynamic state of Dyn.
4.1 Syntax and Semantics
Identifiers The identifiers for variables and functions are members of the abstract types ‘var’ and ‘fun’, respectively.
Values The manipulated values belong to the abstract type 'dyn. Dyn's semantics uses two disjoint subsets to decide whether values are true or false. Let x be a value; IsTrue x identifies the former, and IsFalse x the latter. This semantics is not affected by providing support for more types. Formally:

locale dynval =
fixes
IsTrue :: 'dyn ⇒ bool and
IsFalse :: 'dyn ⇒ bool
assumes
IsTrue x ⟹ ¬ IsFalse x (* disjointness; signature reconstructed from the prose above *)
The built-in operations are members of the type ‘op’ of the locale nary_operations. Let op be an operation, Arity op evaluates to op’s arity. Op op xs evaluates op on provided arguments xs; it is defined if and only if |xs| = Arity op. Formally:
locale nary_operations =
fixes
Op :: ‘op ⇒ ‘dyn list ⇒ ‘dyn and
Arity :: ‘op ⇒ nat
Environments The environments are partial mappings from keys to values. In this paper, the important operations are Get e k, which retrieves the value associated with key k from the environment e, and Add e k v, which binds the key k with the value v in the environment e, overriding any prior bindings. Formally:
locale env =
fixes
Get :: ‘env ⇒ ‘key ⇒ ‘val option and
Add :: ‘env ⇒ ‘key ⇒ ‘val ⇒ ‘env and …
We use two environments, one for function definitions (types 'fenv, 'fun, and 'fundef) and another one to model dynamic memory (types 'henv, 'var × 'dyn, 'dyn). The parameters are prefixed with Fun and Mem, respectively.
Static representation Instructions belong to one of the following categories: manipulation of operand stack, manipulation of dynamic memory, built-in operations, conditional jumps, and function calls. Function definitions contain a list of instructions and the function’s arity. Programs contain an environment for functions, an initial memory, and an initializing function.
Dynamic states Stack frames contain the identifier of the current function, a program counter relative to the beginning of the function, and a (possibly empty) operand stack. Program states contain an environment for functions, an initial memory, and a non-empty call stack.
Loading and initial states The binary loadDyn relation associates the static representation of a program to an initial dynamic program state. More precisely, the relation initializes the program state, obtains the initializing function from the program, and transfers control to this function.
Final states The predicate finalDyn identifies final states as the ones having a call stack with a single stack frame, where the program counter points beyond the last instruction.
Operational semantics The operational semantics is defined by the small-step transition relation →Dyn between program states (Figure 3). Most instructions’ semantics corresponds to well-known, standard behavior. The dynamic memory is partitioned by variable names, which are statically encoded in the load and store instructions, and each partition may contain any number of dynamic values, which are indexed by a dynamic value taken from the operand stack.
The rule →Dyn·Op assumes that there are enough arguments on the operand stack before evaluating the operation. This assumption ensures that the function Op op is defined for the list of operands take ar Σ.
Similarly, the rule →Dyn·Fun·Call assumes that the number of operands equals the arity of the called function. A new stack frame is created and the arguments are copied to the new stack frame's operand stack. Note that a function may call itself recursively.
The rule →Dyn·Fun·End proceeds in two steps. First, the remaining values on the called function’s operand stack are interpreted as its result and its stack frame is discarded. Second, the arguments on top of the calling function’s operand
stack are replaced by the called function’s result and the program counter is incremented.
The rule \( \rightarrow_{\text{Dyn}}\text{CJUMP-TRUE} \) transfers the control flow to a position relative to the beginning of the function. Note that execution gets stuck if a jump condition represents neither true nor false.
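A compressed C rendering of these rules may help fix intuitions. Only the Op and CJump cases are shown, and the instruction encoding, frame layout, and is_true helper are illustrative assumptions, not the formalization itself:

```c
#include <stddef.h>

typedef struct { int tag; long i; } Value;                         /* 'dyn */
typedef struct { size_t arity; Value (*eval)(const Value *); } Operation;
typedef enum { I_PUSH, I_OP, I_CJUMP } Opcode;
typedef struct { Opcode op; int arg; } Instr;
typedef struct { int fun; int pc; int sp; Value stack[256]; } Frame;

int is_true(Value v);   /* IsTrue x; execution is stuck on non-booleans */

void step(Frame *f, const Instr *code, const Operation *ops) {
    Instr i = code[f->pc];
    switch (i.op) {
    case I_OP: {                                /* rule ->Dyn-Op */
        const Operation *o = &ops[i.arg];
        f->sp -= (int)o->arity;                 /* pop Arity op operands */
        Value r = o->eval(&f->stack[f->sp]);    /* Op op (take ar Sigma) */
        f->stack[f->sp++] = r;                  /* push the result */
        f->pc++;
        break;
    }
    case I_CJUMP:                               /* rule ->Dyn-CJump-True */
        if (is_true(f->stack[--f->sp]))
            f->pc = i.arg;                      /* relative to function start */
        else
            f->pc++;
        break;
    default:
        f->pc++;
        break;
    }
}
```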
5 INCA: Inline Caching
The Inca language extends Dyn with a single instruction for inline caching of operations.
5.1 Syntax and Semantics
The syntax of Inca is a proper superset of Dyn’s syntax. The only addition is an instruction to inline operations (Figure 4).
Inlined operations The built-in inlined operations are members of the type ‘opinl’ of the locale nary_operations_inl. Figure 5 illustrates the relationship between the sets ‘op’ and ‘opinl’. An operation from ‘op’ may be mapped to any number (including none) of inlined operations in ‘opinl’ with inl, which gives the most specific inlined operation for concrete operand types. This mapping may be inverted with Inl\(^{-1}\).
A typical implementation of the Inl function may start with a case analysis of the operation followed by a linear search for the most specific inlined function. Depending on the cardinality of 'op and 'opinl, this may be time consuming and should be avoided when possible. When evaluating an inline operation, it is more efficient to leverage the "dynamic locality of type usage" by using IsInl to ensure that the expected operand types and the actual operand types match.
Finally, InlOp can be used to evaluate inline operations with given arguments. It is defined if and only if Op is defined for the corresponding operation and given arguments. In that case, the inlined and the normal operations must always produce the same results. Formally:
locale nary_operations_inl = nary_operations +
fixes
InlOp :: 'opinl ⇒ 'dyn list ⇒ 'dyn and
Inl :: 'op ⇒ 'dyn list ⇒ 'opinl option and
Inl⁻¹ :: 'opinl ⇒ 'op and
IsInl :: 'opinl ⇒ 'dyn list ⇒ bool
assumes
Inl op xs = Some opinl ⟹ Inl⁻¹ opinl = op and
Inl op xs = Some opinl ⟹ IsInl opinl xs and
|xs| = Arity (Inl⁻¹ opinl) ⟹ InlOp opinl xs = Op (Inl⁻¹ opinl) xs
Semantics Inca’s dynamic representation, its loading relation load\(\text{Inca}\), and its set of final states (identified by the predicate final\(\text{Inca}\)) are all the same as their Dyn counterparts. We modify the transition relation by adding three new rules and modifying the existing rule \( \rightarrow_{\text{Dyn}-\text{Op}} \) to \( \rightarrow_{\text{Inca}-\text{Op}} \) (Figure 6).
instr ::= …              instructions from Dyn
        | OpInl 'opinl   inline operations on data

Figure 4. The syntax of Inca.
When executing an operation (Op), Inl is used to check if an inlined operation exists for the supplied arguments. If no such inlined operation exists (Rule \( \rightarrow_{\text{Inca}-\text{Op}} \)), then the operation is evaluated with Op, and execution continues as in Dyn. If such an inlined operation exists (Rule \( \rightarrow_{\text{Inca}-\text{Op}-\text{Inl}} \)), then two things take place. First, we evaluate the operation with InlOp. Second, we cache the search for an optimized inlined operation by replacing the Op instruction with an optimized OpInl instruction in the function definition (rewrite). As a result, any subsequent execution then “short-circuits” the check for an inlined operation.
When executing an inlined operation (OpInl), the efficient predicate IsInl is used to test whether it is still appropriate for the supplied arguments. If it is, then execution continues as expected using an optimized function (Rule \( \rightarrow_{\text{Inca}-\text{Op}-\text{Inl}'} \)). Otherwise, we undo the optimization by replacing the optimized instruction with the generic, unoptimized instruction in the function definition (Rule \( \rightarrow_{\text{Inca}-\text{Op}-\text{Inl}-\text{Miss}} \)). Whether we use Op or InlOp is irrelevant, since they are semantically equivalent and because it is unknown which one would be more efficient.
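The following C sketch mirrors these rules operationally: the instruction in the mutable function image is rewritten to OpInl on a hit and back to Op on a miss. All helper names are illustrative stand-ins for Inl, IsInl, Inl⁻¹, Op, and InlOp:

```c
typedef struct { int tag; long i; } Value;

int   find_inl(int op, const Value *args, int *opinl);  /* Inl op xs       */
int   is_inl(int opinl, const Value *args);             /* IsInl opinl xs  */
int   inl_inv(int opinl);                               /* Inl^-1 opinl    */
Value eval_op(int op, const Value *args);               /* Op op xs        */
Value eval_inl(int opinl, const Value *args);           /* InlOp opinl xs  */

typedef struct { int opcode; int arg; } CInstr;
enum { OPC_OP, OPC_OPINL };

/* Executes one Op/OpInl instruction, rewriting the function image in place. */
Value exec_op(CInstr *instr, const Value *args) {
    if (instr->opcode == OPC_OP) {
        int opinl;
        if (find_inl(instr->arg, args, &opinl)) {  /* Inl op xs = Some opinl */
            instr->opcode = OPC_OPINL;             /* cache the lookup */
            instr->arg = opinl;
            return eval_inl(opinl, args);
        }
        return eval_op(instr->arg, args);          /* rule ->Inca-Op */
    }
    if (is_inl(instr->arg, args))                  /* guard: types still match */
        return eval_inl(instr->arg, args);
    instr->opcode = OPC_OP;                        /* miss: deoptimize */
    instr->arg = inl_inv(instr->arg);
    return eval_op(instr->arg, args);
}
```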
5.2 Bisimulation Dyn-Inca
A Dyn and an Inca program that simulate each other differ only in the codomain of their functions environments. The dynamic memories, the call stacks, and the domains of the function environments are identical. Given two corresponding function definitions from Dyn and Inca, they may only differ by the potential use of inline operations.
The simulation relation $\sim_{D\text{-}I}$ thus inspects all corresponding instructions and checks whether an inlined operation maps to its corresponding regular operation. We use the inverse function $\text{Inl}^{-1}$ to map an inlined operation to its corresponding regular operation.

Following the path of CompCert, we proved the following lemmas to show that $\sim_{D\text{-}I}$ is a bisimulation, i.e., that two similar programs have similar behavior.

**Lemma 1 (Forward simulation).** If $s_1 \rightarrow_{\text{Dyn}} s'_1$ and $s_1 \sim_{D\text{-}I} s_2$, then there exists a state $s'_2$ such that $s_2 \rightarrow_{\text{Inca}} s'_2$ and $s'_1 \sim_{D\text{-}I} s'_2$.

**Lemma 2 (Forward matching final states).** If $s_1 \sim_{D\text{-}I} s_2$ and $\text{final}_{\text{Dyn}}\ s_1$, then $\text{final}_{\text{Inca}}\ s_2$.

**Lemma 3 (Backward simulation).** If $s_2 \rightarrow_{\text{Inca}} s'_2$ and $s_1 \sim_{D\text{-}I} s_2$, then there exists a state $s'_1$ such that $s_1 \rightarrow_{\text{Dyn}} s'_1$ and $s'_1 \sim_{D\text{-}I} s'_2$.

**Lemma 4 (Backward matching final states).** If $s_1 \sim_{D\text{-}I} s_2$ and $\text{final}_{\text{Inca}}\ s_2$, then $\text{final}_{\text{Dyn}}\ s_1$.
5.3 **Compilation from Dyn to Inca**
Dyn’s function definitions can be compiled by mapping all instructions to their equivalent in Inca. The compilation function of full programs can then simply compile all function definitions of the program.
We proved that compiled programs simulate their uncompiled counterparts.
**Lemma 5 (Compiled matching states).** If $\text{compile } p_1 = \text{Some } p_2$ and $\text{load}_{\text{Dyn}}\ p_1\ s_1$, then there exists a state $s_2$ such that $\text{load}_{\text{Inca}}\ p_2\ s_2$ and $s_1 \sim_{D\text{-}I} s_2$.
Building on the VeriComp framework for verified compilation [10], Lemmas 1 to 5 imply that the successful execution of a compiled Inca program exhibits identical behavior to the execution of the original Dyn program. Formally:
**Theorem 1 (Soundness of compilation).** Let the infix relation $\Downarrow$ pair a program to its run-time behavior and the infix relation $\approx$ be an equivalence relation between behaviors. If $\text{compile } p_1 = \text{Some } p_2$, and $p_2 \Downarrow b_2$, and $b_2$ does not go wrong, then there exists a behavior $b_1$ such that $p_1 \Downarrow b_1$ and $b_1 \approx b_2$.
Furthermore, compilation is complete for all loadable Dyn programs.
**Theorem 2 (Completeness of compilation).** If $\text{load}_{\text{Dyn}}\ p_1\ s_1$, then there exists a program $p_2$ and a state $s_2$ such that $\text{compile } p_1 = \text{Some } p_2$, and $\text{load}_{\text{Inca}}\ p_2\ s_2$, and $s_1 \sim_{D\text{-}I} s_2$.
6 **Ubx: Operations on Unboxed Data**

The Ubx language adds the concept of manipulating unboxed data representations to Inca.
6.1 **Syntax and Semantics**
The syntax of Ubx is a proper superset of Inca's syntax (Figure 7).
**Values** The values manipulated through the operand stack may either be boxed or unboxed. In principle, any fixed number of unboxed types may be supported; but, since Isabelle/HOL does not support abstraction over an arbitrary number of types, we abstract over two unboxed types ('ubx1 and 'ubx2) and have to argue without loss of generality.
Because the operand stack may only contain values of a uniform type, we define the tagged union ubx with three constructors: UbxDyn represents a boxed value, while UbxUbx1 and UbxUbx2 represent unboxed values. We extract a value stored in a ubx by casting it to the desired type. Casting (i) checks that the ubx value is tagged with the expected constructor for the given type, and (ii) returns the unboxed value.
**fun castDyn :: ubx ⇒ 'dyn option where**
castDyn (UbxDyn d) = Some d
| castDyn _ = None
The functions cast_ubx1 and cast_ubx2 are analogous but return values of type 'ubx1 and 'ubx2, respectively. Our formalization proves that casts are always successful, and an implementation of this optimization would be free to omit them.
The boxing and unboxing operations are abstracted over in the locale unboxedval. Let d be a dynamic value and u an unboxed value of type 'ubx1. Unbox1 d = Some u successfully extracts the native-machine value u, and Box1 u boxes it back to d. Unboxing may fail by evaluating to None when the provided dynamic value is not of the expected type. The same holds for 'ubx2, and extends to any other supported unboxed type. Formally:
**locale unboxedval = dynval +**
fixes
Box1 :: 'ubx1 ⇒ 'dyn and
Unbox1 :: 'dyn ⇒ 'ubx1 option and
Box2 :: 'ubx2 ⇒ 'dyn and
Unbox2 :: 'dyn ⇒ 'ubx2 option
assumes
Unbox1 d = Some u1 ⟹ Box1 u1 = d and
Unbox2 d = Some u2 ⟹ Box2 u2 = d
In order to uniformly manipulate ubx when boxing and unboxing, we define the type type, which has one constructor per unboxed type: Ubx1 is associated with 'ubx1 and Ubx2 with 'ubx2. The generic cast_box function (i) casts unboxed values, and (ii) immediately boxes them to a dynamic value.
**fun cast_box :: type ⇒ ubx ⇒ 'dyn option where**
cast_box Ubx1 = map_option Box1 ∘ cast_ubx1
| cast_box Ubx2 = map_option Box2 ∘ cast_ubx2
Conversely, the generic function unbox unboxes ’dyn values to some specified type.
**fun unbox :: type ⇒ 'dyn ⇒ ubx option where**
unbox Ubx1 = map_option UbxUbx1 ∘ Unbox1
| unbox Ubx2 = map_option UbxUbx2 ∘ Unbox2
Finally, a ubx value may be boxed and normalized to ’dyn.
**fun norm :: ubx ⇒ 'dyn where**
norm (UbxDyn d) = d
| norm (UbxUbx1 u1) = Box1 u1
| norm (UbxUbx2 u2) = Box2 u2
**Instructions** One new instruction per unboxed type pushes an unboxed constant onto the operand stack. Two generic instructions allow loading unboxed values from and storing them in memory. Finally, one instruction manipulates unboxed, native-machine data. The number of new instructions to support n unboxed types is thus n + 3.
**Operations on unboxed data** The built-in operations on unboxed data are members of the type 'opubx of the locale nary_operations_ubx. Let opubx be an operation on unboxed data and xs be a list of values of type ubx; UbxOp opubx xs uses efficient native-machine instructions to operate directly on the given unboxed arguments. In contrast to Op and InlOp, which, when given the correct number of arguments, always succeed in calculating a result, UbxOp may fail by returning None when evaluated on unboxed values of the wrong type.
An inlined operation ('opinl) is mapped to an operation on unboxed data ('opubx) with the Ubx function. But instead of relying on the dynamic type information extracted from the actual 'dyn arguments at run time, it relies on statically known type information. Each argument either has an unboxed type (Some \( \tau \) for some \( \tau :: \text{type} \)) or a boxed, dynamic type (None). The mapping is inverted with Ubx⁻¹.
Finally, TypeOf opubx evaluates to the type of the operation opubx, encoded as a pair: the first element is the domain and the second element is the codomain. The type of an operation must be compatible with Arity, Ubx, and UbxOp. Formally:
**locale nary_operations_ubx = nary_operations_inl + unboxedval +**
fixes
UbxOp :: 'opubx ⇒ ubx list ⇒ ubx option and
Ubx :: 'opinl ⇒ type option list ⇒ 'opubx option and
Ubx⁻¹ :: 'opubx ⇒ 'opinl and
TypeOf :: 'opubx ⇒ type option list × type option
assumes
Ubx opinl ts = Some opubx ⟹ Ubx⁻¹ opubx = opinl and …
Semantics We extend Inca’s transition relation to also support ubx (Figure 8).
All rules to push constants onto the stack now use the appropriate constructor from ubx.
The rules for loading values from the dynamic memory distinguish three cases: (i) a dynamic value is loaded and pushed directly on the operand stack; (ii) a dynamic value is loaded, successfully unboxed, and pushed on the operand stack; and (iii) a dynamic value is loaded, the unboxing fails, and the function is generalized to cancel the Ubx optimization. All three rules start by (a) popping a value from the operand stack, and (b) casting it to a dynamic value, which is then used to index the dynamic memory.
In rule \( \rightarrow_{\text{Ubx}-\text{Load}-\text{Ubx}-\text{Miss}} \), the unboxing fails because the dynamic value loaded from memory has a different type than what was expected when optimizing the program. Subsequent instructions expecting data in their native-machine representation cannot execute sensibly and must be generalized to cope with dynamic values. This generalization process applies to both the function definition and the call stack.
First, the function generalize rewrites the function definition by mapping all Ubx instructions to their Inca counterparts, e.g., PushUbx1 to Push. For OpUbx instructions, \( \text{Ubx}^{-1} \) identifies the corresponding 'opinl operation.
Second, we need to update the operand stack to ensure that all elements use the boxed representation: operands left in unboxed data representation would not be accepted by the newly generalized instructions. To address this, we use the type information stored in the tagged union to box the object and replace the element with another tagged union (UbxDyn) representing this newly boxed object. The operand stack of the current stack frame must be boxed, and so must the operand stacks of all other active stack frames of the same function. Because each stack frame only stores the identifier of the function, and each execution step retrieves the instruction from the function definition, all active function invocations will start to use the generalized instructions. The function box_stack does this by recursively traversing the call stack and boxing the operand stacks of all stack frames for function \( f \); all other stack frames are left untouched.
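A small C sketch of this step, with an illustrative frame layout and norm_value standing in for norm (the formalization's box_stack is recursive; an iterative traversal is equivalent here):

```c
/* Box the operand stacks of every active frame of function f after its
 * code image has been generalized; frames of other functions stay as-is. */
typedef struct { int tag; long payload; } UValue;   /* illustrative ubx */
UValue norm_value(UValue v);                        /* norm: box via the tag */

typedef struct { int fun; int pc; int sp; UValue stack[256]; } UFrame;

void box_stack(UFrame *frames, int depth, int f) {
    for (int k = 0; k < depth; k++) {
        if (frames[k].fun != f)
            continue;                               /* other frames untouched */
        for (int j = 0; j < frames[k].sp; j++)
            frames[k].stack[j] = norm_value(frames[k].stack[j]);
    }
}
```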
The rules for storing values in memory all cast the operand on the top of the stack to the expected type and box it before storing it in memory. No rules are needed to handle the case that an unboxed type does not match its expected type. The bisimulation relation proves that such a situation can never occur.
The rules for evaluating regular and inlined operations require minimal adaptation: they must first cast their operands to dynamic values before evaluation. Again, no rule is required to handle an invalid cast, as our proof shows that such situations can never occur. The new rule \( \rightarrow_{\text{Ubx}-\text{Op}-\text{Ubx}} \) does not need to perform any cast as it operates directly on unboxed data.
Similarly, the rules for conditional jumps and function calls require minimal adaptation; they now cast their operands to dynamic values, e.g., a dynamic Boolean value for conditional jumps.
The rules \( \rightarrow_{\text{Ubx}-\text{Load}-\text{Ubx}-\text{Miss}} \) and \( \rightarrow_{\text{Ubx}-\text{Fun}-\text{End}} \) do not need to change because they are polymorphic, i.e., they perform the same operation irrespective of the operand types they manipulate.
6.2 Bisimulation Inca-Ubx
The validity of a sequence of Unx instructions can be statically verified by an abstract interpretation that calculates a form of strongest postcondition, i.e., the arity and types of values on the operand stack following the execution of the sequence. This means that, if a function is given the right number of boxed arguments, then it will successfully execute and return values of the computed types. The strongest post-condition of an instruction takes a stack of types as input and calculates the stack of types resulting from executing that instruction.\(^3\)
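As a rough illustration, the linear analysis can be pictured as the following C routine over a hypothetical instruction encoding; T_DYN plays the role of "boxed dynamic" (None), and, matching the footnote's restriction, jumps are rejected:

```c
typedef enum { T_DYN, T_UBX1, T_UBX2 } SType;
typedef enum { A_PUSH, A_OP, A_CJUMP } AOpcode;
typedef struct { AOpcode op; int arity; } AInstr;

/* Symbolically executes a straight-line sequence over a stack of static
 * types; returns 0 if the sequence is invalid (underflow or a jump), and
 * 1 otherwise, leaving the post-state in tstack/tsp. Illustrative only. */
int strongest_post(const AInstr *code, int n, SType *tstack, int *tsp) {
    for (int pc = 0; pc < n; pc++) {
        switch (code[pc].op) {
        case A_PUSH:                        /* pushes a boxed constant */
            tstack[(*tsp)++] = T_DYN;
            break;
        case A_OP:                          /* consumes 'arity' operands */
            if (*tsp < code[pc].arity)
                return 0;                   /* stack underflow: invalid */
            *tsp -= code[pc].arity;
            tstack[(*tsp)++] = T_DYN;       /* result is boxed dynamic */
            break;
        case A_CJUMP:                       /* jumps exceed this analysis */
            return 0;
        }
    }
    return 1;
}
```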
Two corresponding program states from Inca and Ubx simulate each other (expressed by the binary relation \( \sim_{I\text{-}U} \)) if they have the same dynamic memory, if both their function environments and call stacks are similar, and if an abstract interpretation of all function definitions succeeds.
Two function environments are similar if they have the same domain and if, given a function definition in Ubx and in Inca, the Ubx function definition generalizes to the Inca function definition.
Call stacks are similar if (i) they have the same height; (ii) two corresponding stack frames refer to the same function, have the same program counters, and Ubx's operand stack may be boxed to Inca's; (iii) the abstract interpretation of the function up to the current program counter matches the operand types in the stack; (iv) the current instruction of all caller stack frames must be a call instruction to the callees' stack frames.
\(^3\)Note that the simple analysis used in this formalization cannot handle uses of the CJump instruction and, thus, can only interpret linear functions. Using a more complete abstract interpretation to enable more interesting functions is left for future work.
Figure 8. The subset of the \( \rightarrow_{\text{UBX}} \) transition relation that differs from \( \rightarrow_{\text{INCA}} \).
We proved that \( \sim_{I-U} \) is a bisimulation.
**Lemma 6** (Forward simulation). If \( s_1 \xrightarrow{\text{INCA}} s'_1 \) and \( s_1 \sim_{I-U} s_2 \), then there exists a state \( s'_2 \) such that \( s_2 \xrightarrow{\text{UBX}} s'_2 \) and \( s'_1 \sim_{I-U} s'_2 \).
**Lemma 7** (Forward matching final states). If \( s_1 \sim_{I-U} s_2 \) and \( \text{final}_{\text{INCA}} s_1 \), then \( \text{final}_{\text{UBX}} s_2 \).
**Lemma 8** (Backward simulation). If \( s_2 \xrightarrow{\text{UBX}} s'_2 \) and \( s_1 \sim_{I-U} s_2 \), then there exists a state \( s'_1 \) such that \( s_1 \xrightarrow{\text{INCA}} s'_1 \) and \( s'_1 \sim_{I-U} s'_2 \).
**Lemma 9** (Backward matching final states). If \( s_1 \sim_{I-U} s_2 \) and \( \text{final}_{\text{UBX}} s_2 \), then \( \text{final}_{\text{INCA}} s_1 \).
### 6.3 Compilation from INCA to UBX
The process of compiling has three steps.
1. Lift the program from INCA to UBX.
2. Optimize the program by using as many UBX instructions as possible.
3. Ensure that the result is valid with respect to the abstract interpretation.
The optimization pass is based on an oracle—an abstract function of type \( \text{fun} \Rightarrow \text{nat} \Rightarrow \text{type option} \)—which, given the position of a Load instruction in a function, evaluates to the expected unboxed type of the loaded value. A variant of the abstract interpretation used for the simulation relation optimizes instructions in a linear pass based on the following type information.
1. All function parameters have boxed dynamic types.
2. The type produced by Push is provided by inspecting the constant.
3. The type produced by Load is provided by the oracle, or assumed to be a boxed dynamic type if the oracle evaluates to None.
4. The type consumed by Store is obtained from the abstract interpretation.
5. The types consumed and produced by Op, OpInl, and Call are always boxed dynamic, and their number depends on the arity of the operation or function.
6. The types consumed and produced by OpUbx are obtained from TypeOf.
The information provided by the oracle could either be given directly by the programmer or be the result of automatic run-time instrumentation. In the second case, the virtual machine would first execute code in INCA mode and gather statistics on encountered types, a stage usually referred to as **profiling**. When some heuristic indicates that a point of "dynamic locality of type usage" is reached, the program would then be compiled to UBX, and the control flow diverted to UBX's execution engine.
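For concreteness, a profiling-based oracle could be as simple as the following C sketch; the counter layout, threshold policy, and all names are assumptions rather than part of the formalization:

```c
/* Count the type observed at a Load site while running in INCA mode;
 * once the site is hot and monomorphic, predict that type. */
typedef enum { P_DYN, P_UBX1, P_UBX2 } PType;
#define N_PTYPES 3

typedef struct { unsigned counts[N_PTYPES]; } LoadProfile;

/* Returns the predicted unboxed type, or P_DYN (i.e., "None") while the
 * site is still cold or has observed more than one type. */
PType oracle(const LoadProfile *p, unsigned hot_threshold) {
    unsigned total = 0, best = 0;
    PType best_t = P_DYN;
    for (int t = 0; t < N_PTYPES; t++) {
        total += p->counts[t];
        if (p->counts[t] > best) { best = p->counts[t]; best_t = (PType)t; }
    }
    if (total < hot_threshold || best != total)   /* cold or polymorphic */
        return P_DYN;
    return best_t;
}
```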
The accuracy of the oracle's predictions may increase or decrease run-time performance, but may never alter the semantics of the executed program. If a value loaded from memory does not match the oracle's prediction, rule \( \rightarrow_{\text{Ubx}-\text{Load}-\text{Ubx}-\text{Miss}} \) generalizes the function back to cope with boxed values before resuming the execution.
We proved that compiled, optimized programs simulate their uncompiled counterparts.
**Lemma 10** (Compiled matching states). If \( \text{compile} \ p_1 = \text{Some} \ p_2 \) and \( \text{load}_{\text{INCA}} p_1 s_1 \), then there exists a state \( s_2 \) such that \( \text{load}_{\text{UBX}} p_2 s_2 \) and \( s_1 \sim_{I-U} s_2 \).
Lemmas 6 to 10 imply that the successful execution of a compiled UBX program exhibits identical behavior to the execution of the original INCA program. Formally:
**Theorem 3** (Soundness of compilation). Let the infix relation \( \Downarrow \) pair a program to its run-time behavior and the infix relation \( \approx \) be an equivalence relation between behaviors. If \( \text{compile} \ p_1 = \text{Some} \ p_2 \) and \( p_2 \Downarrow b_2 \), and \( b_2 \) does not go wrong, then there exists a behavior \( b_1 \) such that \( p_1 \Downarrow b_1 \) and \( b_1 \approx b_2 \).
Compilation from INCA to UBX is incomplete in the sense of Theorem 2 because (i) the abstract interpretation and optimization algorithm are too simplistic to handle jumps, and (ii) the hypothesis is too weak.
The former issue can be addressed by using a more sophisticated data-flow analysis instead of our one-pass linear analysis. The latter issue is that the process of loading does not guarantee a successful execution. To address this issue and prove completeness, the hypothesis needs to be strengthened, for example, by giving a typing judgment that guarantees a valid execution of the initial program.
### 7 Practical Perspective
**A Brief History** The original idea on how to optimize dynamically-typed programming languages with advanced type specialization in a purely interpretative setting is about ten years old by now. The senior author implemented a full-fledged prototype in CPython 3.3, reporting speedups by a factor of up to 5. At the same time, the prototype retained traditional interpreter benefits: simplicity and ease-of-implementation. Papers were submitted to ACM SIGPLAN PLDI 2013, SIGPLAN PLDI 2014, and ACM Transactions on Architecture and Code Optimization 2014, with usually positive feedback but no PC member championing the paper.
Due to the important speedups enabled by the prototype, a series of talks was given in 2012 at TU Wien, Universität Linz, IST Austria, and Mozilla.
**Benefits of Formalization** The full-fledged CPython prototype successfully passed all relevant unit tests and ran major Python applications, benchmarks, and frameworks. Although the tests covered several tens of thousands of lines of Python programs and C library code, some “Heisenbugs” occurred every now and then. Through the presented formalization, we were able to discern a new requirement that addressed the bug.
The new requirement—obvious in hindsight, but non-obvious before—is due to the deoptimization of Ubx-optimized code. When deoptimizing a certain function \( f \), a prior, yet incomplete call to \( f \) may still be active on the stack. Assume the prior stack frame of \( f \) was type-specialized to a specific type \( T \) and that the operand stack of the interpreter stack frame contained unboxed data of type \( T \). If we deoptimize the newer stack frame of the present invocation of \( f \), then all unboxed data will be boxed again and stored in memory. Now, assume that during a following call of \( f \), it is optimized again, but to a different type \( T' \). The program proceeds until it eventually resumes the prior stack frame belonging to \( f \). The interpreter operand stack may now hold unboxed data of type \( T \), but the optimized instructions will assume the data to be of type \( T' \). Potential errors following from this situation are: (i) deoptimization may fail when the types \( T \) and \( T' \) differ; (ii) execution of native-machine operations may fail when the data representation differs; (iii) (un-)boxing of data may fail when we try to access native-machine data incorrectly.
The underlying problem is that only one optimized interpreter code image is stored for each interpreted function. A Ubx function is, therefore, not able to detect potential changes to its code. A variety of techniques address this issue, e.g., deoptimizing all active invocations of the optimized code, or keeping a version counter for the code image and checking that the versions are identical.
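A minimal sketch of the version-counter technique, assuming hypothetical `Function` and `Frame` structures; the paper's formalization does not prescribe these names.

```c
#include <stdint.h>

typedef struct {
    uint64_t code_version;    /* bumped on every (de)optimization of f */
    /* ... optimized instruction image, etc. ... */
} Function;

typedef struct {
    Function *fn;
    uint64_t  seen_version;   /* code version when this frame was created */
    /* ... operand stack holding possibly unboxed data ... */
} Frame;

int resume_frame_is_safe(const Frame *fr) {
    /* Unboxed data of type T on the operand stack is only valid if the
       code image was not re-specialized (possibly to T') in between. */
    return fr->seen_version == fr->fn->code_version;
}
```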
**Evaluation Results** Table 1 presents speedup factors relative to a baseline interpreter: CPython 3.3.2 using switch-dispatch for instruction dispatch. The PyPy3 measurements correspond to the then (2014) most recent version: 2.1 beta 1. At present, PyPy3 is out of beta and offers a better performance profile and better compatibility with C extensions.
We evaluated our full-fledged implementation using the following benchmarks. First, we used the following microbenchmarks from the computer language benchmarks game [15]: binarytrees, mandelbrot, nbody, and spectralnorm. Second, we used a set of publicly available solutions to the first 50 Project Euler problems [1], selecting programs with longer-than-average run time (solutions to problems no. 27, 31, 39, and 50).
The benchmarks were run on an Intel Nehalem i7-920 running at a frequency of 2.67 GHz, on Linux kernel version 3.11.0-15 and gcc version 4.6.4. To minimize perturbations by third party systems, we took the following precautions. First, we disabled Intel’s TurboBoost [25] feature to avoid frequency scaling based on unknown heuristics. Second, we used nice -n -20 to minimize operating system scheduler effects. Third, we used 30 repetitions for each pairing of a benchmark with an interpreter to get stable results; we report the geometric mean of these repetitions, to account for outliers.
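For reference, a small snippet showing how such repetitions can be aggregated with a geometric mean (illustrative only; the sample values are made up):

```c
#include <math.h>
#include <stdio.h>

/* Geometric mean, computed in log space to avoid overflow. */
double geometric_mean(const double *xs, int n) {
    double logsum = 0.0;
    for (int i = 0; i < n; i++) logsum += log(xs[i]);
    return exp(logsum / n);
}

int main(void) {
    double runs[] = { 1.71, 2.10, 3.70 };   /* hypothetical speedups */
    printf("%.4f\n", geometric_mean(runs, 3));
    return 0;
}
```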
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>PyPy3</th>
<th>Ubx-Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td>binarytrees</td>
<td>1.8225X</td>
<td>1.7081X</td>
</tr>
<tr>
<td>mandelbrot</td>
<td>0.9403X</td>
<td>2.0986X</td>
</tr>
<tr>
<td>nbody</td>
<td>1.5112X</td>
<td>3.7010X</td>
</tr>
<tr>
<td>spectralnorm</td>
<td>2.7900X</td>
<td>4.4012X</td>
</tr>
<tr>
<td>E27</td>
<td>3.8519X</td>
<td>2.3084X</td>
</tr>
<tr>
<td>E31</td>
<td>0.1460X</td>
<td>1.1393X</td>
</tr>
<tr>
<td>E39</td>
<td>1.9461X</td>
<td>4.9297X</td>
</tr>
<tr>
<td>E50</td>
<td>3.6531X</td>
<td>3.8018X</td>
</tr>
<tr>
<td>Geometric Mean</td>
<td>1.6984X</td>
<td>2.5367X</td>
</tr>
</tbody>
</table>
**Table 1.** Speedups of PyPy3 and Ubx-Prototype over the CPython baseline.
### 8 Related Work
To the best of our knowledge, there exists no prior work directly related to the formalization and verification of the speculative optimizations presented here. We therefore group the related work into three categories: (i) formalization and verification of translators, (ii) formalization and verification of dynamic languages, and (iii) just-in-time compiler optimizations.
### 8.1 Formalization and Verification of Translators
We combine the related work on compilers, just-in-time compilers, and interpreters and subsume all of them under the label “translators.” From a historical perspective, the correctness of translators has been an active research area since at least the 1980s. The topic of compiler correctness has, for instance, been examined in the European FP2 research program ProCoS [22]. The findings of ProCoS subsequently led to a larger German research project called Verifix, which examined several aspects of compiler correctness [16, 17]. In the 2000s, a group of researchers in France pioneered the field by mechanizing correctness of an industrial-strength C compiler [5, 30, 38, 42–44]. In the 2010s, a mechanized formalization and verification of ML followed [28].
In 2006, Klein and Nipkow formalized Jinja, a unified model of a Java-like source language, virtual machine, and compiler [26, 27]. Lochbihler later added support for interleaved execution of threads with JinjaThreads [31–35]. In 2018, Watt mechanized the WebAssembly specification [46, 47].
In a similar vein, the verification of compile-time optimizations has received considerable attention from the research community. VellVM, for example, focused on verifying optimizations on the LLVM bytecode intermediate representation [52]. Tatlock and Lerner simplify the verification of optimizations in verified compilers by using SMT solvers to aid with the construction of verified translation validators [41]. Prior research also focused on the formalization and verification of intermediate representations, such as Java bytecode, without optimizations [29, 40].
In 2010, Myreen presented his work on the formalization and verification of just-in-time compilers [36], documenting some of the difficulties posed by self-modifying code. This is the most directly related prior work, but it addresses a different direction, namely, the formalization of just-in-time compilers. Our work, however, sidesteps the intricate difficulties of JIT compilers by focusing on optimizing interpreters instead. In 2017, Flückiger et al. investigated the correctness of speculative optimizations with dynamic deoptimization [14]. Since our virtual machine interpreters can be thought of as intermediate representations, the InCa language confirms the finding by Flückiger et al. that reasoning about complex system interactions becomes much easier by embedding the proper information in the intermediate representation. However, Ubx goes further than Flückiger et al. by covering different data representations.
### 8.2 Formalization and Verification of Dynamic Languages
The formalization of dynamic languages in general, and JavaScript in particular, has been the subject of substantial prior work. In 2010, $\lambda_{JS}$ presented the first executable, formal semantics of JavaScript [21]. By rewriting JavaScript surface syntax into equivalent Scheme code, JavaScript programs could be executed, with correctness and security guarantees depending on the underlying Scheme system. In 2013, $\lambda_\pi$ applied a similar technique to provide a formal semantics for Python [37]. A comprehensive formalization and verification effort of JavaScript is the Coq-based project JSCert [6]. JSCert generates a verified JavaScript interpreter from its formalization.
While a formal semantics is an indispensable prerequisite for a correct and verified virtual machine, it addresses the desirable performance aspect insufficiently. To attain performance, a formalization of speculative optimizations is required, which is the key contribution of our paper.
### 8.3 Just-in-Time Compilers & Interpreters
Aycock gives a good overview of the history of just-in-time compilers up to the early 2000s [2]. Particularly relevant prior work is the original work by Deutsch and Schiffman, which introduced the seminal idea of inline caching [12]. Originally, their work on Smalltalk 80 systems considered so-called monomorphic inline caches, i.e., inline caches that hold at most one address. Hölzle, Chambers and Ungar subsequently extended these with so-called polymorphic inline caches, i.e., a combination of an inline cache and a stub to cache multiple target addresses, which is particularly relevant in highly polymorphic call sites [23, 24].
In 1996, Romer et al. studied the performance of interpreters and found no specific evidence to identify hints [39]. In 2003, Ertl and Gregg investigated the performance of interpreters again and found evidence of the importance of branch predictors [13]. In 2009, Brunthaler analyzed the varying performance potential of interpreter optimizations and found that the interpreter abstraction level is the primary performance determinant for selecting interpreter optimizations [7]. In 2010, Brunthaler investigated the use of inline caching in a purely interpretative fashion, in contrast to its use in just-in-time compilers, and found speedups by a factor of up to 2 [8, 9]. In 2012, Würthinger generalized Brunthaler’s bytecode interpreter optimizations to abstract-syntax tree interpreters [50], which subsequently became the cornerstone for the development of the Truffle/Graal virtual machine implementation efforts [48, 49]. In 2014, Wang et al. demonstrated the potential of combining advanced optimizations in the R programming language and reported speedups of up to 3.5 [45].
All prior work in this area reports important speedups, either through dynamic code generation in a classic just-in-time compiler setting, or by way of optimizing interpreters. The exclusive focus of prior work is on improving performance, or sometimes also reducing memory footprint. The aspect of formalization and verification, in particular to establish correctness, is notably absent.
### 9 Conclusion
We presented a formalization of virtual machine interpreters for dynamically typed programming languages. Our formalization defines an interpreter supporting the most representative features found in many virtual machine interpreters for mainstream languages. We then methodically extend the virtual machine interpreter’s instruction set and semantics to accommodate increasingly specialized and optimized instruction derivatives. These incrementally specialized derivatives eliminate much of the overhead frequently found in high abstraction-level virtual machines, such as those used by Python or JavaScript.
The optimized instruction derivatives, in particular, first eliminate the overhead of dynamic typing by inline caching a prior recorded type at its place. This recorded type information is subsequently used to expand the local knowledge of type usage in a specific region of the program, e.g., a loop, or a basic block. Once a suitable region of known types is determined, we can rewrite the whole sequence to eliminate the overhead of using boxed objects by using native-machine data representation instead.
Our formalization enables the proof of both soundness and completeness for speculative optimizations. Given a formal semantics of a dynamic language and a suitable intermediate representation, our formalization provides a systematic way to (i) integrate speculative optimizations and (ii) establish the correctness of the resulting system. We believe that our formalization provides a foundation for the verification of industrial-strength implementations. These implementations will benefit from our formalization’s ability to pinpoint subtle errors and non-obvious requirements.
### Acknowledgments
The authors thank Jasmin Blanchette and Johannes Kinder for invaluable feedback on earlier versions of this paper. This publication is part of the project CONCORDIA, a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 830927.
Top-Down Parsing
Handout written by Maggie Johnson and revised by Julie Zelenski.
**Possible Approaches**
The syntax analysis phase of a compiler verifies that the sequence of tokens extracted by the scanner represents a valid sentence in the grammar of the programming language. There are two major parsing approaches: top-down and bottom-up. In top-down parsing, you start with the start symbol and apply the productions until you arrive at the desired string. In bottom-up parsing, you start with the string and reduce it to the start symbol by applying the productions backwards. As an example, let’s trace through the two approaches on this simple grammar that recognizes strings consisting of any number of a’s followed by at least one (and possibly more) b’s:
\[
\begin{align*}
S & \rightarrow AB \\
A & \rightarrow aA | \epsilon \\
B & \rightarrow b | bB
\end{align*}
\]
Here is a top-down parse of aaab. We begin with the start symbol and at each step, expand one of the remaining nonterminals by replacing it with the right side of one of its productions. We repeat until only terminals remain. The top-down parse produces a leftmost derivation of the sentence.
\[
\begin{align*}
S & \Rightarrow AB && \\
& \Rightarrow aAB && (A \rightarrow aA) \\
& \Rightarrow aaAB && (A \rightarrow aA) \\
& \Rightarrow aaaAB && (A \rightarrow aA) \\
& \Rightarrow aaaB && (A \rightarrow \epsilon) \\
& \Rightarrow aaab && (B \rightarrow b)
\end{align*}
\]
A bottom-up parse works in reverse. We begin with the sentence of terminals and each step applies a production in reverse, replacing a substring that matches the right side with the nonterminal on the left. We continue until we have substituted our way back to the start symbol. If you read from the bottom to top, the bottom-up parse prints out a rightmost derivation of the sentence.
\[
\begin{align*}
& aaab && \\
& aaa\epsilon b && (\text{insert } \epsilon) \\
& aaaAb && (A \rightarrow \epsilon) \\
& aaAb && (A \rightarrow aA) \\
& aAb && (A \rightarrow aA) \\
& Ab && (A \rightarrow aA) \\
& AB && (B \rightarrow b) \\
& S && (S \rightarrow AB)
\end{align*}
\]
In creating a parser for a compiler, we normally have to place some restrictions on how we process the input. In the above example, it was easy for us to see which productions were appropriate because we could see the entire string aaab. In a compiler’s parser, however, we don’t have long-distance vision. We are usually limited to just one symbol of lookahead. The lookahead symbol is the next symbol coming up in the input. This restriction certainly makes the parsing more challenging. Using the same grammar from above, if the parser sees only a single b in the input and cannot look ahead any further than the symbol it is on, it can’t know whether to use the production $B \rightarrow b$ or $B \rightarrow bB$.
**Backtracking**
One solution to parsing would be to implement backtracking. Based on the information the parser currently has about the input, a decision is made to go with one particular production. If this choice leads to a dead end, the parser would have to backtrack to that decision point, moving backwards through the input, and start again making a different choice and so on until it either found the production that was the appropriate one or ran out of choices. For example, consider this simple grammar:
\[
\begin{align*}
S & \rightarrow \text{bab} \mid \text{bA} \\
A & \rightarrow \text{d} \mid \text{cA}
\end{align*}
\]
Let’s follow parsing the input bcd. In the trace below, the column on the left will be the expansion thus far, the middle is the remaining input, and the right is the action attempted at each step:
<table>
<thead>
<tr>
<th>S</th>
<th>bcd</th>
<th>Try S → bab</th>
</tr>
</thead>
<tbody>
<tr>
<td>bab</td>
<td>bcd</td>
<td>match b</td>
</tr>
<tr>
<td>ab</td>
<td>cd</td>
<td>dead-end, backtrack</td>
</tr>
<tr>
<td>S</td>
<td>bcd</td>
<td>Try S → bA</td>
</tr>
<tr>
<td>bA</td>
<td>bcd</td>
<td>match b</td>
</tr>
<tr>
<td>A</td>
<td>cd</td>
<td>Try A → d</td>
</tr>
<tr>
<td>d</td>
<td>cd</td>
<td>dead-end, backtrack</td>
</tr>
<tr>
<td>A</td>
<td>cd</td>
<td>Try A → cA</td>
</tr>
<tr>
<td>cA</td>
<td>cd</td>
<td>match c</td>
</tr>
<tr>
<td>A</td>
<td>d</td>
<td>Try A → d</td>
</tr>
<tr>
<td>d</td>
<td>d</td>
<td>match d</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Success!</td>
</tr>
</tbody>
</table>
As you can see, each time we hit a dead-end, we backup to the last decision point, unmake that decision and try another alternative. If all alternatives have been exhausted, we back up to the preceding decision point and so on. This continues until we either find a working parse or have exhaustively tried all combinations without success.
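The trace above can be turned into a tiny backtracking recognizer. The sketch below is a simplification: each routine tries its alternatives in order and reports failure with -1, which is what triggers the retry of the next alternative in the caller. (A fully general backtracking parser would also re-try earlier alternatives when a later failure occurs; this toy version commits like a PEG-style ordered choice, which happens to suffice for this grammar.)

```c
#include <stdio.h>
#include <string.h>

/* Recognizer for  S -> bab | bA,  A -> d | cA.
   Each function returns how many characters it consumed, or -1 on failure. */

static int parseA(const char *s);

static int parseS(const char *s) {
    if (strncmp(s, "bab", 3) == 0) return 3;     /* try S -> bab */
    if (s[0] == 'b') {                           /* backtrack, try S -> bA */
        int n = parseA(s + 1);
        if (n >= 0) return 1 + n;
    }
    return -1;                                   /* dead end */
}

static int parseA(const char *s) {
    if (s[0] == 'd') return 1;                   /* try A -> d */
    if (s[0] == 'c') {                           /* backtrack, try A -> cA */
        int n = parseA(s + 1);
        if (n >= 0) return 1 + n;
    }
    return -1;
}

int main(void) {
    const char *input = "bcd";
    int n = parseS(input);
    printf("%s\n", (n == (int)strlen(input)) ? "Success!" : "dead-end");
    return 0;
}
```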
A number of authors have described backtracking parsers; the appeal is that they can be used for a variety of grammars without requiring them to fit any specific form. For a small grammar such as above, a backtracking approach may be tractable, but most programming language grammars have dozens of nonterminals each with several options, and the resulting combinatorial explosion makes this approach slow and impractical. We will instead look at ways to parse via efficient methods that have restrictions about the form of the grammar, but usually those requirements are not so onerous that we cannot rearrange a programming language grammar to meet them.
**Top-Down Predictive Parsing**
First, we will focus on top-down parsing. We will look at two different ways to implement a non-backtracking top-down parser called a *predictive parser*. A predictive parser is characterized by its ability to choose the production to apply solely on the basis of the next input symbol and the current nonterminal being processed. To enable this, the grammar must take a particular form. We call such a grammar *LL(1)*. The first "L" means we scan the input from left to right; the second "L" means we create a leftmost derivation; and the 1 means one input symbol of lookahead. Informally, an LL(1) grammar has no left-recursive productions and has been left-factored. Note that these are necessary conditions for LL(1) but not sufficient, i.e., there exist grammars with no left-recursion or common prefixes that are not LL(1). Note also that there exist many grammars that cannot be modified to become LL(1). In such cases, another parsing technique must be employed, or special rules must be embedded into the predictive parser.
**Recursive Descent**
The first technique for implementing a predictive parser is called *recursive-descent*. A recursive-descent parser consists of several small functions, one for each nonterminal in the grammar. As we parse a sentence, we call the functions that correspond to the left side nonterminal of the productions we are applying. If these productions are recursive, we end up calling the functions recursively.
Let’s start by examining some productions from a grammar for a simple Pascal-like programming language. In this programming language, all functions are preceded by the reserved word *FUNC*:
```
program -> function_list
function_list -> function_list function | function
function -> FUNC identifier ( parameter_list ) statements
```
What might the C function that is responsible for parsing a function definition look like? It expects to first find the token *FUNC*, then it expects an identifier (the name of the function), followed by an opening parenthesis, and so on. As it pulls each token from the scanner, it must ensure that the token matches what is expected and, if not, halt with an error. For each nonterminal, this function calls the associated function to handle its part of the parsing. Check this out:
```c
void ParseFunction()
{
    if (lookahead != T_FUNC) {      // anything not FUNC here is wrong
        printf("syntax error \n");
        exit(0);
    } else
        lookahead = yylex();        // global 'lookahead' holds next token
    ParseIdentifier();
    if (lookahead != T_LPAREN) {
        printf("syntax error \n");
        exit(0);
    } else
        lookahead = yylex();
    ParseParameterList();
    if (lookahead != T_RPAREN) {
        printf("syntax error \n");
        exit(0);
    } else
        lookahead = yylex();
    ParseStatements();
}
```
To make things a little cleaner, let's introduce a utility function that can be used to verify that the next token is what is expected and will error and exit otherwise. We will need this again and again in writing the parsing routines.
```c
void MatchToken(int expected)
{
    if (lookahead != expected) {
        printf("syntax error, expected %d, got %d\n", expected, lookahead);
        exit(0);
    } else                          // if match, consume token and move on
        lookahead = yylex();
}
```
Now we can tidy up the ParseFunction routine and make it clearer what it does:
```c
void ParseFunction()
{
    MatchToken(T_FUNC);
    ParseIdentifier();
    MatchToken(T_LPAREN);
    ParseParameterList();
    MatchToken(T_RPAREN);
    ParseStatements();
}
```
The following diagram illustrates how the parse tree is built:

```
program
└── function_list
    └── function
        ├── FUNC
        ├── identifier         <-- parsed by the call to ParseIdentifier
        ├── ( parameter_list ) <-- parsed by the call to ParseParameterList
        └── statements         <-- parsed by the call to ParseStatements
```
Here is the production for an if-statement in this language:
```
if_statement -> IF expression THEN statement ENDIF |
IF expression THEN statement ELSE statement ENDIF
```
To prepare this grammar for recursive-descent, we must left-factor to share the common parts:
```
if_statement -> IF expression THEN statement close_if
close_if -> ENDIF | ELSE statement ENDIF
```
Now, let’s look at the recursive-descent functions to parse an if statement:
```c
void ParseIfStatement()
{
MatchToken(T_IF);
ParseExpression();
MatchToken(T_THEN);
ParseStatement();
ParseCloseIf();
}
void ParseCloseIf()
{
if (lookahead == T_ENDIF) // if we immediately find ENDIF
lookahead = yylex(); // predict close_if -> ENDIF
else {
MatchToken(T_ELSE); // otherwise we look for ELSE
ParseStatement(); // predict close_if -> ELSE stmt ENDIF
MatchToken(T_ENDIF);
}
}
```
When parsing the closing portion of the if, we have to decide which of the two right-hand side options to expand. In this case, it isn’t too difficult. We try to match the first token against ENDIF and, on a non-match, we try to match the ELSE clause; if that doesn’t match either, we report an error.
Navigating through two choices seemed simple enough; however, what happens when we have many alternatives on the right side?
```
statement -> assg_statement | return_statement | print_statement | null_statement
| if_statement | while_statement | block_of_statements
```
When implementing the `ParseStatement` function, how are we going to be able to determine which of the seven options to match for any given input? Remember, we are trying to do this without backtracking and with just one token of lookahead, so we have to be able to make an immediate decision with minimal information — this can be a challenge!
To understand how to recognize and solve this problem, we need a definition:
The **first set** of a sequence of symbols \( u \), written as \( \text{First}(u) \) is the set of terminals which start the sequences of symbols derivable from \( u \). A bit more formally, consider all strings derivable from \( u \). If \( u \Rightarrow^* v \), where \( v \) begins with some terminal, that terminal is in \( \text{First}(u) \). If \( u \Rightarrow^* \varepsilon \), then \( \varepsilon \) is in \( \text{First}(u) \).
Informally, the first set of a sequence is a list of all the possible terminals that could start a string derived from that sequence. We will work an example of calculating the first sets a bit later. For now, just keep in mind the intuitive meaning. Finding our lookahead token in one of the first sets of the possible expansions tells us that is the path to follow.
Given a production with a number of alternatives: \( A \rightarrow u_1 | u_2 | ... \), we can write a recursive-descent routine only if all the sets \( \text{First}(u_i) \) are disjoint. The general form of such a routine would be:
```c
void ParseA()
{
// case below not quite legal C, need to list symbols individually
switch (lookahead) {
case First(u_1): // predict production A -> u_1
/* code to recognize u_1 */
return;
case First(u_2): // predict production A -> u_2
/* code to recognize u_2 */
return;
....
default:
printf("syntax error \n");
exit(0);
}
}
```
If the first sets of the various productions for a nonterminal are not disjoint, a predictive parser doesn't know which choice to make. We would either need to re-write the grammar or use a different parsing technique for this nonterminal. For programming languages, it is usually possible to re-structure the productions or embed certain rules into the parser to resolve conflicts, but this constraint is one of the weaknesses of the top-down non-backtracking approach.
It is a bit trickier if the nonterminal we are trying to recognize is nullable. A nonterminal A is nullable if there is a derivation of A that results in $\epsilon$ (i.e., the nonterminal would completely disappear in the parsed string), i.e., $\epsilon \in \text{First}(A)$. In this case, A could be replaced by nothing and the next token would be the first token of the symbol following A in the sentence being parsed. Thus, if A is nullable, our predictive parser also needs to consider the possibility that the path to choose is the one corresponding to $A \Rightarrow^* \epsilon$. To deal with this we define the following:
The follow set of a nonterminal A is the set of terminal symbols that can appear immediately to the right of A in a valid sentence. A bit more formally: for every valid sentence $S \Rightarrow^* uAv$, where $v$ begins with some terminal, that terminal is in $\text{Follow}(A)$.

Informally, you can think about the follow set like this: A can appear in various places within a valid sentence. The follow set describes what terminals could follow the sentential form that was expanded from A. We will detail how to calculate the follow set a bit later. For now, realize follow sets are useful because they define the right context consistent with a given nonterminal and provide the lookahead that might signal that a nullable nonterminal should be expanded to $\epsilon$.
With these two definitions, we can now generalize how to handle $A \rightarrow u_1 | u_2 | ...$ in a recursive-descent parser. In all situations, we need a case to handle each member of $\text{First}(u_i)$. In addition, if there is a derivation from any $u_i$ that could yield $\epsilon$ (i.e., if it is nullable), then we also need to handle the members of $\text{Follow}(A)$.
```c
void ParseA()
{
switch (lookahead)
{
case First(u_1):
/* code to recognize u_1 */
return;
case First(u_2):
/* code to recognize u_2 */
return;
...
case Follow(A): // predict production A->epsilon if A is nullable
/* usually do nothing here */
default:
printf("syntax error \n");
exit(0);
}
}
```
What about left-recursive productions? Now we see why these are such a problem in a predictive parser. Consider this left-recursive production that matches a list of one or more functions.
```
function_list -> function_list function | function
function -> FUNC identifier ( parameter_list ) statements
```
```c
void ParseFunctionList()
{
ParseFunctionList();
ParseFunction();
}
```
Such a production will send a recursive-descent parser into an infinite loop! We need to remove the left-recursion in order to be able to write the parsing function for a `function_list`. We first rewrite the production to be right-recursive:
```
function_list -> function function_list | function
```

Then we must left-factor the common parts:

```
function_list -> function more_functions
more_functions -> function more_functions | ε
```
And now the parsing function looks like this:
```c
void ParseFunctionList()
{
ParseFunction();
ParseMoreFunctions(); // may be empty (i.e. expand to epsilon)
}
```
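For completeness, here is a sketch of the companion routine, which is not spelled out in the text above. It predicts `more_functions -> function more_functions` when the lookahead is in First(function), i.e., the token `T_FUNC`, and otherwise expands to epsilon:

```c
void ParseMoreFunctions()
{
    if (lookahead == T_FUNC) {    // First(function) = { FUNC }
        ParseFunction();
        ParseMoreFunctions();
    }
    // otherwise predict more_functions -> epsilon: nothing to consume
}
```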
**Computing first and follow**
These are the algorithms used to compute the first and follow sets:
**Calculating first sets.** To calculate First(\(u\)) where \(u\) has the form \(X_1X_2...X_n\), do the following:
a) If \(X_1\) is a terminal, add \(X_1\) to First(\(u\)) and you’re finished.
b) Otherwise \(X_1\) is a nonterminal, so add First(\(X_1\)) - \(\epsilon\) to First(\(u\)).
a. If \(X_1\) is a nullable nonterminal, i.e., \(X_1 \Rightarrow^* \epsilon\), add First(\(X_2\)) - \(\epsilon\) to First(\(u\)). Furthermore, if \(X_2\) can also go to \(\epsilon\), then add First(\(X_3\)) - \(\epsilon\), and so on, through \(X_n\), until the first non-nullable symbol is encountered.
b. If \(X_1X_2...X_n \Rightarrow^* \epsilon\), add \(\epsilon\) to the first set.
**Calculating follow sets.** For each nonterminal in the grammar, do the following (a code sketch covering both computations follows the list):
1. Place EOF in Follow(S) where S is the start symbol and EOF is the input's right endmarker. The endmarker might be end of file, newline, or a special symbol, whatever is the expected end of input indication for this grammar. We will typically use $ as the endmarker.
2. For every production A → uBv where u and v are any string of grammar symbols and B is a nonterminal, everything in First(v) except ε is placed in Follow(B).
3. For every production A → uB, or a production A → uBv where First(v) contains ε (i.e. v is nullable), then everything in Follow(A) is added to Follow(B).
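As promised above, here is a compact C sketch that computes nullable, first, and follow sets by iterating the rules to a fixpoint, hard-coded for the example grammar worked below (B' is encoded as the letter P; the encoding and all names are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>

/* Grammar: nonterminals are uppercase letters, terminals lowercase,
   "" encodes an epsilon right-hand side. */
typedef struct { char lhs; const char *rhs; } Prod;

static const Prod G[] = {
    {'S', "AB"}, {'A', "Ca"}, {'A', ""},
    {'B', "cP"}, {'P', "aACP"}, {'P', ""}, {'C', "b"}, {'C', ""},
};
enum { NPROD = sizeof G / sizeof G[0] };

static bool nullable[128];
static bool first[128][128];     /* first[X][t]: terminal t in First(X) */
static bool follow[128][128];

static bool is_nt(char c) { return c >= 'A' && c <= 'Z'; }

int main(void) {
    for (int c = 'a'; c <= 'z'; c++) first[c][c] = true;  /* First(t) = {t} */
    follow['S']['$'] = true;          /* rule 1: $ in Follow(start symbol) */
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < NPROD; i++) {
            const char *u = G[i].rhs; char A = G[i].lhs;
            /* nullable and First: scan the rhs while a nullable prefix lasts */
            bool allNullable = true;
            for (int k = 0; u[k] && allNullable; k++) {
                for (int t = 'a'; t <= 'z'; t++)
                    if (first[(int)u[k]][t] && !first[(int)A][t])
                        first[(int)A][t] = changed = true;
                allNullable = is_nt(u[k]) && nullable[(int)u[k]];
            }
            if (allNullable && !nullable[(int)A]) nullable[(int)A] = changed = true;
            /* Follow rules 2 and 3: for each nonterminal B in u, add the
               First of its tail; if the whole tail is nullable, add Follow(A). */
            for (int k = 0; u[k]; k++) {
                if (!is_nt(u[k])) continue;
                bool tailNullable = true;
                for (int j = k + 1; u[j] && tailNullable; j++) {
                    for (int t = 'a'; t <= 'z'; t++)
                        if (first[(int)u[j]][t] && !follow[(int)u[k]][t])
                            follow[(int)u[k]][t] = changed = true;
                    tailNullable = is_nt(u[j]) && nullable[(int)u[j]];
                }
                if (tailNullable)
                    for (int t = 0; t < 128; t++)
                        if (follow[(int)A][t] && !follow[(int)u[k]][t])
                            follow[(int)u[k]][t] = changed = true;
            }
        }
    }
    const char *nts = "SABPC";
    for (int i = 0; nts[i]; i++) {
        printf("First(%c) = {", nts[i]);
        for (int t = 'a'; t <= 'z'; t++) if (first[(int)nts[i]][t]) printf(" %c", t);
        printf("%s }  Follow(%c) = {", nullable[(int)nts[i]] ? " eps" : "", nts[i]);
        for (int t = 0; t < 128; t++) if (follow[(int)nts[i]][t]) printf(" %c", (char)t);
        printf(" }\n");
    }
    return 0;
}
```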
Here is a complete example of first and follow set computation, starting with this grammar:
```
S → AB
A → Ca | ε
B → BaAC | c
C → b | ε
```
Notice we have a left-recursive production that must be fixed if we are to use LL(1) parsing:
```
B → BaAC | c becomes B → cB'
B' → aACB' | ε
```
The new grammar is:
```
S → AB
A → Ca | ε
B → cB'
B' → aACB' | ε
C → b | ε
```
It helps to first compute the nullable set (i.e., those nonterminals X that X =>* ε), since you need to refer to the nullable status of various nonterminals when computing the first and follow sets:
```
Nullable(G) = {A B' C}
```
The first sets for each nonterminal are:
```
First(C)  = {b ε}
First(B') = {a ε}
First(B)  = {c}
First(A)  = {b a ε}   Start with First(C) - ε; add a (since C is nullable) and ε (since A itself is nullable)
First(S)  = {b a c}   Start with First(A) - ε; add First(B) (since A is nullable). We don't add ε (S itself is not nullable: A can go away, but B cannot)
```
It is usually convenient to compute the first sets for the nonterminals that appear toward the bottom of the parse tree and work your way upward, since the nonterminals toward the top may need to incorporate the first sets of the nonterminals that appear beneath them in the tree.
To compute the follow sets, take each nonterminal and go through all the right-side productions that the nonterminal is in, matching to the steps given earlier:
Follow(S) = {$}
S doesn’t appear in the right hand side of any productions. We put $ in the follow set because S is the start symbol.
Follow(B) = {$}
B appears on the right hand side of the S -> AB production. Its follow set is the same as S.
Follow(B') = {$}
B' appears on the right hand side of two productions. The B' -> aACB' production tells us its follow set includes the follow set of B', which is tautological. From B -> cB', we learn its follow set is the same as B.
Follow(C) = {a $}
C appears in the right hand side of two productions. The production A -> Ca tells us a is in the follow set. From B' -> aACB', we add the First(B') which is just a again. Because B' is nullable, we must also add Follow(B') which is $.
Follow(A) = {c b a $}
A appears in the right hand side of two productions. From S -> AB we add First(B), which is just c; B is not nullable. From B' -> aACB', we add First(C), which is b. Since C is nullable, we also include First(B'), which is a. B' is also nullable, so we include Follow(B'), which adds $.
It can be convenient to compute the follow sets for the nonterminals that appear toward the top of the parse tree and work your way down, but sometimes you have to circle around computing the follow sets of other nonterminals in order to complete the one you’re on.
The calculation of the first and follow sets follow mechanical algorithms, but it is very easy to get tripped up in the details and make mistakes even when you know the rules. Be careful!
**Table-driven LL(1) Parsing**
In a recursive-descent parser, the production information is embedded in the individual parse functions for each nonterminal, and the run-time execution stack keeps track of our progress through the parse. There is another method for implementing a predictive parser that uses a table to store the production information, along with an explicit stack to keep track of where we are in the parse.
This grammar for add/multiply expressions is already set up to handle precedence and associativity:
\[
\begin{align*}
E & \rightarrow E + T | T \\
T & \rightarrow T * F | F \\
F & \rightarrow (E) | \text{int}
\end{align*}
\]
After removal of left recursion, we get:
\[
\begin{align*}
E & \rightarrow TE' \\
E' & \rightarrow + TE' | \epsilon \\
T & \rightarrow FT' \\
T' & \rightarrow * FT' | \epsilon \\
F & \rightarrow (E) | \text{int}
\end{align*}
\]
One way to illustrate the process is to study some transition graphs that represent the grammar:
A predictive parser behaves as follows. Let’s assume the input string is 3 + 4 * 5. Parsing begins in the start state of the symbol E and moves to the next state. This transition is marked with a T, which sends us to the start state for T. This in turn, sends us to the start state for F. F has only terminals, so we read a token from the input string. It must either be an open parenthesis or an integer in order for this parse to be valid. We consume the integer token, and thus we have hit a final state in the F transition diagram, so we return to where we came from, which is the T diagram; we have just finished processing the F nonterminal. We continue with $T'$, and go to that start state. The current lookahead is + which doesn’t match the * required by the first production, but + is in the follow set for $T'$ so we match the second production which allows $T'$ to disappear entirely. We finish $T'$ and return to $T$, where we are also in a final state. We return to the $E$ diagram where we have just finished processing the $T$. We move on to $E'$, and so on.
A table-driven predictive parser uses a stack to store the productions to which it must return. A parsing table stores the actions the parser should take based on the input token and what value is on top of the stack. $\$ is the end of input symbol.
<table>
<thead>
<tr><th>Input/ Top of parse stack</th><th>int</th><th>+</th><th>*</th><th>(</th><th>)</th><th>$</th></tr>
</thead>
<tbody>
<tr><td>$E$</td><td>$E\rightarrow TE'$</td><td></td><td></td><td>$E\rightarrow TE'$</td><td></td><td></td></tr>
<tr><td>$E'$</td><td></td><td>$E'\rightarrow +TE'$</td><td></td><td></td><td>$E'\rightarrow \varepsilon$</td><td>$E'\rightarrow \varepsilon$</td></tr>
<tr><td>$T$</td><td>$T\rightarrow FT'$</td><td></td><td></td><td>$T\rightarrow FT'$</td><td></td><td></td></tr>
<tr><td>$T'$</td><td></td><td>$T'\rightarrow \varepsilon$</td><td>$T'\rightarrow *FT'$</td><td></td><td>$T'\rightarrow \varepsilon$</td><td>$T'\rightarrow \varepsilon$</td></tr>
<tr><td>$F$</td><td>$F\rightarrow \text{int}$</td><td></td><td></td><td>$F\rightarrow (E)$</td><td></td><td></td></tr>
</tbody>
</table>
**Tracing**
Here is how a predictive parser works. We push the start symbol on the stack and read the first input token. As the parser works through the input, there are the following possibilities for the top stack symbol $X$ and the input token $a$ using table $M$ (a code sketch of this loop follows the list):
1. If $X = a$ and $a = \$$ (end of input), the parser halts and the parse is completed successfully.
2. If $X = a$ and $a \neq \$$, successful match; pop $X$ and advance to the next input token. This is called a *match* action.
3. If $X \neq a$ and $X$ is a nonterminal, pop $X$ and consult table at $M[X,a]$ to see which production applies, push right side of production on stack. This is called a *predict* action.
4. If none of the preceding cases applies or the table entry from step 3 is blank, there has been a parse error.
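As promised above, here is a compact sketch of this driver loop for the expression grammar, hard-coding the parse table as a function; `i` abbreviates the token int, and `e`/`t` stand in for E'/T':

```c
#include <stdio.h>
#include <string.h>

static const char *predict(char X, char a) {   /* the table entry M[X,a] */
    switch (X) {
    case 'E': if (a == 'i' || a == '(') return "Te"; break;
    case 'e': if (a == '+') return "+Te";
              if (a == ')' || a == '$') return ""; break;
    case 'T': if (a == 'i' || a == '(') return "Ft"; break;
    case 't': if (a == '*') return "*Ft";
              if (a == '+' || a == ')' || a == '$') return ""; break;
    case 'F': if (a == 'i') return "i";
              if (a == '(') return "(E)"; break;
    }
    return NULL;                               /* blank entry: parse error */
}

int main(void) {
    const char *input = "i+i*i$";              /* int + int * int $ */
    char stack[64];
    int top = 0, pos = 0;
    stack[top++] = '$';
    stack[top++] = 'E';                        /* push the start symbol */
    while (top > 0) {
        char X = stack[--top];                 /* pop the top symbol */
        char a = input[pos];
        if (strchr("i+*()$", X)) {             /* X is a terminal */
            if (X != a) { puts("parse error: mismatch"); return 1; }
            if (a == '$') { puts("success!"); return 0; }   /* case 1 */
            pos++;                             /* case 2: match action */
        } else {                               /* case 3: predict action */
            const char *rhs = predict(X, a);
            if (!rhs) { puts("parse error: blank table entry"); return 1; }
            for (int k = (int)strlen(rhs) - 1; k >= 0; k--)
                stack[top++] = rhs[k];         /* push the rhs reversed */
        }
    }
    return 1;
}
```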
Here is an example parse of the string $\text{int + int * int}$:
<table>
<thead>
<tr><th>Parse stack</th><th>Remaining input</th><th>Parser action</th></tr>
</thead>
<tbody>
<tr><td>E$</td><td>int + int * int$</td><td>Predict \( E \rightarrow TE' \), pop E from stack, push \( TE' \), no change in input</td></tr>
<tr><td>TE'$</td><td>int + int * int$</td><td>Predict \( T \rightarrow FT' \)</td></tr>
<tr><td>FT'E'$</td><td>int + int * int$</td><td>Predict \( F \rightarrow \text{int} \)</td></tr>
<tr><td>intT'E'$</td><td>int + int * int$</td><td>Match int, pop from stack, move ahead in input</td></tr>
<tr><td>T'E'$</td><td>+ int * int$</td><td>Predict \( T' \rightarrow \epsilon \)</td></tr>
<tr><td>E'$</td><td>+ int * int$</td><td>Predict \( E' \rightarrow +TE' \)</td></tr>
<tr><td>+TE'$</td><td>+ int * int$</td><td>Match +, pop</td></tr>
<tr><td>TE'$</td><td>int * int$</td><td>Predict \( T \rightarrow FT' \)</td></tr>
<tr><td>FT'E'$</td><td>int * int$</td><td>Predict \( F \rightarrow \text{int} \)</td></tr>
<tr><td>intT'E'$</td><td>int * int$</td><td>Match int, pop</td></tr>
<tr><td>T'E'$</td><td>* int$</td><td>Predict \( T' \rightarrow *FT' \)</td></tr>
<tr><td>*FT'E'$</td><td>* int$</td><td>Match *, pop</td></tr>
<tr><td>FT'E'$</td><td>int$</td><td>Predict \( F \rightarrow \text{int} \)</td></tr>
<tr><td>intT'E'$</td><td>int$</td><td>Match int, pop</td></tr>
<tr><td>T'E'$</td><td>$</td><td>Predict \( T' \rightarrow \epsilon \)</td></tr>
<tr><td>E'$</td><td>$</td><td>Predict \( E' \rightarrow \epsilon \)</td></tr>
<tr><td>$</td><td>$</td><td>Match $, pop, success!</td></tr>
</tbody>
</table>

Suppose, instead, that we were trying to parse the input +$. The first step of the parse would give an error because there is no entry at \( M[E, +] \).

**Constructing The Parse Table**

The next task is to figure out how we built the table. The construction of the table is somewhat involved and tedious (the perfect task for a computer, but error-prone for humans). The first thing we need to do is compute the first and follow sets for the grammar:

\[
\begin{align*}
E & \rightarrow TE' \\
E' & \rightarrow + TE' | \epsilon \\
T & \rightarrow FT' \\
T' & \rightarrow * FT' | \epsilon \\
F & \rightarrow (E) | \text{int}
\end{align*}
\]
First(E) = First(T) = First(F) = { ( int }
First(T') = { * ε }
First(E') = { + ε }
Follow(E) = Follow(E') = { $ ) }
Follow(T) = Follow(T') = { + $ ) }
Follow(F) = { * + $ ) }
Once we have the first and follow sets, we build a table M with the leftmost column labeled with all the nonterminals in the grammar, and the top row labeled with all the terminals in the grammar, along with $. The following algorithm fills in the table cells:
1. For each production A -> u of the grammar, do steps 2 and 3
2. For each terminal a in First(u), add A -> u to M[A,a]
3. If ε is in First(u) (i.e., A is nullable), add A -> u to M[A,b] for each terminal b in Follow(A). If ε is in First(u) and $ is in Follow(A), add A -> u to M[A,$]
4. All undefined entries are errors
The concept used here is to consider a production A -> u with a in First(u). The parser should expand A to u when the current input symbol is a. It’s a little trickier when u = ε or u =>* ε. In this case, we should expand A to u if the current input symbol is in Follow(A), or if the $ at the end of the input has been reached, and $ is in Follow(A).
If the procedure ever tries to fill in an entry of the table that already has a non-error entry, the procedure fails — the grammar is not LL(1).
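A small sketch of this filling procedure for the expression grammar, including the conflict check. The first and follow sets are hard-coded from the values computed above, and since the only nullable right-hand sides in this grammar are the ε-productions themselves, an empty rhs is used as the nullability test (a simplification); `e`/`t` again stand in for E'/T' and `i` for int:

```c
#include <stdio.h>
#include <string.h>

typedef struct { char lhs; const char *rhs; const char *first; } Prod;

static const Prod G[] = {
    {'E', "Te",  "i("}, {'e', "+Te", "+"}, {'e', "",  ""},
    {'T', "Ft",  "i("}, {'t', "*Ft", "*"}, {'t', "",  ""},
    {'F', "(E)", "("},  {'F', "i",   "i"},
};

static const char *follow(char A) {            /* as computed above */
    switch (A) {
    case 'E': case 'e': return ")$";
    case 'T': case 't': return "+)$";
    default:            return "*+)$";         /* F */
    }
}

int main(void) {
    static const char nts[] = "EeTtF", terms[] = "i+*()$";
    const char *M[5][6] = { { 0 } };
    for (unsigned p = 0; p < sizeof G / sizeof G[0]; p++) {
        /* step 2: each a in First(u); step 3: each b in Follow(A) if nullable */
        const char *occ = G[p].rhs[0] ? G[p].first : follow(G[p].lhs);
        for (const char *a = occ; *a; a++) {
            int r = (int)(strchr(nts, G[p].lhs) - nts);
            int c = (int)(strchr(terms, *a) - terms);
            if (M[r][c]) { puts("conflict: grammar is not LL(1)"); return 1; }
            M[r][c] = G[p].rhs[0] ? G[p].rhs : "eps";
        }
    }
    for (int r = 0; r < 5; r++)
        for (int c = 0; c < 6; c++)
            if (M[r][c])
                printf("M[%c,%c] = %c -> %s\n", nts[r], terms[c], nts[r], M[r][c]);
    return 0;
}
```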
**LL(1) Grammars: Properties**
These predictive top-down techniques (either recursive-descent or table-driven) require a grammar that is LL(1). One fully-general way to determine if a grammar is LL(1) is to build the table and see if you have conflicts. In some cases, you will be able to determine that a grammar is or isn’t LL(1) via a shortcut (such as identifying obvious left-factors). To give a formal statement of what is required for a grammar to be LL(1):
- No ambiguity
- No left recursion
- A grammar G is LL(1) iff whenever A -> u | v are two distinct productions of G, the following conditions hold:
- for no terminal a do both u and v derive strings beginning with a (i.e., first sets are disjoint)
- at most one of u and v can derive the empty string
- if v =>* ε then u does not derive any string beginning with a terminal in Follow(A) (i.e., first and follow must be disjoint if nullable)
All of this translates intuitively that when trying to recognize A, the parser must be able to examine just one input symbol of lookahead and uniquely determine which production to use.
**Error Detection and Recovery**
A few general principles apply to errors found regardless of parsing technique being used:
- A parser should try to determine that an error has occurred as soon as possible. Waiting too long before declaring an error can cause the parser to lose the actual location of the error.
- A suitable and comprehensive message should be reported. “Missing semicolon on line 36” is helpful, “unable to shift in state 425” is not.
- After an error has occurred, the parser must pick a reasonable place to resume the parse. Rather than giving up at the first problem, a parser should always try to parse as much of the code as possible in order to find as many real errors as possible during a single run.
- A parser should avoid *cascading errors*, which is when one error generates a lengthy sequence of spurious error messages.
Recognizing that the input is not syntactically valid can be relatively straightforward. An error is detected in predictive parsing when the terminal on top of the stack does not match the next input symbol or when nonterminal A is on top of the stack, a is the next input symbol and the parsing table entry M[A,a] is empty.
Deciding how to handle the error is a bit more complicated. By inserting specific error actions into the empty slots of the table, you can determine how a predictive parser will handle a given error condition. At the least, you can provide a precise error message that describes the mismatch between what was expected and what was found.
Recovering from errors and being able to resume and successfully parse is more difficult. The entire compilation could be aborted on the first error, but most users would like to find out more than one error per compilation. The problem is how to fix the error in some way to allow parsing to continue.
Many errors are relatively minor and involve syntactic violations for which the parser has a correction that it believes is likely to be what the programmer intended. For example, a missing semicolon at the end of the line or a misspelled keyword can usually be recognized. For many minor errors, the parser can "fix" the program by guessing at what was intended and reporting a warning, but allowing compilation to proceed unhindered. The parser might skip what appears to be an erroneous token in the input or insert a necessary, but missing, token or change a token into the one expected (substituting BEGIN for BGEIN). For more major or complex errors, the parser may have no reliable correction. The parser will attempt to continue but will probably have to skip over part of the input or take some other exceptional action to do so.
_Panic-mode_ error recovery is a simple technique that just bails out of the current construct, looking for a safe symbol at which to restart parsing. The parser just discards input tokens until it finds what is called a _synchronizing_ token. The set of synchronizing tokens are those that we believe confirm the end of the invalid statement and allow us to pick up at the next piece of code. For a nonterminal $A$, we could place all the symbols in $\text{Follow}(A)$ into its synchronizing set. If $A$ is the nonterminal for a variable declaration and the garbled input is something like `double d;` the parser might skip ahead to the semicolon and act as though the declaration didn’t exist. This will surely cause some more cascading errors when the variable is later used, but it might get through the trouble spot. We could also use the symbols in $\text{First}(A)$ as a synchronizing set for re-starting the parse of $A$. This would allow input `junk double d;` to parse as a valid variable declaration.
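In the style of the earlier routines, a panic-mode helper might look like the sketch below; `T_EOF` is an assumed token name for the endmarker, and `syncSet` would hold the synchronizing set, e.g., the tokens of Follow(A):

```c
extern int lookahead;            // global token, as in the earlier routines
extern int yylex(void);
#define T_EOF 0                  // assumed token code for the endmarker

void SyncTo(const int syncSet[], int n)
{
    while (lookahead != T_EOF) {
        for (int i = 0; i < n; i++)
            if (lookahead == syncSet[i])
                return;          // found a synchronizing token; resume here
        lookahead = yylex();     // discard the erroneous token
    }
}
```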
Simulation-based Feature Selection for Software Requirements Baseline
Rabeb Mizouni
Department of Software Engineering, Khalifa University
Abu Dhabi, United Arab Emirates
rabeb.mizouni@kustar.ac.ae
Sanja Lazarova-Molnar
Faculty of Information Technology, United Arab Emirates University, PO Box 17551,
Al Ain, United Arab Emirates
sanja@uaeu.ac.ae
Abstract—Requirements baseline is the set of features intended to be delivered in a specific version of a software application under development. During this decade, the constant growth of software products, along with the evident pressure on time to market, has made the selection of features a crucial step for software project success. It is both a challenging and time-consuming process that requires substantial expertise from project managers. Prioritization of features is one of the means that help in making the choice. It is typically performed by grouping features into three priority levels: critical, important, and useful. Critical and important features are seen as “must have”, while useful features are qualified as “nice to have”. Paradoxically, the latter play an important role in customer satisfaction and in achieving the “wow” factor. A good selection of useful features efficiently identifies those features that can be delivered by the end of the project without any additional delay. So far, managers have had little support in this process, increasing the chances of making a poor selection. To answer this need, we propose a new modeling and simulation approach that takes into account feature priorities and calculates the probabilities of having useful features implemented within the timeframe of the project. It also incorporates uncertainties related to human resource availability, providing a more realistic schedule and estimation.
Index Terms—Requirements Baseline, Feature Selection, Features Priority, Simulation, Proxel-based Simulation
I. INTRODUCTION
Project management is the discipline of planning, organizing, and managing resources to achieve specific project goals and objectives. It is the activity that uses schedules to plan and subsequently report progress within the project environment. In the initial project stages, project managers are usually concerned with defining initial project plans and the requirements baseline, where the project mission as well as the project schedule plan is identified [1]. This initial plan is used as a basis on which delivery commitments are made. Therefore, constructing credible initial plans that provide a good estimation of completion dates helps project success and may avoid customers’ disappointment.
A. Problem Statement
With the ever-growing size of software projects, managers are facing the growing problem of feature selection. A product feature is defined as a set of logically related requirements that provide certain functionality to the software and enable the fulfillment of business objectives. Thus, feature selection addresses the problem of selecting features that can be implemented within project constraints, such as human resources, budget, and time. Typically, the set of features that should be implemented in each release is identified in the initial plan. However, this process is not yet well controlled, and even though feature selection is carried out based on the available project resources, in many cases managers fail to deliver all the features they first promised to their clients. In fact, activities in a real industrial project may take more time than their original estimate, leading to a delay in other activities due to resource unavailability [2], which may cause delays in features’ implementation. Consequently, even good projects fail [3], particularly so in the software industry, where statistics show that approximately one-third of software projects fail to deliver anything, and another third deliver something workable but not satisfactory [4]. This failure is mainly due to poor upfront planning, late re-planning, and non-tracked planning.
“Planning and control” has been named as one of the top three factors that influence project success, besides “project objectives” and “personnel and team building” [5]. In addition, it is well recognized today that the quality and depth of early planning is a common element of most successful projects. Such plans should remain of high quality even when the environment deviates due to uncertainties [6]. They also accurately define the product features to deliver within the timeframe of the project, which become the initial baseline for product design.
Requirements baseline is the set of features intended to be delivered in a specific version of the application. This baseline represents a contract between the customer and the development team. It reflects, from the customer's point of view, the features to be delivered.
A cancelable task is defined as a project task that can be postponed or canceled, i.e.
- postponed, if:
1) resources needed to implement the task are not available, or
2) there is a task with higher priority that is not implemented yet and that needs the resources of the cancelable task, and
- canceled, if:
3) the project runs out of budget and time.

In this paper we propose a new simulation approach that helps managers in the selection of useful features, as shown in Figure 2. Our approach promotes proactive scheduling that takes the available human resources into consideration to schedule each task's execution with respect to the priority of the feature it implements. Analogous to feature priorities, our model distinguishes between cancelable and non-cancelable tasks.
To achieve our goal, we enhance the project schedule model to enable the calculation of the probability of having a certain task implemented by a given user deadline. Once the simulation results are available, the probabilities of completing each cancelable task are provided to the manager. These probabilities may act as a new selection criterion and help managers make a judicious choice of the features to include in their baseline. The resulting baseline will have a higher probability of being implemented and delivered within the given deadline.
We use Gantt charts for modeling initial project schedules and displaying the precedence constraints. We extend this formalism to include cancelable tasks and to calculate the probability of their completion. We choose the proxel-based method for simulating project schedules for its flexibility and accuracy [10]. Namely, with a single simulation run, the method provides the complete transient solution of a stochastic model, showing its behavior at every point in the discretized timeline. The method has been successfully applied to the simulation of classical project schedules [11].
C. Paper Organization
The rest of the paper is organized as follows. Section II presents the related work on feature selection as well as on simulation models in project scheduling. Section III defines different project tasks according to feature priority levels. It also presents the proxel-based simulation. Section IV illustrates our approach with a simulation example that shows how our model can help the selection of useful features. Section V concludes the paper and outlines our future work.
II. RELATED WORK
Unfortunately, major project planning software packages are too rigid when it comes to defining project schedules. Moreover, many of the analysis methods and tools oversimplify the uncertainty in projects and thus provide inaccurate results [12], [13], [14].
In [16], the authors present a new framework, NextMove, that assists project managers in allocating and managing tasks in an agile, distributed development environment. The framework considers the team backlog as well as requirement priorities to help project teams track, coordinate and communicate tasks in a distributed development environment. However, the simulation model used does not consider the emergent uncertainties that any software project often encounters. To allow realistic definition and analysis, project schedules have to anticipate high uncertainty and should also provide recommendations to aid the decision making process in the various possible uncertain scenarios. This is what we term an Enhanced Project Schedule, for whose generation we have already developed a framework [15].
On the other hand, because of the importance of release planning, many researchers have addressed the assignment of requirements to a sequence of releases. The authors in [17] analyze the effects of defect and effort re-estimation in the process of release re-planning. Each planned release has a limited effort capacity, which limits the number of features that can be implemented during that release. In the example they present, the features of the baseline were chosen according to their respective priorities. However, uncertainties that may arise are again not taken into account, adversely affecting the feature selection. In [18], the authors present a six-step process model for release planning, termed EVOLVE. This approach takes into account stakeholder priorities as well as effort constraints for all releases. While their approach supports the feature selection decision according to priorities, it does not include resource constraints and other aspects of task scheduling, which impairs the construction of a robust schedule.
In our previous work [19], [20], we aimed at providing a more realistic model and more accurate predictions of the durations of project schedules. We developed a new type of activity in project schedules, termed the “floating task”. A floating task is a task that anticipates high uncertainty and is highly flexible in terms of its human resource allocation. In addition, this task has the property of being flexible in its order of execution with respect to other tasks. In this paper, we extend our model to help managers decide on feature selection and answer the following three challenges:
1) How can project scheduling differentiate between nice-to-have features and vital features?
2) How to take full advantage of the global resource pool and try to implement the maximum number of nice-to-have features within the timeframe of the project?
3) How can simulation-based tools guide the manager in her/his choice of nice-to-have features to deliver in each release?
III. PROJECT SCHEDULING: A NOVEL MODEL
To provide a more realistic modeling of software project schedules, we consider a multi-modal task representation where each activity can be processed in one of several modes. Each mode describes a task implementation option in terms of duration and human resource allocation. We introduce several types of project tasks to enable proper mapping of each feature priority to a task in the project schedule.
### Table 1: Mapping between Feature Priority and Task Nature
<table>
<thead>
<tr>
<th>Feature Type</th>
<th>Non-Cancelable with Fixed Human Resource Allocation</th>
<th>Non-Cancelable with Multimodal Human Resource Allocation</th>
<th>Cancelable with Fixed Human Resource Allocation</th>
<th>Cancelable with Multimodal Human Resource Allocation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Critical</td>
<td>X</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Important</td>
<td>X</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Useful</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
</tbody>
</table>
### A. Assumptions
In the following we list the set of assumptions that our model is based upon:
1. Managers are provided with a pool of human resources with different degrees of productivity to perform various types of tasks.
2. Each team is responsible for completing the implementation of a feature as a whole.
3. Tasks are sharing human resources and their implementation is subject to certain precedence order.
4. A change in the team structure is considered as a change of the whole team. The new team has new characteristics and hence may take less or more time to implement its assigned tasks.
5. Duration of a task is modeled by a probability distribution function. Input probability distribution functions can be fitted based on historical data for similar tasks and situations and may be adapted to concrete situations of projects. The estimation process would, obviously, require a high level of expertise.
6. A feature can be implemented by various teams. Each team has a specified probability distribution to implement it according to its expertise.
7. Features are correctly prioritized. Change of prioritization [21], as it may happen during the project implementation due to addition/removal/change of other features, is not observed in our current model.
8. The type of a project task depends on the priority of the feature that the task is representing.
### B. Tasks Definition
We propose to distinguish tasks so that they reflect the priority of the features they are implementing. Two factors are to be considered: 1) flexibility of the order of execution in the schedule, and 2) flexibility in the human resource allocation. Accordingly, we distinguish two types of tasks, non-cancelable and cancelable, defined as follows:
1. **Non-Cancelable Tasks**: tasks that may have flexibility in human resource allocation, but cannot be canceled. In fact, when the teams responsible for implementing a non-cancelable task are either unavailable or solicited for tasks with higher priority, the task can be postponed but never canceled. Resource allocation for non-cancelable tasks can follow a fixed strategy, where only one team can be assigned to implement them, or a multi-modal strategy, where several teams can be assigned to implement them according to team availability.
2. **Cancelable Tasks**: tasks that have flexibility in human resource allocation and can be canceled. When the teams responsible for implementing a cancelable task are either unavailable or solicited to implement tasks with higher priority, and the project is overdue, the task is canceled. Resource allocation for cancelable tasks can likewise follow a fixed strategy, where only one team can be responsible for implementing the task, or a multi-modal strategy, where several teams can be responsible for implementing it according to their availability.
Table 1 presents a possible mapping between feature priority levels and their potential assigned tasks. Let us outline some facts:
1. As expected, critical and important features can never be modeled as cancelable tasks. Only nice-to-have features can be canceled.
2. While it is possible to model useful features with non-cancelable tasks, we do not recommend it, as doing so prevents the model from gaining flexibility in the task execution order.
3. Critical and important features have to be modeled as non-cancelable tasks, with either fixed or multimodal human resource allocation. We believe that this depends on the risk level of the feature. When the risk associated with the feature is high, it is more judicious to assign its implementation to an experienced team and model it as a non-cancelable task with fixed resource allocation. However, when the feature is critical but presents low risk, we can tolerate more flexibility in its implementation.
Let us formally define a task.
Let \( R = \{ R_i, 1 \leq i \leq n \} \) be the set of \( n \) features of the software under implementation. Let \( T = \{ T_i, 1 \leq i \leq m \} \) be the set of \( m \) teams (representing the human resources available for that software project).
**Definition 1: Task**
A task is a 3-tuple \( task = (R_i, D, type) \) where:
1. \( R_i \in R \) is the feature the task is implementing,
2. \( D = \{ D_{ij} \mid T_j \in T \} \) is the set of probability distribution functions, where \( D_{ij} \) describes the duration of team \( T_j \) implementing feature \( R_i \), and
3. \( type \in \{ cancelable, non\text{-}cancelable \} \) is the type of the task.
In the case of the fixed human resource allocation strategy, \( D \) is a singleton.
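To make the definition concrete, here is a minimal Python sketch of this structure (the class and field names are ours, not the paper's; distributions are represented as plain callables that sample a duration):

```python
import random
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict

class TaskType(Enum):
    CANCELABLE = "cancelable"
    NON_CANCELABLE = "non-cancelable"

@dataclass
class Task:
    feature: str                       # R_i, the feature the task implements
    task_type: TaskType                # cancelable or non-cancelable
    # D: one duration distribution per team that may implement the feature;
    # a singleton dict models the fixed-allocation strategy.
    durations: Dict[str, Callable[[], float]] = field(default_factory=dict)

# Fixed allocation: a single team, with a uniformly distributed duration.
task1 = Task("Feature1", TaskType.NON_CANCELABLE,
             {"Team A": lambda: random.uniform(2.0, 10.0)})
```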
C. Enhanced Project Schedule
Ideally, the nature of tasks needs to be taken into consideration when simulating the project schedule. To achieve this goal, we propose to extend the project schedule with fuzzy rules [22, 23]. In the context of our approach, fuzzy rules are conditional statements that express potential deviations to the initial project plan and remedial actions that should be undertaken consequently.
Each fuzzy rule is made up of two parts: condition and action, formally written as “condition $\Rightarrow$ action”. Conditions can be described either by using strict terms, or fuzzy ones. Actions can typically be canceling or interrupting some of the tasks, or one of the various types of rescheduling. Using these fuzzy rules makes our schedule description evolving, rather than rigid and inflexible. An example of a fuzzy rule would be:
$$\text{Task}_x \text{ takes too long } \Rightarrow \text{ cancel Task}_y$$
or
$$\text{Task}_x \text{ completes quickly after Task}_y \Rightarrow \text{ cancel Task}_x.$$
Both are examples of typical actions during project execution. However, in our approach we allow for their modeling, assessment and quantitative evaluation. The fuzzy rules are in fact the interpretation of the task type on the simulation schedule. They should be consistent with the type of the tasks specified by the analyst. As an example, a rule can never recommend canceling a non-cancelable task, as such tasks represent critical and important features. Consequently, the two rules mentioned above are correct only if Task $x$ and Task $y$ are cancelable tasks.
**Definition 2: Enhanced Project Schedule**
An Enhanced Project Schedule (EPS) is a 5-tuple $\text{EPS} = (\text{Tasks}, P, T, F, \text{Initial})$ where:
1) $\text{Tasks} = \{\text{Task}_1, \text{Task}_2, ..., \text{Task}_n\}$, set of tasks in the project schedule
2) $P = \{P_1, P_2, ..., P_m\}$, where $P \subseteq \text{Tasks} \times \text{Tasks}$ is the set of tuples representing the tasks precedence constraints. The tuple ($\text{Task}_x, \text{Task}_y$) would mean that completing $\text{Task}_x$ is a pre-requisite for beginning $\text{Task}_y$.
3) $T = \{T_1, T_2, ..., T_n\}$ set of teams available for the project implementation
4) $F = \{F_1, F_2, ..., F_l\}$ set of fuzzy rules that are in line with features’ priorities.
5) $\text{Initial}$ set of possible starting points of the project implementation.
Note that $\text{Initial}$ represents the assignments of tasks to the different teams at the starting point of the project implementation. $\text{Initial}$ can be either a singleton, where we have only one possible starting point to the project implementation or a set of different possible starting points. $\text{Initial}$ can also be a Gantt Chart that determines the initial assignment of tasks to available teams, as well as a possible initial order of execution.
As shown in Figure 3, the generation of EPS is based on the feature precedence constraints and the nature of the tasks. As mentioned previously, any rule specified within the fuzzy rules should not violate them.
**D. Proxel-Based Simulation for Feature Selection**
The proxel-based method is a simulation method based on the method of supplementary variables [24]. It was introduced and formalized in [10, 25]. The advantages of the proxel-based method are its flexibility in analyzing stochastic models that can have complex dependencies, and the accuracy of its results, which is comparable to that of Markov chain numerical solvers [26]. It has been successfully applied to project schedule simulation [11], and due to its flexibility it is highly suitable for modeling and simulating project schedules with the additional complexity of re-scheduling, governed by the inclusion of fuzzy rules.
The proxel-based method is based on expanding the definition of a state by including additional parameters which trace relevant quantities in one model following a previously chosen time step. Typically, this includes, but is not limited to, age intensities of the relevant transitions. The expansion implies that all parameters pertinent for calculating probabilities for future development of a model are identified and included in the state definition of a model.
Proxels (short for probability elements), as basic computational units of the algorithm, dynamically follow all possible expansions of one model. The state-space of the model is built on-the-fly, as illustrated in Figure 4, by observing every possible transiting state and assigning a probability value to it (Pr in the figure stands for the probability value of the proxel). Basically, the state space is built by observing all possible options of what can happen at the next time step. The first option is for the model to transit to another discrete state in the next time step, according to the associated transitions. The second option is that the model stays in the same discrete state, which results in a new proxel too. Zero-probability states are not stored and, as a result, not further investigated. This implies that only the truly reachable (i.e. tangible) states of the model are stored and consequently expanded.
At the end of a proxel-based simulation run, a transient solution is obtained which outlines the probability of every state at every point in time, as discretized through the chosen size of the time step. It is important to notice that one source of error of the proxel-based method comes from the assumption that the model makes at most one state change within one time step. This error is elaborated in [10].
Each proxel carries the probability of the state that it describes (denoted as Pr in Figure 4). Probabilities are calculated using the instantaneous rate function (IRF), also known as hazard rate function. IRF approximates the probability that an event will happen within a predetermined elementary time step, given that it has been pending for a certain amount of time \( \tau \) (indicated as ‘age intensity’). It is calculated from the probability density function \( f \) and the cumulative distribution function \( F \) using the following formula:
\[
\mu(\tau) = \frac{f(\tau)}{1 - F(\tau)} \tag{1}
\]
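As a quick illustration of Equation (1), the IRF can be evaluated directly from a distribution's density and survival function. A minimal sketch using SciPy (our choice of library, not one the paper prescribes):

```python
from scipy import stats

def irf(dist, tau):
    """Instantaneous rate function mu(tau) = f(tau) / (1 - F(tau))."""
    return dist.pdf(tau) / dist.sf(tau)  # sf(tau) is exactly 1 - cdf(tau)

# Example: a Normal(6.0, 1.0) duration (mean 6, std. dev. 1, our reading).
dist = stats.norm(loc=6.0, scale=1.0)
print(irf(dist, 6.0))  # hazard of the event firing once it has aged 6 units
```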
Like all state-space-based methods, this method suffers from the state-space explosion problem [27], but the explosion can be predicted and controlled by calculating the lifetimes of the discrete states in the model. In addition, its efficiency and accuracy can be further improved by employing discrete phases and extrapolation of solutions [28]. More on the proxel-based method can be found in [10].
Figure 4: Illustration of the development of the proxel-based simulation algorithm
For our purpose we extended the original proxel-based simulation algorithm to account for the fuzzy scenarios (shown by the $p_{fuzzy}$ variable in Figure 4). They fitted straightforwardly into the existing framework. In addition, the algorithm was adapted to collect statistics about the probability of having a certain feature implemented.
The general simplified proxel format is the following:
$$Proxel = (State, t, Pr)$$
where:
- $State = (Task Vector, Age Vector, Completed Tasks)$, and
- $Task Vector$ is a vector whose size is equal to the number of teams available and records the task that each team is working on,
- $Age Vector$ tracks how long each team has been working on the task specified in the Task Vector, correspondingly,
- $Completed Tasks$ stores the set of completed tasks,
- $t$ is the time at which the afore-described state is observed, and
- $Pr$ stores the probability that the schedule is in the afore-specified state at time $t$.
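A direct transcription of this proxel format into Python might look as follows (a sketch; the class and field names are ours). Keeping the probability outside the hashable state makes merging proxels that describe the same state a plain dictionary update:

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Proxel:
    task_vector: Tuple[str, ...]   # the task each team is working on
    age_vector: Tuple[int, ...]    # time steps each team has spent on it
    completed: FrozenSet[str]      # the set of completed tasks
    t: int                         # time step at which the state is observed

# Pr is stored alongside, so same-state proxels merge by summation.
initial = Proxel(("Task1", "Task3"), (0, 0), frozenset(), 0)
probabilities = {initial: 1.0}
```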
Algorithm 1 demonstrates the on-the-fly building of the state-space of the project schedule model. Thus, there is no need for any pre-processing to generate the state-space; it is derived directly from the input file specification. The initial-state proxel is likewise derived from the initial state specified in the input file. The algorithm operates by using two interchangeable data structures, Proxel_Tree[0] and Proxel_Tree[1], that store the proxels from two subsequent time steps (regulated by the switch variable). If two proxels represent the same state, only one proxel is stored, and their corresponding probabilities are summed up.
For collecting statistics about useful features (cancelable tasks), we introduce rewards in the simulation model that are associated with the event of completion of a task. This provides us with a probabilistic assessment of the completion of a task, subject to the fuzzy scenarios associated with the project schedule. Finally, the obtained results are probability functions of time that show the probabilities of having each task completed. The ones that are most relevant for our approach are those of the cancelable tasks as they provide us with insight useful to the selection of features to be implemented within the timeframe of the project or the release. More precisely, we are looking for the ranking of probabilities of having each “nice-to-have” feature implemented.
### Algorithm 1: Proxel-based simulation of enhanced project schedules
| Input: | EPS, Project Goals |
| Output: | Simulation Results |
```plaintext
switch = 0
insert Initial State Proxel in Proxel_Tree[switch]
while (maximum simulation time has not been reached)
{
    while (Proxel_Tree[switch] is not empty)
    {
        px = get_proxel(Proxel_Tree[switch]);
        for (each task in the Task Vector(px))
        {
            check task precedence & team availability;
            generate next state S;
            compute probability for S in computed_prob;
            search for S in Proxel_Tree[1-switch];
            if (S found)
            {
                px1 = found_proxel(S);
                probability(px1) = probability(px1) + computed_prob;
            }
            else
            {
                generate new proxel px2(S);
                insert px2 in Proxel_Tree[1-switch];
            }
        }
        delete px from Proxel_Tree[switch];
    }
    increase simulation time by one time step;
    calculate statistics with respect to project goals;
    switch = 1 - switch;
}
```
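The two interchangeable proxel trees can be realized as two dictionaries keyed by state and merged by probability summation. A minimal Python sketch of the loop, assuming a `successors(proxel)` function that yields (next-state proxel, transition probability) pairs consistent with the precedence constraints and fuzzy rules:

```python
def simulate(initial, successors, max_steps):
    """Skeleton of the proxel loop: trees[switch] holds the current time
    step, trees[1 - switch] collects the proxels of the next one."""
    trees = [{initial: 1.0}, {}]
    switch = 0
    for _ in range(max_steps):
        current, nxt = trees[switch], trees[1 - switch]
        for proxel, prob in current.items():
            for succ, p in successors(proxel):
                # Proxels describing the same state are merged by summation.
                nxt[succ] = nxt.get(succ, 0.0) + prob * p
        current.clear()          # the processed time step is discarded
        switch = 1 - switch      # swap the two data structures
    return trees[switch]
```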
### Table 2: Features vs. Tasks and Human Resources Allocation
<table>
<thead>
<tr>
<th>Features</th>
<th>Priority</th>
<th>Precedence Constraints</th>
<th>Human Resources Allocation</th>
<th>Task</th>
<th>Task Nature</th>
</tr>
</thead>
<tbody>
<tr>
<td>Feature 1</td>
<td>Critical</td>
<td>Null</td>
<td>Team A</td>
<td>Task 1</td>
<td>Non-cancelable</td>
</tr>
<tr>
<td>Feature 2</td>
<td>Critical</td>
<td>Feature1</td>
<td>Team B</td>
<td>Task 2</td>
<td>Non-cancelable</td>
</tr>
<tr>
<td>Feature 3</td>
<td>Important</td>
<td>Null</td>
<td>Team B</td>
<td>Task 3</td>
<td>Non-cancelable</td>
</tr>
<tr>
<td>Feature 4</td>
<td>Useful</td>
<td>Feature3</td>
<td>Team A or B</td>
<td>Task 4</td>
<td>Cancelable</td>
</tr>
<tr>
<td>Feature 5</td>
<td>Useful</td>
<td>Null</td>
<td>Team B</td>
<td>Task 5</td>
<td>Cancelable</td>
</tr>
<tr>
<td>Feature 6</td>
<td>Useful</td>
<td>Null</td>
<td>Team A</td>
<td>Task 6</td>
<td>Cancelable</td>
</tr>
</tbody>
</table>
The adapted proxel-based simulation method is illustrated in Subsection IV.B, where, using our example project schedule model, we show the simulation process step by step.
IV. EXPERIMENTS
We consider a general example of a project schedule that contains six features: two critical features, one important feature and three useful features. Based on their priorities, each feature is mapped to a task (Task1, Task2, Task3, Task4, Task5, and Task6) and assigned a task nature (i.e. cancelable or non-cancelable). We consider that the project has two teams available: Team A and Team B. Tasks 1, 2, 3, 5 and 6 have fixed human resource allocation, while Task4 can be implemented by either Team A or Team B. Table 2 summarizes the mapping between the features and tasks as well as the human resource allocation. Notice that Feature2 cannot be implemented until Feature1 is completed, and Feature4 until Feature3 is completed, which adds precedence constraints to our model.
A. Model Specifications
The purpose of the simulation is to help managers determine which of the three useful features have a higher probability of being implemented within the project deadline, under the fuzzy rule constraints and with respect to team availability, and hence of being part of the project baseline. The Gantt chart of the sample project schedule is shown in Figure 5, where the blue-colored tasks are cancelable. In addition, the project schedule has a predefined deadline Δ=15.
The project schedule is described as an enhanced project schedule, which means it also features a fuzzy scenario under whose conditions it is observed. The fuzzy scenario increases the degree of uncertainty covered and creates a dynamically evolving project schedule based on the initial one. In our case, the fuzzy scenario is defined as follows:

If the duration of the project is close to the deadline Δ and all non-cancelable tasks are completed, then do not start any cancelable task that is next in line, and do not interrupt the other team if it has already started to work on one of these tasks.
It is formally described as:
\[ t \text{ is close to } \Delta \Rightarrow \text{ cancel uncompleted } \text{Task}_y,\ y \in \{4, 5, 6\}. \]
Recall that our goal is to discover, based on the available information, which of the “nice-to-have” features are most probable to be implemented, given the constraints of the initial project schedule. For this purpose we run proxel-based simulation which allows us to collect the necessary statistics to answer this question.
The concrete parameters of the tasks’ duration distribution functions of our model are as follows:
- Duration of Task 1 ~ Uniform (2.0, 10.0)
- Duration of Task 2 ~ Normal (6.0, 1.0)
- Duration of Task 3 ~ Uniform (2.0, 6.0)
- Duration of Task 4, performed by:
- Team A ~ Uniform (3.5, 5.5)
- Team B ~ Uniform (2.0, 5.0)
- Duration of Task 5 ~ Uniform (0.5, 2.0)
- Duration of Task 6 ~ Uniform(0.2, 5.8)
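In SciPy terms (our rendering; we read Uniform(a, b) as uniform on [a, b] and Normal(m, s) as mean m with standard deviation s), these inputs could be specified as:

```python
from scipy import stats

durations = {
    "Task1": stats.uniform(loc=2.0, scale=8.0),              # Uniform(2.0, 10.0)
    "Task2": stats.norm(loc=6.0, scale=1.0),                 # Normal(6.0, 1.0)
    "Task3": stats.uniform(loc=2.0, scale=4.0),              # Uniform(2.0, 6.0)
    ("Task4", "Team A"): stats.uniform(loc=3.5, scale=2.0),  # Uniform(3.5, 5.5)
    ("Task4", "Team B"): stats.uniform(loc=2.0, scale=3.0),  # Uniform(2.0, 5.0)
    "Task5": stats.uniform(loc=0.5, scale=1.5),              # Uniform(0.5, 2.0)
    "Task6": stats.uniform(loc=0.2, scale=5.6),              # Uniform(0.2, 5.8)
}
```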
The fuzzy membership function that describes “close to deadline” is defined as follows:

\[
\lambda(t, a, b) = \begin{cases}
0, & t < a \\
\dfrac{t - a}{b - a}, & a \leq t \leq b \\
1, & t > b
\end{cases}
\quad \text{where } a = \Delta - \frac{\Delta}{4} \text{ and } b = \Delta.
\]
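The ramp translates directly into code; a small sketch with the parameters above (Δ = 15, so a = 11.25 and b = 15):

```python
def membership(t, a, b):
    """Fuzzy membership of 'close to deadline': 0 before a, 1 after b,
    linear in between."""
    if t < a:
        return 0.0
    if t > b:
        return 1.0
    return (t - a) / (b - a)

DELTA = 15.0
print(membership(12.0, DELTA - DELTA / 4, DELTA))  # 0.2: mildly close
```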
B. Simulation Details
To illustrate the proxel-based simulation in the context of enhanced project schedules we use our sample model and provide the step-by-step development of its simulation. Thus, the first step is to define the format of the proxel, which is dependent on the model to allow the tracking of the relevant quantities in the model. The proxel in the simplified form, by definition, consists of five
components, i.e. the state, the age intensities, the relevant rewards, the simulation time \( t \), and the probability of the system being in that state at that point in time. In this way, it uniquely defines each and every state the model can be in every point in time.
For our model, the state definition is the following:
\[
\text{State} = (\text{TaskOfTeamA}, \text{TaskOfTeamB}),
\]
where both elements hold the names of the tasks that both teams are working on, correspondingly. Furthermore, we track the amount of time that each team has been working on the respective task, which forms the age intensity vector. Finally, to this we add the set of completed tasks, as an additional parameter (relevant reward) to the state vector that gets updated each time a task completes. Thus, the proxel definition for the example model is as follows:
\[
\text{Proxel} = (\text{State}, \text{AgeVector}, \text{CompletedTasks}, t, Pr)
\]
For the concrete example, the initial proxel is:
\[
\text{InitialProxel} = ((\text{Task1}, \text{Task3}), (0,0), \emptyset, 0,1.0)
\]
At the beginning, teams A and B work on Task1 and Task3, correspondingly, and have been doing this for zero duration of time. There are no completed tasks, and thus, this parameter is an empty set. The simulation time is zero as well, as it has just begun, and the probability is 1.0 as that is the certain initial state of the model. Theoretically, depending on the distribution functions and size of the time step \( \Delta t \), one of the following can happen:
1) Task1 completes,
2) Task3 completes, and
3) None of the tasks complete
Accordingly, the following proxels will be generated:
1) \(((\text{Task4}, \text{Task3}), (0, \Delta t), \{\text{Task1}\}, \Delta t, p11)\), \(((\text{Task6}, \text{Task3}), (0, \Delta t), \{\text{Task1}\}, \Delta t, p12)\), \(((C, \text{Task3}), (0, \Delta t), \{\text{Task1}\}, \Delta t, p13)\),
2) \(((\text{Task1}, \text{Task5}), (\Delta t, 0), \{\text{Task3}\}, \Delta t, p21)\), \(((\text{Task1}, C), (\Delta t, 0), \{\text{Task3}\}, \Delta t, p22)\), and
3) \(((\text{Task1}, \text{Task3}), (\Delta t, \Delta t), \emptyset, \Delta t, 1 - p11 - p12 - p13 - p21 - p22)\)
In cases (1) and (2) there are further sub-cases due to the fuzzy rules, i.e. depending on the value of the fuzzy membership function “close to deadline”. Let us consider the proxels in case (1). The first represents the possibility that Team A starts implementing Task4. The second represents the possibility that Team A starts implementing Task6. Finally, the third represents the possibility that Team A is released because the project is close to the deadline, as specified in the fuzzy rule. In that sense:
\[
p_{13} = 1.0 \times \mu_1(0)\,\Delta t \times \lambda\!\left(0, \Delta - \frac{\Delta}{2}, \Delta\right),
\]
where \( \mu_1(0) \) is the instantaneous rate function for the completion of Task1, and \( \lambda\!\left(0, \Delta - \frac{\Delta}{2}, \Delta\right) \) is the value of the fuzzy membership function of “close to deadline”.
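The factors of this product can be evaluated numerically. A sketch under stated assumptions (Task1 ~ Uniform(2, 10) as specified earlier; the concrete value of Δt is ours):

```python
from scipy import stats

DELTA, DT = 15.0, 0.5
task1 = stats.uniform(loc=2.0, scale=8.0)         # Task1 ~ Uniform(2.0, 10.0)

def irf(dist, tau):
    return dist.pdf(tau) / dist.sf(tau)           # Equation (1)

def membership(t, a, b):
    return min(1.0, max(0.0, (t - a) / (b - a)))  # "close to deadline" ramp

# Probability that Task1 completes in the first step AND the schedule is
# already close enough to the deadline for Team A to be released.
p13 = 1.0 * irf(task1, 0.0) * DT * membership(0.0, DELTA - DELTA / 2, DELTA)
print(p13)  # 0.0 here: at t = 0 both factors vanish for this model
```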
This clearly illustrates the development of the proxel-based simulation. To obtain the final statistics we sum the probabilities for each discrete state of the model at every time step. The discrete state is composed of the tasks that each team is working on, along with the set of completed tasks.
C. Simulation Results
In the following we present the simulation results of our model. The proxel-based simulation provides complete results, i.e. a probability function of the duration of the project (see Figure 6) along with any needed statistics as is the case with the task completion probabilities. In addition, it provides the probability functions of having each of the tasks completed, as shown in Figure 7. The simulation shows that:
1) the three cancelable tasks, representing the three useful features we have specified, have, as expected, probabilities lower than 1.0 of being completed within the timeframe of the project. This is due to the fact that they can be canceled if the project is nearing the deadline, which is a more realistic result.
2) Task6 has the lowest probability to be implemented within the given timeframe, whereas Task5 has the highest probability to be implemented within the given timeframe. This is slightly counter-intuitive, as one would expect that the task that can be accomplished by more teams has the highest probability of completion. This implies that the simulation can provide a significant insight into the real assumptions and behavior of the project schedule, thus impacting the feature selection, by providing additional information.

3) Task5 can be implemented before the implementation of Task2, in spite of Task5 being a cancelable task. This is due to the precedence constraints we have. Recall that Task2 should be implemented after Task1. Hence, in order to optimize the usage of resources, once Team B finishes the implementation of Task3, it starts working on Task5 rather than staying idle waiting for Task1 to complete.
D. Discussion
As described above, our focus is to obtain the ranking of the probabilities of having cancelable tasks completed within the project deadline. This provides insight that aids the selection of useful features, along with the optimization of human resource usage.
The enhanced project schedule model allows for including a higher degree of uncertainty, while increasing the authenticity of the model and aiding decision making. We have illustrated our approach with a simple model to aid comprehension. However, fuzzy rules can be much more complex, and theoretically the proxel-based method can handle them easily. This is work in progress, where the goal is to produce a tool that fully automates the process. In addition, simulation results depend on the initial state of the project schedule model. Hence, in order to obtain an accurate result, we need to consider all possible initial assignments of teams to tasks.
Another future improvement under consideration is the team takeover, which occurs regularly in practice and can also be managed by the fuzzy rules.
V. SUMMARY AND OUTLOOK
We presented a new simulation model that supports the process of useful feature selection. The goal is to help the manager select and prioritize these nice-to-have features, and to guide such selection based not solely on human judgment but also on a robust simulation approach that takes additional uncertainty factors into account.
We consider feature priorities to be the input to the simulation tool. The simulation approach distinguishes between useful features, which under some circumstances may be canceled, and the rest, which the manager has no choice but to deliver at the end of the project. Finally, we calculate the probability of having each project task completed, which provides the manager with insight into the realistic chances of having a feature implemented if the project plan deviates because of human resource uncertainties. The approach we present is not meant to be a comprehensive solution to feature selection. Rather, it aims to increase the chance of a better selection of features based on feature priority and team availability.
Our model has three main advantages: 1) it targets the optimization of resource usage, and hence minimizes the idle time of teams; 2) it identifies the nice-to-have features that have the highest probability of being implemented within project/release deadlines and with respect to human resource uncertainties; and 3) it gives a new criterion for useful feature selection. As a result, the obtained project baseline is expected to be of higher quality and depth.
In our future work, we aim at applying our approach to industrial project planning. We also plan to extend the simulation model to represent more priority levels. Finally, we aim at investigating the extension of the model to handle multi-project resource sharing.
REFERENCES
criteria to review via appropriate management, methodologies and teams," Brunel University, London, 2010.
Pervasive Model Checking
Jin Song Dong
National University of Singapore
(joint work with two former PhD students, Jun Sun and Yang Liu, plus 11 current PhD students and 3 postdocs)
Overview
- Model checking has made excellent progress in recent years, e.g., the Microsoft SLAM project and the *Intel i7 processor*
- At CAV 2009, Intel reported that Intel i7 CPU is verified using model checking without a single test case!
- There are a number of model checkers like SPIN, SMV and FDR which are designed for specialized domains and are therefore based on restrictive modeling languages.
- PAT is a self-contained, extensible and modularized multi-domain model checking system for composing, simulating and reasoning about concurrent, real-time and probabilistic systems, among other possible domains (e.g. distributed algorithms, security protocols, web services, sensor networks, etc.).
PAT System Design (ICSE’08’12, CAV’09’12, FM’11’12, TOSEM’12)
PAT Languages features
- **Global variables**: Boolean, Integer, Multi-dimensional arrays, etc.
- **Data/State operations**: a sequential program in PAT’s language or a C# external method.
- **Event Control flow**: CSP process constructs (choice, parallel, interrupt, etc.) + timed patterns (delay, timeout, timed interrupt, deadline, etc.) + probabilistic choices ...
- **Assertions**: reachability, refinement relationship (trace, timed trace, failures, failures/divergence), state/event LTL, min/max probability.
PAT Vision: *Pervasive Model Checking*
- Model Checking as Planning/Problem-Solving/Scheduling/Services
- Wide application domains, including Real-Time and Probabilistic systems.
Model checking as planning/problem-solving
```c
// Sliding Game
// The following models the sliding game with the extra 'costs' complexity
var board[9]:{0..8} = [3, 5, 6, // 0, 1, 2 : index
                       0, 2, 7, // 3, 4, 5 : index
                       8, 4, 1]; // 6, 7, 8 : index
hvar empty:{0..8} = 3; // empty position is a secondary variable, no need to put it in the state space
var c = 0; // cost utility, e.g. costs 1 for left and right moves, 2 for up, 0 for down
Game() = Left() [] Right() [] Up() [] Down();
Left()  = [empty != 2 && empty != 5 && empty != 8] left
          {board[empty] = board[empty+1]; board[empty+1] = 0; empty = empty+1; c++} -> Game();
Right() = [empty != 0 && empty != 3 && empty != 6] right
          {board[empty] = board[empty-1]; board[empty-1] = 0; empty = empty-1; c++} -> Game();
Up()    = [empty != 6 && empty != 7 && empty != 8] up
          {board[empty] = board[empty+3]; board[empty+3] = 0; empty = empty+3; c = c+2} -> Game();
Down()  = [empty != 0 && empty != 1 && empty != 2] down
          {board[empty] = board[empty-3]; board[empty-3] = 0; empty = empty-3} -> Game();
// goal: tiles in order, with the empty slot (0) at the last position
#define goal board[0] == 1 && board[1] == 2 && board[2] == 3 && board[3] == 4 && board[4] == 5 && board[5] == 6 && board[6] == 7 && board[7] == 8 && board[8] == 0;
#assert Game() reaches goal with min(c);
```
The sliding game problem cont’d
Figure: Initial configurations of the sliding game problem instances
Experimental Results
Figure: Execution time comparison of PAT, NuSMV and SatPlan on the sliding game problem, shown on a logarithm scale.
Model Checking as Planning/Scheduling/Service:
Transport4You, an intelligent public transportation manager
ICSE 2011 SCORE Competition Project (PAT won FM Award)
- PAT model checker is used not only as a verification tool for the system design but also as a service that computes an optimal travel plan.
- 94 teams from 48 universities in 22 countries started the competition; 55 finished and made a final submission; 18 teams were selected for the second round; 5 finalist teams were invited to Hawaii with a 2000 USD travel award for each team. Two winners (Formal Methods Award and Overall Award) were selected during the conference.
PAT student team won Formal Method Award
Model Checking Timed Systems
- A language for modeling compositional real-time systems using implicit clocks.
- Concurrency + Hierarchy + Data
- Real-time constructs: wait, within, deadline, timeout ...
- A method for abstracting and verifying the models.
- Zone abstraction
- Reachability checking, LTL, trace refinement checking and timed refinement checking.
This mutual exclusion protocol was proposed by Fischer in 1985. Mutual exclusion in Fischer's Protocol is guaranteed by carefully placing bounds on the execution times of the instructions, leading to a protocol which is very simple and relies heavily on timing aspects.
```c
#define N 4;
#define Delta 3;
#define Epsilon 4;
#define Idle -1;
var x = Idle;
var counter;
//timed version
P(i) = ifb(x == Idle) {
((update.i{x = i} -> Wait[Epsilon]) within[Delta]);
if (x == i) {
cs.i{counter++} -> exit.i{counter--; x=Idle} -> P(i)
} else {
P(i)
}
};
FischersProtocol = ||| i:{0..N-1}@P(i);
//verifying mutual exclusion by reachability analysis
#define MutualExclusionFail counter > 1;
#assert FischersProtocol reaches MutualExclusionFail;
```
Probabilistic Model Checking
- Syntax
- Hierarchical concurrent systems with probabilistic choices
- Semantics
- Markov decision processes
- Given a property, probabilistic model checking returns, instead of true or false
- the maximum and minimum probability of satisfying the property.
Monty Hall Problem
The Monty Hall problem is based on the American television game show *Let's Make a Deal* and named after the show's original host, Monty Hall. The problem was originally posed in a letter by Steve Selvin to the *American Statistician* in 1975.
- In search of a new car, the player picks a door, say 1. The game host then opens one of the other doors, say 3, to reveal a goat and offers to let the player pick door 2 instead of door 1. Should the player take the offer?
- What if the host is dishonest, e.g., places the car only after the first guess, or moves the car to the guessed door 33% of the time after the guess?
```plaintext
enum{Door1, Door2, Door3};
var car = -1;
var guess = -1;
var goat = -1;
var final = false;
#define goal guess == car && final;
PlaceCar = []i:{Door1,Door2,Door3}@ placecar.i{car=i} -> Skip;
Guest = pcase {
1 : guest.Door1{guess=Door1} -> Skip
1 : guest.Door2{guess=Door2} -> Skip
1 : guest.Door3{guess=Door3} -> Skip
};
Goat = []i:{Door1,Door2,Door3}@
ifb (i != car && i != guess) {
hostopen.i{goat = i} -> Skip
};
TakeOffer = []i:{Door1,Door2,Door3}@
ifb (i != guess && i != goat) {
changeguess{guess = i; final = true} -> Stop
};
NotTakeOffer = keepguess{final = true} -> Stop;
Sys_Take_Offer = PlaceCar; Guest; Goat; TakeOffer;
#assert Sys_Take_Offer reaches goal with prob;
Sys_Not_Take_Offer = PlaceCar; Guest; Goat; NotTakeOffer;
#assert Sys_Not_Take_Offer reaches goal with prob;
```
What if the host is Dishonest?
```plaintext
//place after guessing
Sys_With_Dishonest_Program = Guest; PlaceCar; Goat; NotTakeOffer;
#assert Sys_With_Dishonest_Program reaches goal with prob;
HostSwitch = pcase {
1 : switch{car = guess} -> Skip
2 : Skip
};
Sys_With_Cheating_Host_Switch = PlaceCar; Guest; Goat; HostSwitch; TakeOffer;
#assert Sys_With_Cheating_Host_Switch reaches goal with prob;
Sys_With_Cheating_Host_Not_Switch = PlaceCar; Guest; Goat; HostSwitch; NotTakeOffer;
#assert Sys_With_Cheating_Host_Not_Switch reaches goal with prob;
```
Combine Real-Time and Probability
Passing me without stopping!
Given the C# Program of a lift algorithm
```csharp
public class LiftControl : ExpressionValue
{
// -1 for not assigned; i for assigned to lift i
int[] ExternalRequestsUp;
int[] ExternalRequestsDown;
// 0 for not pressed, 1 for pressed
int[][] InternalRequests;
// 0 for stopped at ground level; ready to go up.
int[] LiftStatus;
public LiftControl()
{
ExternalRequestsUp = new int[2];
ExternalRequestsDown = new int[2];
InternalRequests = new int[2][];
InternalRequests[0] = new int[2];
InternalRequests[1] = new int[2];
LiftStatus = new int[2];
}
public LiftControl(int levels, int lifts)
{
ExternalRequestsUp = new int[levels];
ExternalRequestsDown = new int[levels];
for (int i = 0; i < levels; i++)
{
ExternalRequestsUp[i] = -1;
ExternalRequestsDown[i] = -1;
}
InternalRequests = new int[lifts][];
for (int i = 0; i < lifts; i++)
{
InternalRequests[i] = new int[levels];
}
LiftStatus = new int[lifts];
}
public int PassBy (int lift, int level, int up)
{
// [IsToOpenDoor(lift, level) == 0]
if (up > 0)
{
if (ExternalRequestsUp[level] != lift && ExternalRequestsUp[level] >= 0)
{
return 1;
}
}
else
{
if (ExternalRequestsDown[level] != lift && ExternalRequestsDown[level] >= 0)
{
return 1;
}
}
return 0;
}
public void AddInternalRequest(int lift, int level)
{
InternalRequests[lift][level] = 1;
}
public int UpdateLiftStatus(int lift, int level, int direction)
{
LiftStatus[lift] = LiftStatus[lift] + 1;
return PassBy(lift, level, direction);
}
}
```
PAT checking the C# program with time+probability
```csharp
#import "PAT.Lib.Lift";
#define NoOfFloors 2;
#define NoOfLifts 2;
var<LiftControl> ctrl = new LiftControl(NoOfFloors,NoOfLifts);
var passby = 0;
aSystem = ( ||| x:{0..NoOfLifts-1} @ Lift(x, 0, 1)) ||| Requests();
Requests() = Request();Request();
Request() = pcase {
1 : extreq.0.1{ctrl.AssignExternalRequest(0,1)} -> Skip
1 : intreq.0.0.1{ctrl.AddInternalRequest(0,0)} -> Skip
1 : intreq.1.0.1{ctrl.AddInternalRequest(1,0)} -> Skip
1 : extreq.1.0{ctrl.AssignExternalRequest(1,0)} -> Skip
1 : intreq.0.1.1{ctrl.AddInternalRequest(0,1)} -> Skip
1 : intreq.1.1.1{ctrl.AddInternalRequest(1,1)} -> Skip
} within[1];
Lift(i, level, direction) = case {
ctrl.isToOpenDoor(i, level) == 1 : (serve.level.direction{ctrl.ClearRequests(i, level, direction)}
-> Lift(i, level, direction))
ctrl.KeepMoving(i, level, direction) == 1 : (reach.level+direction.direction
{passby = ctrl.UpdateLiftStatus(i, level, direction)}
-> Lift(i, level+direction, direction))
ctrl.HasAssignment(i) == 1 : changedirection.i{ctrl.ChangeDirection(i)}
-> Lift(i, level, -1*direction)
default : idle.i -> Lift(i, level, direction)
} within[2];
#define goal passby == 1;
#assert aSystem reaches goal with prob;
```
The Current Status
- PAT is available at [http://pat.comp.nus.edu.sg](http://pat.comp.nus.edu.sg)
- 1 million lines of code, 15 modules, and 200+ built-in examples
- Used as an educational tool in many universities.
- Attracted 2000+ registered users in the last 4 years from 400+ organizations in 52 countries, e.g. Microsoft, HP, ST Elec, ... Sony, Hitachi, Canon.
Current and Ongoing Works
- Security systems (FCS’12)
- Security Protocols
- Trusted Platform Module
- Web Service (Orc language /BPEL language) (APSEC’10, ICFEM’11)
- Sensor networks system written in NesC (SenSys’11)
- Distributed algorithms
- Context-aware systems (ICOST’10)
- Model Driven Development (MDA): UML diagrams, StateFlow (ITTT'12)
- Merlion 2011 funding on “Software Verification from Design to Implementation”
- Software-System Architecture Description Language (in implementation)
- Event Grammar/ADL
- Verification of C# Programs (in progress)
- Multi-agent Systems (ICSE’12)
- Timed Transition Systems (TOSEM’12)
Some related and background papers
- Jun Sun, Yang Liu, Jin Song Dong, Yan Liu, Ling Shi, Etienne Andre. **Modeling and Verifying Hierarchical Real-time Systems using Stateful Timed CSP.** ACM Transactions on Software Engineering and Methodology (TOSEM). (Accepted)
Thank you!
• Additional slides ...
Monty hall: why switch?
<table>
<thead>
<tr>
<th>Door 1</th>
<th>Door 2</th>
<th>Door 3</th>
<th>result if switching</th>
<th>result if staying</th>
</tr>
</thead>
<tbody>
<tr>
<td>Car</td>
<td>Goat</td>
<td>Goat</td>
<td>Goat</td>
<td>Car</td>
</tr>
<tr>
<td>Goat</td>
<td>Car</td>
<td>Goat</td>
<td>Car</td>
<td>Goat</td>
</tr>
<tr>
<td>Goat</td>
<td>Goat</td>
<td>Car</td>
<td>Car</td>
<td>Goat</td>
</tr>
</tbody>
</table>
Verification under Fairness
- Automata-based LTL model checking
- weak fairness: SCC search
- strong fairness: strongly connected sub-graph search
- strong global fairness = terminal SCC search
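For illustration, here is a minimal Python sketch of the SCC search these checks build on (Kosaraju's algorithm for clarity; PAT's actual implementation differs and works on-the-fly):

```python
def sccs(graph):
    """Kosaraju's algorithm: strongly connected components of a directed
    graph given as {node: [successors]} (every node must appear as a key).
    A sketch of the SCC search that fairness checking builds on."""
    visited, order = set(), []
    for start in graph:                      # pass 1: DFS finishing order
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for succ in it:
                if succ not in visited:
                    visited.add(succ)
                    stack.append((succ, iter(graph[succ])))
                    break
            else:
                order.append(node)
                stack.pop()
    rev = {n: [] for n in graph}             # build the reversed graph
    for n, succs in graph.items():
        for s in succs:
            rev[s].append(n)
    assigned, comps = set(), []
    for node in reversed(order):             # pass 2: reverse finishing order
        if node in assigned:
            continue
        comp, work = [], [node]
        assigned.add(node)
        while work:
            n = work.pop()
            comp.append(n)
            for p in rev[n]:
                if p not in assigned:
                    assigned.add(p)
                    work.append(p)
        comps.append(comp)
    return comps

print(sccs({1: [2], 2: [1, 3], 3: [3]}))     # [[1, 2], [3]]
```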
## Experiment
<table>
<caption>Verification under fairness (EWF, ESF, SGF); times in seconds</caption>
<thead>
<tr>
<th>Model</th>
<th>Size</th>
<th>Result</th>
<th>PAT</th>
<th>SPIN</th>
</tr>
</thead>
<tbody>
<tr>
<td>LE_C</td>
<td>5</td>
<td>Yes</td>
<td>4.7</td>
<td>35.7</td>
</tr>
<tr>
<td>LE_C</td>
<td>6</td>
<td>Yes</td>
<td>26.7</td>
<td>229</td>
</tr>
<tr>
<td>LE_C</td>
<td>7</td>
<td>Yes</td>
<td>152.2</td>
<td>1190</td>
</tr>
<tr>
<td>LE_C</td>
<td>8</td>
<td>Yes</td>
<td>726.6</td>
<td>5720</td>
</tr>
<tr>
<td>LE_T</td>
<td>5</td>
<td>Yes</td>
<td>0.2</td>
<td>0.7</td>
</tr>
<tr>
<td>LE_T</td>
<td>7</td>
<td>Yes</td>
<td>1.4</td>
<td>7.6</td>
</tr>
<tr>
<td>LE_T</td>
<td>9</td>
<td>Yes</td>
<td>10.2</td>
<td>62.3</td>
</tr>
<tr>
<td>LE_T</td>
<td>11</td>
<td>Yes</td>
<td>68.1</td>
<td>440</td>
</tr>
<tr>
<td>LE_T</td>
<td>13</td>
<td>Yes</td>
<td>548.6</td>
<td>3200</td>
</tr>
<tr>
<td>LE.OR</td>
<td>3</td>
<td>No</td>
<td>0.2</td>
<td>0.3</td>
</tr>
<tr>
<td>LE.OR</td>
<td>5</td>
<td>No</td>
<td>1.3</td>
<td>8.7</td>
</tr>
<tr>
<td>LE.OR</td>
<td>7</td>
<td>No</td>
<td>15.9</td>
<td>95</td>
</tr>
<tr>
<td>LE.R</td>
<td>3</td>
<td>No</td>
<td>0.1</td>
<td>< 0.1</td>
</tr>
<tr>
<td>LE.R</td>
<td>4</td>
<td>No</td>
<td>0.3</td>
<td>< 0.1</td>
</tr>
<tr>
<td>LE.R</td>
<td>5</td>
<td>No</td>
<td>0.8</td>
<td>< 0.1</td>
</tr>
<tr>
<td>LE.R</td>
<td>6</td>
<td>No</td>
<td>1.8</td>
<td>0.2</td>
</tr>
<tr>
<td>LE.R</td>
<td>7</td>
<td>No</td>
<td>4.7</td>
<td>0.6</td>
</tr>
<tr>
<td>LE.R</td>
<td>8</td>
<td>No</td>
<td>11.7</td>
<td>1.7</td>
</tr>
<tr>
<td>TC.R</td>
<td>3</td>
<td>Yes</td>
<td>< 0.1</td>
<td>< 0.1</td>
</tr>
<tr>
<td>TC.R</td>
<td>5</td>
<td>No</td>
<td>< 0.1</td>
<td>< 0.1</td>
</tr>
<tr>
<td>TC.R</td>
<td>7</td>
<td>No</td>
<td>0.2</td>
<td>0.1</td>
</tr>
<tr>
<td>TC.R</td>
<td>9</td>
<td>No</td>
<td>0.4</td>
<td>0.2</td>
</tr>
</tbody>
</table>
Comparing with LTSA
- PAT supports a variety of fairness (process-level/event-level weak strong fairness and strong global fairness), LTSA supports only event-level strong fairness.
- PAT supports shared variables and external C# library, while LTSA doesn't support that.
- PAT supports both DFS and BFS search for deadlock-freeness check, while LTSA supports only BFS.
- PAT supports verification of LTL formulae made up of variable predicates and events, while LTSA supports LTL constituted by events only.
- PAT supports real-time systems, while LTSA supports the ad-hoc tick event.
- LTSA supports message sequence charts, and UML2, while PAT has not yet.
Example C: Pacemaker
\[
\begin{aligned}
\text{AATpace} = {} & (\text{atomic}\{\text{senseA} \rightarrow \text{paceA}\{SA = 0\} \rightarrow \text{Skip}\} \\
& \quad \text{timeout}[LRI] \\
& \quad ((\text{paceA}\{SA = 0\} \rightarrow \text{Skip}) \ \text{within}[0])); \\
& \text{Wait}[URI]; \\
& ((\text{enableSA}\{SA = 1\} \rightarrow \text{AATpace}_1) \ \text{within}[0]);
\end{aligned}
\]
The behaviors of the composition of the pacemaker and an abnormal heart must refine a normal heart!
Operational Semantics
```c
#import "PAT.Lib.Example";
#define NoOfFloors 2;
#define NoOfLifts 2;
#define NoOfUsers 2;
var extrequestsUP[NoOfFloors];
var extrequestsDOWN[NoOfFloors];
var intrequests[NoOfLifts][NoOfFloors];
var door = [-1(NoOfLifts)]; //initiate an array of -1 with length NoOfLifts
LiftSystem() = (||| { NoOfUsers} @ User()) ||| (||| x:{0..NoOfLifts-1} @ Lift(x, 0, 1));
User() = [] pos:{0..NoOfFloors-1} @ (ExternalPush(pos); UserWaiting(pos));
ExternalPush(pos) = case {
    pos == 0 : pushup.pos{extrequestsUP[pos] = 1;} -> Skip
    pos == NoOfFloors-1 : pushdown.pos{extrequestsDOWN[pos] = 1;} -> Skip
    default : pushup.pos{extrequestsUP[pos] = 1;} -> Skip
              [] pushdown.pos{extrequestsDOWN[pos] = 1;} -> Skip
};
UserWaiting(pos) = [] i:{0..NoOfLifts-1} @
    ([door[i] == pos] enter.i ->
        ([] y:{0..NoOfFloors-1} @ (push.y{intrequests[i][y] = 1;} ->
            ([door[i] == y] exit.i -> User()))));
Lift(i, level, direction) =
if (intrequests[i][level] != 0 || (direction == 1 && extrequestsUP[level] == 1) || (direction == -1 && extrequestsDOWN[level] == 1)) {
opendoor.i.level{
door[i] = level; intrequests[i][level] = 0;
if (direction > 0) {
extrequestsUP[level] = 0;
} else {
extrequestsDOWN[level] = 0;
}
} -> close.i.level{door[i] = -1;} -> Lift(i, level, direction)
} else {
checkIfToMove.i.level ->
if (call(CheckIfToMove, level, direction, i, NoOfFloors, intrequests, extrequestsUP, extrequestsDOWN)) {
moving.i.level.direction ->
if (level + direction == 0 || level + direction == NoOfFloors-1) {
Lift(i, level + direction, -1*direction)
} else {
Lift(i, level + direction, direction)
}
} else {
if ((level == 0 && direction == 1) || (level == NoOfFloors-1 && direction == -1)) {
Lift(i, level, direction)
} else {
changedir.i.level -> Lift(i, level, -1*direction)
}
}
};
#define liveness extrequestsUP[0] == 0 && extrequestsUP[1] == 0;
#assert LiftSystem() deadlockfree;
#assert LiftSystem() |= [[]<> liveness];
```
Operational Semantics
\[
\begin{gathered}
(V, e\{\mathit{prog}\} \rightarrow P) \xrightarrow{e} (\mathit{upd}(V, \mathit{prog}), P) \quad [\,\mathit{prefix}\,] \\[8pt]
\frac{c \text{ is not empty in } V}{(V, c?x \rightarrow P) \xrightarrow{c?\mathit{top}(c)} (\mathit{pop}(V, c?x), P)} \quad [\,\mathit{in}\,] \\[8pt]
\frac{V \not\models b, \quad (V, Q) \xrightarrow{e} (V', Q')}{(V, \text{if } b \,\{P\}\, \text{else} \,\{Q\}) \xrightarrow{e} (V', Q')} \quad [\,\mathit{cond2}\,] \\[8pt]
\frac{(V, P) \xrightarrow{x} (V', P'), \quad x \in \alpha P, \; x \notin \alpha Q}{(V, P \parallel Q) \xrightarrow{x} (V', P' \parallel Q)} \quad [\,\mathit{par1}\,]
\end{gathered}
\]
Abstraction
\[
\frac{(V, P, D) \xrightarrow{\tau} (V', P', D')}{(V, P \ \text{timeout}[d]_{tm} \ Q, D) \xrightarrow{\tau} (V', P' \ \text{timeout}[d]_{tm} \ Q, D' \wedge tm \leq d)} \quad [\,\mathit{ato1}\,]
\]

\[
\frac{(V, P, D) \xrightarrow{x} (V', P', D')}{(V, P \ \text{timeout}[d]_{tm} \ Q, D) \xrightarrow{x} (V', P', D' \wedge tm \leq d)} \quad [\,\mathit{ato2}\,]
\]

\[
(V, P \ \text{timeout}[d]_{tm} \ Q, D) \xrightarrow{\tau} (V, Q, tm = d \wedge \nu(V, P, D)) \quad [\,\mathit{ato3}\,]
\]
Abstraction: Example
(a $\rightarrow$ Wait[5]; b $\rightarrow$ Stop) interrupt[3] (c $\rightarrow$ Stop)
- Introduce clock $t_1$,
(a $\rightarrow$ Wait[5]; b $\rightarrow$ Stop) interrupt[3]$_{t_1}$ (c $\rightarrow$ Stop), $t_1 = 0$
- Event $a$ occurs,
(Wait[5]; b $\rightarrow$ Stop) interrupt[3]$_{t_1}$ (c $\rightarrow$ Stop), $0 \leq t_1 \leq 3$
- Introduce $t_2$,
(Wait[5]$_{t_2}$; b $\rightarrow$ Stop) interrupt[3]$_{t_1}$ (c $\rightarrow$ Stop), $0 \leq t_1 \leq 3$ and $t_2 = 0$
- Event $\tau$ occurs,
(b $\rightarrow$ Stop) interrupt[3]$_{t_1}$ (c $\rightarrow$ Stop), $0 \leq t_1 \leq 3$ and $t_2 = 5$
### Experiment
<table>
<thead>
<tr>
<th>Model</th>
<th>Size</th>
<th>Property</th>
<th>States/Transitions</th>
<th>PAT (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fischer</td>
<td>4</td>
<td>$\Box ct \leq 1$</td>
<td>3452/8305</td>
<td>0.22</td>
</tr>
<tr>
<td>Fischer</td>
<td>5</td>
<td>$\Box ct \leq 1$</td>
<td>26496/73628</td>
<td>2.49</td>
</tr>
<tr>
<td>Fischer</td>
<td>6</td>
<td>$\Box ct \leq 1$</td>
<td>207856/654776</td>
<td>27.7</td>
</tr>
<tr>
<td>Fischer</td>
<td>7</td>
<td>$\Box ct \leq 1$</td>
<td>1620194/5725100</td>
<td>303</td>
</tr>
<tr>
<td>Fischer</td>
<td>4</td>
<td>$\Box (x = i \Rightarrow \Diamond cs.i)$</td>
<td>5835/16776</td>
<td>0.53</td>
</tr>
<tr>
<td>Fischer</td>
<td>5</td>
<td>$\Box (x = i \Rightarrow \Diamond cs.i)$</td>
<td>49907/169081</td>
<td>5.83</td>
</tr>
<tr>
<td>Fischer</td>
<td>6</td>
<td>$\Box (x = i \Rightarrow \Diamond cs.i)$</td>
<td>384763/1502480</td>
<td>70.5</td>
</tr>
<tr>
<td>Fischer</td>
<td>4</td>
<td>Protocol refines $u$Protocol</td>
<td>7741/18616</td>
<td>5.22</td>
</tr>
<tr>
<td>Fischer</td>
<td>5</td>
<td>Protocol refines $u$Protocol</td>
<td>72140/201292</td>
<td>126.3</td>
</tr>
<tr>
<td>Fischer</td>
<td>6</td>
<td>Protocol refines $u$Protocol</td>
<td>705171/2237880</td>
<td>3146</td>
</tr>
<tr>
<td>Railway Control</td>
<td>4</td>
<td>deadlock-free</td>
<td>853/1132</td>
<td>0.11</td>
</tr>
<tr>
<td>Railway Control</td>
<td>5</td>
<td>deadlock-free</td>
<td>4551/6115</td>
<td>0.42</td>
</tr>
<tr>
<td>Railway Control</td>
<td>6</td>
<td>deadlock-free</td>
<td>27787/37482</td>
<td>3.07</td>
</tr>
<tr>
<td>Railway Control</td>
<td>7</td>
<td>deadlock-free</td>
<td>195259/263641</td>
<td>24.2</td>
</tr>
<tr>
<td>Railway Control</td>
<td>8</td>
<td>deadlock-free</td>
<td>1563177/2111032</td>
<td>223.1</td>
</tr>
<tr>
<td>Railway Control</td>
<td>4</td>
<td>$\Box (\text{appr}.1 \rightarrow \Diamond \text{leave}.1)$</td>
<td>1504/1985</td>
<td>0.16</td>
</tr>
<tr>
<td>Railway Control</td>
<td>5</td>
<td>$\Box (\text{appr}.1 \rightarrow \Diamond \text{leave}.1)$</td>
<td>8137/10862</td>
<td>0.95</td>
</tr>
<tr>
<td>Railway Control</td>
<td>6</td>
<td>$\Box (\text{appr}.1 \rightarrow \Diamond \text{leave}.1)$</td>
<td>50458/67639</td>
<td>6.58</td>
</tr>
<tr>
<td>Railway Control</td>
<td>7</td>
<td>$\Box (\text{appr}.1 \rightarrow \Diamond \text{leave}.1)$</td>
<td>359335/482498</td>
<td>58.63</td>
</tr>
</tbody>
</table>
Refinement Checking
- The property is given as a model (often in the same language).
- A property is proved by showing a refinement relationship (i.e. language inclusion) from the system model to the model capturing the property.
- Trace refinement checking,
- Stable failures refinement checking,
- Failures/divergence refinement checking,
- Timed trace refinement checking,
- etc.
Refinement Checking
[Slide figure: the semantics maps the system model and the property model each to a set of behaviors; refinement requires all system behaviors to be contained in the behaviors of the property model.]
Example B: Parallel Objects
- **Sequential stack**
- call Push → put item on the top → finish Push
- **Lock-free concurrent stack**
- call Push → read stack → make local modification → check if the stack has been updated → if not, commit; else retry → finish Push
A concurrent stack must refine the sequential stack. The sequential stack must refine the concurrent stack.
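To make the push sequence concrete, here is a minimal Java sketch of a Treiber-style lock-free stack; this is an assumption for illustration, as the source does not name a particular implementation. Push reads the current top, builds a new node locally, and commits with a compare-and-set only if the top is still unchanged, retrying otherwise.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal Treiber-style lock-free stack (illustrative, not the source's model).
final class LockFreeStack<T> {
    private static final class Node<T> {
        final T item;
        final Node<T> next;
        Node(T item, Node<T> next) { this.item = item; this.next = next; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    void push(T item) {
        Node<T> oldTop, newTop;
        do {
            oldTop = top.get();                        // read stack
            newTop = new Node<>(item, oldTop);         // local modification
        } while (!top.compareAndSet(oldTop, newTop));  // commit if unchanged, else retry
    }

    T pop() {
        Node<T> oldTop;
        do {
            oldTop = top.get();
            if (oldTop == null) return null;           // empty stack
        } while (!top.compareAndSet(oldTop, oldTop.next));
        return oldTop.item;
    }
}
```

Every trace of visible call/finish events of this concurrent stack should also be a trace of the sequential stack, which is exactly the refinement obligation stated above.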
PAT’s Approach
- Given two transition systems S and T, to show that S refines T,
- Build pairs \((s, X)\) on the fly, where \(s\) is a reachable state of S and \(X\) is the set of states of T that can be reached via the same trace.
- If \(X\) becomes empty, then S does not refine T; if all reachable pairs are explored without \(X\) ever becoming empty, the refinement holds.
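A minimal sketch of this on-the-fly check in Java (the `Lts` interface and its accessors are hypothetical stand-ins, since PAT's internal data structures are not shown; τ-transitions and normalization are omitted for brevity):

```java
import java.util.*;

// Sketch of on-the-fly trace refinement checking: pair each reachable state s
// of the implementation with the set X of spec states reachable via the same
// trace, and fail as soon as X becomes empty.
interface Lts<S> {
    S init();
    Set<String> enabled(S s);        // visible events enabled at s
    Set<S> next(S s, String event);  // successors of s on that event
}

final class RefinementChecker {
    /** Returns true iff every trace of impl is also a trace of spec. */
    static <S, T> boolean refines(Lts<S> impl, Lts<T> spec) {
        Deque<Map.Entry<S, Set<T>>> pending = new ArrayDeque<>();
        Set<Map.Entry<S, Set<T>>> visited = new HashSet<>();
        pending.push(Map.entry(impl.init(), Set.of(spec.init())));
        while (!pending.isEmpty()) {
            Map.Entry<S, Set<T>> pair = pending.pop();
            if (!visited.add(pair)) continue;      // already explored
            S s = pair.getKey();
            Set<T> x = pair.getValue();
            for (String e : impl.enabled(s)) {
                Set<T> x2 = new HashSet<>();       // image of X under event e
                for (T t : x) x2.addAll(spec.next(t, e));
                if (x2.isEmpty()) return false;    // spec cannot match this trace
                for (S s2 : impl.next(s, e)) pending.push(Map.entry(s2, x2));
            }
        }
        return true;                               // all pairs explored safely
    }
}
```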
## Experiment D
<table>
<thead>
<tr>
<th>Model</th>
<th>Size</th>
<th>Property</th>
<th>States/Transitions</th>
<th>Result</th>
<th>Time (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pacemaker</td>
<td>-</td>
<td>deadlock-free</td>
<td>302442/2405850</td>
<td>true</td>
<td>92.1</td>
</tr>
<tr>
<td>Pacemaker</td>
<td>-</td>
<td>correctness</td>
<td>986342/2608226</td>
<td>true</td>
<td>122</td>
</tr>
<tr>
<td>Fischer</td>
<td>4</td>
<td>mutual exclusion</td>
<td>9941/34244</td>
<td>true</td>
<td>0.78</td>
</tr>
<tr>
<td>Fischer</td>
<td>5</td>
<td>mutual exclusion</td>
<td>141963/599315</td>
<td>true</td>
<td>17.2</td>
</tr>
<tr>
<td>Fischer</td>
<td>6</td>
<td>mutual exclusion</td>
<td>2144610/10795380</td>
<td>true</td>
<td>401</td>
</tr>
<tr>
<td>Fischer</td>
<td>6</td>
<td>bounded bypass</td>
<td>2429/8065</td>
<td>false</td>
<td>0.36</td>
</tr>
<tr>
<td>Fischer</td>
<td>7</td>
<td>bounded bypass</td>
<td>9213/34611</td>
<td>false</td>
<td>1.47</td>
</tr>
<tr>
<td>Fischer</td>
<td>8</td>
<td>bounded bypass</td>
<td>32785/137417</td>
<td>false</td>
<td>6.16</td>
</tr>
<tr>
<td>Fischer</td>
<td>9</td>
<td>bounded bypass</td>
<td>91665/425966</td>
<td>false</td>
<td>21.1</td>
</tr>
<tr>
<td>Fischer</td>
<td>10</td>
<td>bounded bypass</td>
<td>300129/1542020</td>
<td>false</td>
<td>79.8</td>
</tr>
<tr>
<td>Fischer</td>
<td>11</td>
<td>bounded bypass</td>
<td>693606/3880577</td>
<td>false</td>
<td>214</td>
</tr>
<tr>
<td>Railway Control</td>
<td>4</td>
<td>bounded waiting</td>
<td>918/1359</td>
<td>true</td>
<td>0.45</td>
</tr>
<tr>
<td>Railway Control</td>
<td>5</td>
<td>bounded waiting</td>
<td>4764/7199</td>
<td>true</td>
<td>3.21</td>
</tr>
<tr>
<td>Railway Control</td>
<td>6</td>
<td>bounded waiting</td>
<td>28782/43795</td>
<td>true</td>
<td>26.2</td>
</tr>
<tr>
<td>Railway Control</td>
<td>7</td>
<td>bounded waiting</td>
<td>201444/307071</td>
<td>true</td>
<td>238</td>
</tr>
</tbody>
</table>
Application Development Language v1.2
by
Preethi Pandian
A Creative Component submitted to the graduate faculty
in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE
Major: Computer Science
Program of Study Committee:
Simanta Mitra, Co-major Professor
Gurpur Prabhu, Co-major Professor
Ying Cai
The student author, whose presentation of the scholarship herein was approved by the program of study committee, is solely responsible for the content of this dissertation/thesis. The Graduate College will ensure this dissertation/thesis is globally accessible and will not permit alterations after a degree is conferred.
Iowa State University
Ames, Iowa
2019
Copyright © Preethi Pandian, 2019. All rights reserved.
# TABLE OF CONTENTS

- LIST OF TABLES
- LIST OF FIGURES
- ACKNOWLEDGMENTS
- ABSTRACT
- CHAPTER 1. INTRODUCTION
  - 1.1 Background
  - 1.2 Challenges and Objectives
  - 1.3 Organization
- CHAPTER 2. MOTIVATION
- CHAPTER 3. Application Development Language v1.2
  - 3.1 General Idea
  - 3.2 Architecture
  - 3.3 Technology Description
    - 3.3.1 Gradle/Maven
    - 3.3.2 SpringBoot
    - 3.3.3 Mustache
    - 3.3.4 Websocket
  - 3.4 Specification
  - 3.5 Implementation
    - 3.5.1 Parser
    - 3.5.2 Server Generator
## LIST OF TABLES

<table>
<thead>
<tr>
<th>Table</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Table 3.1</td>
<td>This table shows possible data types of attributes in model</td>
</tr>
<tr>
<td>Table 3.2</td>
<td>This table shows possible features in input spec and how they are automated in Controller</td>
</tr>
<tr>
<td>Table 3.3</td>
<td>ADL v1.2 Automation in Controller</td>
</tr>
</tbody>
</table>
## LIST OF FIGURES

<table>
<thead>
<tr>
<th>Figure</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Figure 3.1</td>
<td>Architecture Flow Diagram of ADL v1.2</td>
</tr>
<tr>
<td>Figure 3.2</td>
<td>Architecture Flow Diagram of generated code</td>
</tr>
<tr>
<td>Figure 3.3</td>
<td>Example of Mustache</td>
</tr>
<tr>
<td>Figure 3.4</td>
<td>Spec file for Music Site</td>
</tr>
<tr>
<td>Figure 4.1</td>
<td>Provided Input spec</td>
</tr>
<tr>
<td>Figure 4.2</td>
<td>Generated Output Directory Structure</td>
</tr>
<tr>
<td>Figure 4.3</td>
<td>Generated Server project structure</td>
</tr>
<tr>
<td>Figure 4.4</td>
<td>Auto generated Database Tables</td>
</tr>
<tr>
<td>Figure 4.5</td>
<td>Generated application properties file</td>
</tr>
<tr>
<td>Figure 4.6</td>
<td>A sample auto generated Controller Class</td>
</tr>
<tr>
<td>Figure 4.7</td>
<td>Generated Websocket project structure</td>
</tr>
<tr>
<td>Figure 4.8</td>
<td>Generated Client project structure</td>
</tr>
<tr>
<td>Figure 4.9</td>
<td>A sample generated login page</td>
</tr>
<tr>
<td>Figure 4.10</td>
<td>Registration page for sign up</td>
</tr>
<tr>
<td>Figure 4.11</td>
<td>Upload ArtistBio detail</td>
</tr>
<tr>
<td>Figure 4.12</td>
<td>Get all records for Users</td>
</tr>
<tr>
<td>Figure 4.13</td>
<td>Auto-generated Google Maps integration in UI</td>
</tr>
<tr>
<td>Figure 4.14</td>
<td>A sample input excel file for bulk upload</td>
</tr>
<tr>
<td>Figure 4.15</td>
<td>Data entry in MySQL db post upload</td>
</tr>
<tr>
<td>Figure 4.16</td>
<td>A sample auto-generated working chat feature</td>
</tr>
<tr>
<td>Figure 4.17</td>
<td>Input spec file for ABET</td>
</tr>
</tbody>
</table>
## NOMENCLATURE
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADL</td>
<td>App Description Language</td>
</tr>
<tr>
<td>ADL v1.2</td>
<td>ADL version 1.2</td>
</tr>
<tr>
<td>CRUDL</td>
<td>Create, Read, Update, Delete, List</td>
</tr>
<tr>
<td>UI</td>
<td>User Interface</td>
</tr>
<tr>
<td>POJO</td>
<td>Plain old java object</td>
</tr>
<tr>
<td>JSON</td>
<td>Javascript Object Notation</td>
</tr>
<tr>
<td>DDL</td>
<td>Data Definition Language</td>
</tr>
<tr>
<td>PK</td>
<td>Primary Key</td>
</tr>
<tr>
<td>ORM</td>
<td>Object-relational Mapping</td>
</tr>
<tr>
<td>OOP</td>
<td>Object-Oriented Programming</td>
</tr>
<tr>
<td>REST</td>
<td>Representational State Transfer Architecture</td>
</tr>
<tr>
<td>IDE</td>
<td>Integrated Development Environment</td>
</tr>
</tbody>
</table>
ACKNOWLEDGMENTS
I would like to express my heartfelt gratitude to Dr. Simanta Mitra for his substantial guidance and support throughout the course of this research. I would also like to extend my appreciation to my co-major professor Dr. Gurpur Prabhu and my committee member Dr. Ying Cai for their help and support.
In addition, I would also like to thank my friends, colleagues, the departmental faculty, and department staff for making this journey not only possible, but also a memorable one.
Last but not least, I would like to extend my gratitude to my parents for their unbounded support and love.
ABSTRACT
ADL was an attempt at auto-generating web applications, covering both the client and the server implementation. It served as a successful proof of concept, and version 1.2 of ADL builds on top of its existing architecture and features. Many real-world applications extensively use complex relationships between their database tables to store and fetch information; since ADL implemented the one-to-many relationship, it is only natural to leverage the existing technique to implement the many-to-many relationship as well. The other important aspect of focus is security: a login and a registration page are default requirements across all applications, and authentication is a primary step towards adding security. Along with authentication, it is also essential to protect the server APIs from unauthorized access. ADL also did not provide different views for different users by capturing their permission levels. ADL v1.2 aims to close those gaps and further enhance the capabilities of the language. ADL v1.2 continues to function by parsing an input spec file, generating server-side code in Spring Boot and client-side code in HTML and JavaScript, connected using XMLHttpRequests. Considerable changes have been made to the template files and the code base, and new tags are introduced in this version to support a richer description of a given app. ADL v1.2 also supports all the features available in the previous version.
CHAPTER 1. INTRODUCTION
When was the last time we implemented an HTTP server from scratch? Probably never; very few people bother to implement an HTTP server because that task has already been done. How about connecting a server to an app by creating a basic REST API? There is a command-line tool that does that now. This constant automation of software is why software tools get better every year. Automation of code generation is a very popular field that is garnering a lot of attention in recent times. The App Description Language (ADL) was a successful attempt at auto-generating a REST back end with Spring Boot and a user interface with HTML and JavaScript. The specification, or language, used to auto-generate the code is simple enough to be written by anyone with little to no programming experience.
ADL v1.2, presented in this report, adds more features to the existing JSON language in terms of security and complexity. This language is useful for those who have the skills to develop a whole application but face time constraints on rewriting thousands of lines of simple, mundane code. Instead, their time and energy can be focused on building the more important parts of the application, i.e., the logic and security aspects.
1.1 Background
A typical server-side REST application following the MVC (Model-View-Controller) pattern involves a main application class, plain Java model objects for each entity or table in the database, controllers for exposing the endpoints, and repositories corresponding to each model for Hibernate or any other ORM tool. Further, it requires settings and a description for dependency management, and a properties file to configure application-level properties. The view is optional and can be omitted, as we will be developing the front-end UI separately.
The user interface, or client side, is an interactive layer between the end user and the server-side code. jQuery is used to select and work on the HTML components. Together, JavaScript with HTML and jQuery is arguably the most popular method of building a front-end application and tying it to the corresponding back end. Due to the increased demand for usability and manageability, integration of external APIs is quite common in current applications. The Google Cloud Platform provides easy and sophisticated APIs for integrating popular functionalities such as Maps, Drive, Calendar, etc., saving a huge amount of time by not rewriting the code for the same functionalities.
### 1.2 Challenges and Objectives
The major challenge in developing an app description language is designing the input spec file. The file should capture as much information as possible while minimizing the number of input parameters and staying simple for users to understand and write. Existing code generators like Swagger (9) primarily focus on capturing the most relevant parameters, but it might be challenging for a non-technical person to design such a spec, even though those tools support multiple languages and can accommodate many features. Another challenge is to understand what the user wants and requires without it being stated explicitly. For example, a request to get the list of all users for a geographic location, say a city, should be captured by a simple, understandable feature name like getUsersByCity, from which the corresponding server-side and client-side code can be generated.
Furthermore, since the logic and security aspects must be configured manually on top of the generated code, it is important to keep the code modular, indented, and easy to understand, so that developers can build on it further.
Another important aspect to consider is the fact that there is a plethora of applications users might want to generate, and thus the language should be able to understand their requirements and accommodate them. Hence, our objective is to develop a simple description language that is easy to parse, understands the user's requirements, and generates code with high modularity and simplicity.
1.3 Organization
The rest of the report is organized as follows. First, we talk about the motivation behind pursuing this project and why it is intriguing to work on. Next, we discuss the core methodology, the technology used, and the flow of information that automates application development. Then, we exercise the automation on two examples: one to verify and establish the automation achieved, and another to showcase its limitations and scope. Lastly, we discuss related work in this domain and possible future work.
CHAPTER 2. MOTIVATION
Initially, ADL was put to the test by auto-generating a website for course management. The time to create an end-to-end functional app was certainly reduced: the front end was tweaked to include a few additional features, the back end had a few configuration changes to deploy to a remote server, all users for the app were identified, and the app was hosted and made ready to use. It quickly became apparent, however, that ADL was lacking in security. ADL could not generate a basic login and registration page, a feature that is commonly required across all applications. Authentication is a crucial and primary step towards security. In addition to the login page, the auto-generated server APIs should also be session-protected, to additionally ensure that no unauthenticated user has access to the back-end APIs.
Secondly, the idea of authentication could be extended further to provide multiple views of the same website for different kinds of users. If the input spec file has a provision to capture the different users intended to use the auto-generated app along with their permission levels (permissions for Create, Read, Update, Delete, List, i.e., CRUDL), we can seamlessly create different UI views for the appropriate users, enhancing the capabilities of ADL.
Lastly, there is the matter of implementing many-to-many interactions between entities. Many real-world websites and applications extensively use such relationships, and providing a facility to auto-generate this feature would greatly increase the usability of ADL and enable the creation of complex web applications.
CHAPTER 3. Application Development Language v1.2
3.1 General Idea
The general idea behind ADL v1.2 is to add security and to capture the different users and their permission levels in order to provide different views and accessibility to the intended users of an application generated using ADL. This is achieved by creating an input specification file in JSON format. ADL parses the input file and internally creates a corresponding Java object, which makes it convenient to access the information of each tag without much overhead.
After the information is extracted, server-side Spring Boot code is generated first, with its appropriate pom.xml and application.properties files and all the specified model entities, which correspond to tables in the database; this is made possible by the Hibernate mapping provided by the Spring Boot configuration. Each entity's repository and controller are also created by combining the base template with the information provided in the spec file. The auto-generated code adheres to Spring Boot guidelines, along with automatic DDL configuration, which generates the tables and relationships when the generated application runs for the first time. A standard convention is followed for creating the API endpoints with corresponding GET and POST methods.
Client-side code is generated after the server side. A default login page that links to a registration page is created, followed by a home page that serves as the landing page after a successful sign-in and additionally provides links to all entity pages created per the input file specification. Two JavaScript files are created: login.js, referenced in index.html (the login page), and control.js, referenced in every entity page. Two CSS style sheets are also created: home.css, applied to every UI component in the entity pages, and style.css, applied to every UI component in index.html (login). HTML elements are mapped in JavaScript via jQuery. Standard conventions are used to call the Spring Boot APIs that interact with the database. The entire code base is generated in the folder location specified as an input argument to ADL.
3.2 Architecture
The figure depicts the generation of the server-side and client-side code along with their integration.
Figure 3.1 Architecture Flow Diagram of ADL v1.2
3.3 Technology Description
3.3.1 Gradle/Maven
Gradle continues to be the build tool of choice for v1.2. Gradle supplies the dependencies required by this project, such as Apache Commons StringUtils, which is heavily relied on for string manipulation, a task performed extensively in this project. For the generated server code, Maven continues to be the build tool of choice, with pom.xml as the file specifying the Maven configuration information and dependencies.
3.3.2 SpringBoot
ADL v1.2 server-side code generation is built on the Spring Boot architecture, the same as in the previous version. The Spring Boot MVC pattern and Hibernate support, along with its embedded Tomcat, continue to be the framework of choice, both for ease of use and for modularity.
Figure 3.2 Architecture Flow Diagram of generated code
3.3.3 Mustache
Mustache enables us to achieve the core feature of ADL, which is to automatically generate code from varying information. Mustache is a tag-based template language: it provides a powerful mechanism for replacing tags with any specified value. Let us consider an example to better understand what Mustache does:
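The original example figure (Figure 3.3) is not reproduced in this text, so the following is a minimal stand-in, assuming the common "Hello {{name}}!" style of template and the mustache.java library:

```java
import com.github.mustachejava.DefaultMustacheFactory;
import com.github.mustachejava.Mustache;
import com.github.mustachejava.MustacheFactory;

import java.io.StringReader;
import java.io.StringWriter;
import java.util.Map;

// Minimal stand-in for Figure 3.3, using the mustache.java library.
public class MustacheExample {
    public static void main(String[] args) {
        MustacheFactory mf = new DefaultMustacheFactory();
        // "Hello {{name}}!" is the template; {{name}} is the tag to replace.
        Mustache template = mf.compile(new StringReader("Hello {{name}}!"), "example");

        StringWriter out = new StringWriter();
        template.execute(out, Map.of("name", "ADL")); // substitute the tag
        System.out.println(out); // prints: Hello ADL!
    }
}
```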
If the string above is treated as a template, then giving different values to the variable lets us generate different output strings.
3.3.4 Websocket
The WebSocket feature of ADL v1.2 is untouched and retains the same functionality as the previous version. Care and sufficient testing have ensured that this feature's behavior is unaltered in this version. Just as in (1), in v1.2 the WebSocket is leveraged to create the chat functionality. The chat option continues to be a group chat: any user with access to a website auto-generated using ADL can start communicating with anyone else who has access to the same website.
3.4 Specification
ADL v1.2 follows the same semantics as (1), except for a few newly added tags, which follow the same JSON datatype conventions.
The spec file below demonstrates the complete set of features that ADL v1.2 currently supports.
1. **basePath:** If we want our server-side APIs to be preceded by a common path, we can specify that string in this tag. This is an optional argument; if it is not specified, there will be no error in parsing or in code generation. Avoid special characters in the input. The basePath in the attached spec in Figure 3.4 is "/music".
Figure 3.4 Spec file for Music Site
2. **title**: The title tag captures the name of the root folder under which the entire client and server code gets generated. It is also used as the package name for all server-side code, and this name appears in home.html as a string describing the website. This is not an optional argument; failure to specify it will cause an error at run time. The title in the attached spec in Figure 3.4 is "Music-verse".
3. **description**: This information provides the value for the description tag in the pom.xml generated for the server-side code. This is an optional parameter; omitting it will not cause any parser or run-time errors. The description in the attached spec in Figure 3.4 is "A go to Database of your favorite Music".
4. **host**: An optional tag that sets the server URL for the front end (in the JavaScript files); if not provided, the default is localhost.
5. **port**: Also an optional tag, used to set the port of the server URL for the front end; if not provided, the default is 8080.
6. **models**: This contains a list of the various entities involved in the application, and each model contains a list of its attributes with their corresponding data types. ADL v1.2 currently supports the String, Integer, File, and Timestamp data types. It also captures the relationship between two entities: the data type is written either as another entity's name, as the entity name followed by [], or as the entity name followed by {}, to represent a one-to-one, one-to-many, or many-to-many relationship respectively. In the attached spec in Figure 3.4, we have 5 models named User, Artist, Album, Song, and ArtistBio. "name" and "email" are the attributes of the model "User", both of String type; the other attributes can be deduced similarly. The important model to note here is "Artist": of its 5 attributes, 2 are relationship mappings. Since one artist can have multiple albums, and an album can likewise have multiple artists, Artist has an attribute called "albumList" of type "Album{}", implying a many-to-many relationship with Album. Similarly, Artist is in a one-to-one mapping with ArtistBio, assuming that one artist has one bio. Also notice that in the model Album, since one album can have many songs, there is a one-to-many mapping with Song, specified by the attribute songList of type Song[]. Moreover, an ArtistBio can have a picture, so an attribute "pic" is given the "file" type; any media type can be attached as a file. Similarly, an album's release date can be recorded, so the attribute "releaseDate" has the type "timestamp" in order to store the time in the database. Note that by specifying the models, our intention is to create tables named after each model, with the attributes as columns; a reconstructed fragment of such a models section is sketched below.
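Since Figure 3.4 itself is not reproduced here, the fragment below is a reconstruction of what its models section would look like under the rules just described; attribute names beyond those mentioned in the text are illustrative:

```json
{
  "models": {
    "User":      { "name": "string", "email": "string", "type": "string" },
    "Artist":    { "name": "string", "albumList": "Album{}", "bio": "ArtistBio" },
    "Album":     { "name": "string", "releaseDate": "timestamp", "songList": "Song[]" },
    "Song":      { "name": "string", "length": "integer" },
    "ArtistBio": { "description": "string", "pic": "file" }
  }
}
```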
7. **ui**: This is again an object containing various other important arguments, which are the following:
(a) **platform**: There are multiple options for launching and exposing the service, such as web, mobile, etc. Currently, ADL supports just "web" as the value of this attribute, but it can be expanded to many more, such as Android, iOS, etc.
(b) **loginuser**: Not an optional tag; it specifies which of the models is to be treated as the entity for authentication. Failure to specify this tag will result in a run-time error during client-side generation. In Figure 3.4, it is the User model that is identified as the login user.
(c) **permissions**: This is also not an optional tag. It is best understood when imagined as a table whose column headers are the user types specified in the login-user model; since the permissions tag depends on the login user's type attribute, that type attribute is mandatory as well. The row headers are all the models specified for the database, and each cell holds a 5-character value, where each character is "1" for yes or "0" for no, for the Create, Read, Update, Delete, and List activities respectively. For example, in the attached spec in Figure 3.4, for the model User and the user type Admin, the value set is 11111, meaning an Admin user has all CRUDL permissions on the User model.
(d) **features**: This is the most important part of the specification, as here we mention the services needed for the application we are trying to build. The corresponding services or features are then exposed via a simple button on the corresponding model's HTML page. We specify the features for each model individually; if a model has no features, the application assumes the corresponding entity is an inner or helper model. The queries corresponding to relationships are critical and must be written carefully with a standard, unified formatting technique. In the attached spec in Figure 3.4, the major features required are:
**getAll:** Get the list of all Artists currently listed in the database.
**getByName in Artist:** Get/search an Artist by the artist's name. Note that name is an attribute of Artist. Case is very important here: the first letter of the attribute has to be capitalized, along with the 'B' in By.
**save in Artist:** This feature saves a record in the database, i.e., adds a row to the Artist table.
**getAlbums in Artist:** Since Artist and Album are related by a many-to-many relationship, this feature gets the list of all Albums for a given Artist. The same would hold for a one-to-many relationship too.
**chat:** This feature enables chat on the concerned screen, implemented via WebSockets.
(e) **integrations:** Here we specify any 3rd-party integrations that might be needed. These, again, have to be mentioned against the corresponding individual model; if no integration is needed for any model, this part can be omitted. Currently, ADL supports just the Google Maps integration, but the Maps integration provides solid evidence that most other integrations, such as Calendar, Drive, etc., can be added similarly. The "maps" integration requirement is mentioned in the input spec in Figure 3.4 under integrations, against the corresponding model.
### 3.5 Implementation
ADL v1.2 has in total 9 classes, with ADLApplication being the main class. It takes 2 arguments: first, an input specification JSON file, and second, a target directory where the auto-generated files get stored. When the main() function is invoked, it reads these 2 arguments and calls the checkParser() function of the Parser class.
checkParser() uses an ObjectMapper to map the input JSON to an internal Java object, making it convenient to retrieve the necessary information. After the ObjectMapper, control is passed to the ServerGenerator and then the ClientGenerator class, respectively, to handle the automation of the code.
The Constants class keeps note of all necessary information about the template files, such as their location and number of lines. The Utilities class provides functions to read from and write to files. StringGenerator, a class newly introduced in this version, handles all string manipulation in a single place to increase the readability of the code. ADL v1.2 is also a Gradle project, supported by exhaustive libraries for string manipulation, file handling, and JSON processing.
3.5.1 Parser
The Parser class has only one function, checkParser(), and only 2 responsibilities: first, it converts the input spec file into a Java object using an ObjectMapper, and then it calls the ServerGenerator and ClientGenerator classes. For the ObjectMapper to work, we need a corresponding Java class for it to map to. For that purpose, we have 2 Java classes, Input.java and UI.java, both of which model the tags of the input spec file. ObjectMapper's readValue() function parses the input spec file and populates the corresponding attributes of the Input and UI classes.
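A minimal sketch of this mapping step with Jackson; the fields of the Input class here are reduced to a few of the spec tags described in Section 3.4, whereas the thesis's actual class models all of them:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.util.Map;

// Reduced stand-in for the thesis's Input class: each field mirrors a spec tag.
@JsonIgnoreProperties(ignoreUnknown = true) // tolerate tags not modeled here
class Input {
    public String title;
    public String basePath;
    public String description;
    public Map<String, Map<String, String>> models; // model name -> (attribute -> type)
}

public class ParserSketch {
    public static void main(String[] args) throws Exception {
        // readValue() walks the JSON and populates the matching fields.
        Input spec = new ObjectMapper().readValue(new File(args[0]), Input.class);
        System.out.println("Generating app: " + spec.title);
    }
}
```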
3.5.2 Server Generator
After the input spec file is parsed, the Input Java object is passed to the server generator. There are three important functions here, but before they run, all template files are copied to the target directory. The template files are of 2 types. The first type requires no modification, like MainServerApplication, pom.xml, and the application properties; we use Mustache to load these standard files as templates and replace the customizable fields with the values obtained from the input spec. The second category consists of the files that require heavy customization, which are the Controller, Model, and Repository files. Since these files are created for each model present in the spec, we first create a copy with the appropriate naming convention for each model, and then work on all 3 files for each model, one by one. The overall working of the three functions is described below. Additionally, we identify the login user from the loginUser tag in ui and store it in a global variable; the login-user model is a special entity that gets additional APIs for /login and /registration. We also create a map to store all model pairs that have a many-to-many or one-to-many mapping.
### 3.5.2.1 Models
The name of the model class is the same as the model name provided in the spec. We annotate each model with @Entity and @Table. Then each attribute of the model is considered, and we add a declaration for it with the @Column Spring annotation, along with getters and setters. A file-type attribute is stored as a String: we save the file to a server location and store only the absolute path of the file in the database. For timestamps, we use the java.util.Date format; the time is stored as the current system time, in local time format, in the database. Table 3.1 presents the various data types supported by the input spec and how they are automated in ADL.
<table>
<thead>
<tr>
<th>Data type of attribute parsed from spec, for a model M</th>
<th>Automation in Model</th>
</tr>
</thead>
<tbody>
<tr>
<td>String/file</td>
<td>String type, Only @Column annotation</td>
</tr>
<tr>
<td>Integer</td>
<td>Integer type, Only @Column annotation</td>
</tr>
<tr>
<td>X, where X is another model</td>
<td>X type, @OneToOne mapping along with @JoinColumn annotation</td>
</tr>
<tr>
<td>X[], where X is another model</td>
<td>List< X > type, @OneToMany mapping along with @JoinColumn annotation</td>
</tr>
<tr>
<td>X{}, where 'X' is another model</td>
<td>List< X > type, @ManyToMany mapping along with @JoinColumn annotation</td>
</tr>
<tr>
<td>timestamp/date</td>
<td>Date type, @Column annotation</td>
</tr>
</tbody>
</table>
Table 3.1 This table shows possible data types of attributes in model
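As an illustration of these rules, here is a hedged sketch of what a generated model class for ArtistBio might look like; the field names follow the spec described in Section 3.4, while the id column and the exact formatting are assumptions, since the generated source is not reproduced in the text:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Sketch of a generated model: one @Entity per spec model, one @Column per attribute.
@Entity
@Table(name = "ArtistBio")
public class ArtistBio {
    @Id
    @GeneratedValue
    private Long id; // assumed primary key; the thesis's PK convention is not shown

    @Column
    private String description;

    // "pic" was declared with the "file" type in the spec, so it is stored as the
    // absolute path of the uploaded file (the file itself lives on the server).
    @Column
    private String pic;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public String getPic() { return pic; }
    public void setPic(String pic) { this.pic = pic; }
}
```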
3.5.2.2 Repository
Auto-generation of the Repository is unchanged from the previous version. ADL v1.2 follows the same naming convention: the model name appended with the string Repository. Each Repository is an interface and extends Spring's JpaRepository for Hibernate, which provides functions for interacting with the database such as findAll, findById(pk), and save. If a feature specified in the UI feature tag requires a query beyond these, that function gets added to the Repository.
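A sketch of a generated repository under this convention; the derived findByName query is assumed from the getByName feature, and Spring Data JPA generates its implementation from the method name (Artist here is the corresponding generated @Entity, not repeated):

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Sketch of a generated repository: findAll, findById and save come from
// JpaRepository; findByName is derived from the getByName feature in the spec.
public interface ArtistRepository extends JpaRepository<Artist, Long> {
    List<Artist> findByName(String name);
}
```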
3.5.2.3 Controller
<table>
<thead>
<tr>
<th>Feature (for model 'm')</th>
<th>Method</th>
<th>Endpoint Created</th>
<th>Automation in Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>getAll</td>
<td>GET</td>
<td>/m/all</td>
<td>A method that returns the results of the findAll method from the Repository</td>
</tr>
<tr>
<td>getByX, where X is an attribute of m</td>
<td>GET</td>
<td>/m?X=a, a is query variable</td>
<td>A method that returns the results of the findByX method from the Repository</td>
</tr>
<tr>
<td>getXs, where X is another model having a relationship with m</td>
<td>GET</td>
<td>/m/a/XList, a is query variable</td>
<td>A method that first gets the row in m with id a, then returns the list of X present in the object</td>
</tr>
<tr>
<td>save</td>
<td>POST</td>
<td>/m</td>
<td>A method that returns the results of the save method from the Repository</td>
</tr>
<tr>
<td>bulkUpload</td>
<td>POST</td>
<td>/mBulkUpload</td>
<td>A method that accepts an Excel sheet with the first row as a header and data in subsequent rows, along with another list of files as Multipart[], each file (e.g., an image or PDF) corresponding to a row in the same order</td>
</tr>
<tr>
<td>chat</td>
<td>Websocket</td>
<td>/websocket/x, x is username</td>
<td>WebsocketConfig and WebsocketServer to handle messages</td>
</tr>
</tbody>
</table>
Table 3.2 This table shows possible features in input spec and how they are automated in Controller
Since the focus of this version is on security, we restrict access to the server APIs by using sessions, and the method signature of every API function has changed completely. For the previous version, Table 3.2 lists the various options for feature generation in ADL and shows how the code is generated for each. For the current version, Table 3.3, every API takes an additional HttpSession argument to validate whether the user has actually logged in; if an unauthorized user tries to access an API, the session will be null for that user and they will receive an HTTP 404 response. This way we protect every endpoint. To facilitate sending different return types, for example an HTTP 404 for an unauthorized user versus, say, the list of all entries for an authorized user, we use the wrapper class ResponseEntity, which enables us to send both. Additionally, version 1.2 also supports Update and Delete APIs.
### 3.5.2.4 WebSockets
The WebSockets feature of ADL v1.2 is the same as in ADL; no modification has been made to its functionality, and it has been tested in the latest version to verify that its behavior is unaltered and that no bugs were introduced. The WebSocket component is built only when a UI feature tag has the value "chat" for at least one of its screens. The back end gets built with the standard WebSocketConfig and WebSocketServer classes along with a Main Application; this is generated separately for simplicity, and the design is open to additional modification for more advanced chat features. Currently, it builds a group-chat feature with the methods onOpen, onMessage, and onClose. It is built entirely from Mustache and template files with minimal customization.
### 3.5.3 Client Generator
The client side is a web application developed in HTML, CSS, and JavaScript. First, all files related to login and registration are generated; login now becomes the starting point of the application. The design of the front end is still restrictive: every style is preconceived and generated in one specific way, but it is open to customization after generation. The login page has a link to the registration page; on a successful sign-up, control passes back to the login page. After the user enters the right credentials, two things happen: first, the session information for the user is stored; second, the server responds with the type of the logged-in user, and based on this information, all the subsequent pages and permissions are set. After the login page comes home.html, which is the landing page after a successful login and has hyperlinks to all entity pages. The CSS files are also fixed for the model, or entity, pages.
### 3.5.3.1 Login Files
This functionality is an addition in ADL v1.2. As discussed in Section 3.5.2.1, we have already identified the login entity and its variable. A login HTML file is generated by copying the template login.html file, and the login.js file is generated with the help of Mustache. The login.js file makes the appropriate calls to the server APIs for login validation and for storing the session. A registration page is also generated here: we parse through each attribute of the login entity and generate an HTML input form. A save button is also generated and linked to the appropriate back-end API for registration.
### 3.5.3.2 Model HTML Files
This function is also modified in v1.2. Previously, we would iterate through all models specified under the ui component in the spec file to create an HTML page for each model. In this version, in addition to iterating over each model, we also create versions of the same model page for different user types, based on the values set under the permission tag. For example, in Figure 3.4, under the permission tag, for the model Artist and the user type Publisher the value provided is 01001, which encodes its CRUDL permissions: a Publisher has no Create permission (so no save feature) and no Update or Delete feature; only get-by-attribute and get-all, corresponding to Read and List, are permitted. Therefore, the version of the page created for this entity has only the getBy and getAll components. Each model file follows the same naming convention: the user type appended with the model name. While iterating over each model, we also build a map of model pairs that have a many-to-many or one-to-many relationship.
The remaining process of how the file gets generated using Mustache is the same: the template file is copied to the appropriate location in the output directory, and we add hyperlinks to all other HTML pages as buttons, including the index, to interlink the whole project. Apart from the hyperlink buttons, we add a div form element for each type of "post" feature present in the input spec, i.e., save and bulkUpload, as mentioned in Table 3.2. In the case of save, we add an input HTML element for each attribute present in the model; in the case of bulkUpload, we add an input element for the Excel file and a collection of individual file elements, in accordance with the server-side API. These div elements are initially hidden, and we expose them through buttons on the model page, with each feature corresponding to a button. For GET features, as per Table 3.2, clicking the button presents the corresponding data in a tabular view. For POST features, the corresponding form is loaded.
Further, we also search the integrations component of the spec to see whether the same model requires any 3rd-party integrations; if so, we simply add the div element accordingly. The config and Google APIs are already integrated in the template.
3.5.3.3 JavaScript Files
Even in v1.2, a single JS file is generated with methods corresponding to each operation on all the model pages. As with the other pages, we load it through a template file. The major task of the JavaScript file is to make HTTP calls to the server side; the basic functionality of GET and POST HTTP calls via XMLHttpRequest is already implemented, and additionally we now have PUT and DELETE HTTP requests for the Update and Delete actions. Considering each model present in the features one at a time, we go through all the operations needed from the server side along with their permissions. Each button corresponding to a GET element in the HTML is mapped to make an HTTP GET request to the corresponding API; once the results are fetched, the data is converted to tabular form by adding each JSON object from the JSON array to the table as a row.
Further, each button corresponding to a POST element in the HTML is first mapped to launch the corresponding div form; once the data is filled out by the user, the input JSON request body is compiled from that data, including the files, if provided. The submit button in the form is then mapped to make an HTTP POST request to the corresponding API. As mentioned in Section 3.5.3.2, we keep a map of all model pairs with many-to-many and one-to-many relationships. Whenever we encounter the post option on such models, for many-to-one we display a list of all entries of the other model with radio buttons, to save the join relationship between the 2 models; for a many-to-many model pair, check boxes are displayed instead of radio buttons. The success or failure of the request is then notified. With the various mappings happening between the model file and the JavaScript file and then on to the server-side APIs, it is evident that the naming convention has to be uniform, unique, and properly defined for each type of operation, which we achieve by using the same operation names as provided by the user in the features, keeping uniformity between the client side and the server side.
3.5.3.4 WebSockets
As discussed in the server-side generation of WebSockets, the client-side WebSocket feature also retains the same behavior as in ADL. The WebSocket JavaScript file is created only if there is a requirement for the chat feature in at least one model. This JavaScript file simply makes a WebSocket call to the respective server-side methods, which implement the onOpen, onClose, and onMessage handlers. Further, a chat box is added to the concerned model page, implemented via a simple HTML table. With the help of jQuery, the chat table is made dynamic to enable live chat rendering.
<table>
<thead>
<tr>
<th>Feature (for model 'm')</th>
<th>Method</th>
<th>Endpoint created</th>
<th>Automation in Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>getAll</td>
<td>GET</td>
<td>/m/all</td>
<td>A method that returns the results of the findAll method from the Repository if the session exists; else returns HTTP 404</td>
</tr>
<tr>
<td>getByX, where X is an attribute</td>
<td>GET</td>
<td>/m?X=a, a is query variable</td>
<td>A method that returns the results of the findByX method from the Repository if the session exists; else returns HTTP 404</td>
</tr>
<tr>
<td>getXs, where X is another model having a relationship with m</td>
<td>GET</td>
<td>/m/a/XList, a is a query variable</td>
<td>A method that first gets the row in m with id a, then returns the list of X present in the object, only if the session exists; else returns HTTP 404</td>
</tr>
<tr>
<td>save</td>
<td>POST</td>
<td>/m</td>
<td>A method that returns the results of the save method from the Repository, i.e., status HTTP 200, if the session exists; else returns HTTP 404</td>
</tr>
<tr>
<td>bulkUpload</td>
<td>POST</td>
<td>/mBulkUpload</td>
<td>A method that, if the session exists, accepts an Excel sheet with the first row as a header and data in subsequent rows, along with another list of files as Multipart[], each file (e.g., an image or PDF) corresponding to a row in the same order; else returns HTTP 404</td>
</tr>
<tr>
<td>chat</td>
<td>Websocket</td>
<td>/websocket/x, x is username</td>
<td>WebSocketConfig and WebsocketServer to handle messages</td>
</tr>
<tr>
<td>update</td>
<td>PUT</td>
<td>/m/update</td>
<td>A method that returns status HTTP 200 on a successful update if the session is not null; else returns HTTP 404</td>
</tr>
<tr>
<td>delete</td>
<td>DELETE</td>
<td>/m/delete</td>
<td>A method that returns status HTTP 200 on a successful deletion if the session is not null; else returns HTTP 404</td>
</tr>
</tbody>
</table>
Table 3.3 ADL v1.2 Automation in Controller
CHAPTER 4. EXPERIMENTAL EVALUATION
4.1 Experiment Settings
All the experiments were performed in a standard Java IDE with a Java 1.8 virtual environment. The program requires one input file and one output directory location, given as absolute paths. The generated code has been tested by running the server-side code on a Windows 10 PC, with a MySQL server set up on the same machine, and the client-side UI was tested in the Google Chrome and Mozilla Firefox browsers.
4.2 Experiment 1
**Scenario:** An application for managing published Albums, Artists, and Songs.
In detail, there would be User, Artist, Album, Song, and ArtistBio entities. An artist can have released many albums, each album could be released by multiple artists, and each album has many songs. Each artist has a bio page that describes him/her. The application has users who could be a Publisher, an Admin who maintains the site, or a normal user. Note that we also incorporate features that were supported in the previous version, to demonstrate that the current version has backward compatibility.
**Input JSON Spec:**
Figure 4.1 provides a comprehensive input spec file, which captures almost all the features ADL v1.2 can automate at this moment.
**Parsing the input:**
As described in Section 3.4, the spec contains 5 models, which are User, Artist, Album, Song, and ArtistBio, with their respective attributes. Artist has a one-to-one relationship with ArtistBio and a many-to-many relationship with Album, and Album has a one-to-many relationship with Song. Also, each Artist can have a corresponding file (image/pdf/doc, etc.).
Figure 4.1 Provided Input spec
Further, with respect to features, in addition to the simple getAll and save features, we need bulkUpload and getAlbums for a given artist in Artist, getting Song information by name in Album, and the chat feature in Artist.
**Figure 4.2 Generated Output Directory Structure**
**Server-side:**
Figure 4.2 shows the auto-generated folder structure. It is generated in the specified output directory, with separate server, client, and websocket directories.
Now, Figure 4.3 shows the project structure generated for the server-side code. We verify that pom.xml, application.properties, and ServerApplication are generated in the appropriate directories, and similarly that Controllers, Models, and Repositories are generated for each model mentioned in the spec.
Further, the Spring-related config is verified in the application.properties file, which is attached in Figure 4.5. Default properties are set for the server port, the database driver (MySQL), and the DDL. The database URL is set to the local default, which can be modified manually later for production or any other MySQL DB instance.
Figure 4.3 Generated Server project structure
```
src/main/java
com.example
Album.java
AlbumController.java
AlbumRepository.java
Artist.java
ArtistBio.java
ArtistBioController.java
ArtistBioRepository.java
ArtistController.java
ArtistRepository.java
ServerApplication.java
Song.java
SongController.java
SongRepository.java
User.java
UserController.java
UserRepository.java
```
Figure 4.4 Auto generated Database Tables
```
server.port=8080
server.servlet.context-path=/music
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/music
spring.datasource.username=root
spring.datasource.password=
spring.jpa.hibernate.ddl-auto=update
```
Figure 4.5 Generated application properties file
```java
public class AlbumController {

    @Autowired
    private AlbumRepository albumRepository;

    @RequestMapping(method = RequestMethod.GET, path = "/album")
    public ResponseEntity getAlbumByName(@RequestParam("name") String name, HttpSession session) {
        String activeUser = (String) session.getAttribute("user");
        if (activeUser == null)
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        else
            return new ResponseEntity<>(albumRepository.findById(name), HttpStatus.OK);
    }

    @RequestMapping(method = RequestMethod.GET, path = "/album/{name}/artistlist")
    public ResponseEntity getArtistsForAlbumName(@PathVariable("name") String name, HttpSession session) {
        String activeUser = (String) session.getAttribute("user");
        if (activeUser == null)
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        else
            return new ResponseEntity<>(albumRepository.findById(name).get().getArtistList(), HttpStatus.OK);
    }

    @RequestMapping(method = RequestMethod.GET, path = "/album/{name}/songlist")
    public ResponseEntity getSongsForAlbumName(@PathVariable("name") String name, HttpSession session) {
        String activeUser = (String) session.getAttribute("user");
        if (activeUser == null)
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        else
            return new ResponseEntity<>(albumRepository.findById(name).get().getSongList(), HttpStatus.OK);
    }
}
```
Figure 4.6 A sample auto generated Controller Class
Figure 4.7 Generated Websocket project structure
Figure 4.4 shows the automatically generated tables in the database, created upon running the ServerApplication. The columns and types are verified, and they align with the required data types.
Now consider a generated controller class, say AlbumController in Figure 4.6, generated based on the features corresponding to Album in the input spec in Figure 4.1. Notice how the method signatures in this version's controllers differ from ADL's.
Figure 4.7 shows the generated WebSocket project, since the input spec requires a chat feature on the Artist page. The project is a simple WebSocket project in the Spring framework.
**Client-side:**
Figure 4.8 shows the layout of client-side project, with html, css and javascript files. home.css file contains all the style elements of the project. For changes in UI/UX pattern, this css file can be further modified. The basic functionalities for main application is contained in generated control.js, which calls the server-side APIs for integration. The websocket related integration is done in websocket.js. Lastly, features corresponding to each model is implemented in remaining html files, with index.html being the login page of the application.
Notice how the type is generated as a radio button for selecting the type of user in Figure 4.10. Once registration is successful, control is redirected to the login page (Figure 4.9), where the user is prompted to enter their credentials. Once the credentials are verified by the backend, the server returns the type of user, and based on the user type, the appropriate home page is loaded, whose view differs from that of other user types. Even within each model's HTML page, the actions available to different users are restricted by their CRUDL values.
Now consider the feature of saving an ArtistBio: Figure 4.11 shows a basic form element that captures the attributes of the ArtistBio model, as specified in the corresponding model definition. Since the Image
Figure 4.8 Generated Client project structure
Figure 4.9 A sample generated login page
Figure 4.10 Registration page for sign up
attribute is declared as a file type, the auto-generated form asks for a file to be uploaded. Currently, saving relationships is not supported in ADL v1.2; this is listed as future work.
Similarly, the getAll feature is verified in Figure 4.12. This table is populated by calling the GET API generated in the server-side application to fetch all Users from the User table in the database. The tables are formatted with Datatables (12), which provides a clean template with built-in sorting, searching, and pagination.
Another important feature is bulk upload, which is also listed as a requirement for the Artist model in the input spec. The bulk-upload form, shown in Figure 4.14, asks for an Excel file; each row represents a data entry in the corresponding table in the database, the first row is reserved for the header, and the column order must match the attribute order.
Next, Figure 4.13 verifies the auto-integration of the Google Maps API, mentioned under the integration component in the input spec.
Upon successfully submitting the bulk-upload form, we query the database to verify the save, as shown in Figure 4.15.
Last but not least, we verify the chat feature via websocket: the chat button appears on the Comment page, as specified in the input spec. The chat is integrated via the websocket.js file, and multiple users can chat together in the chat box.
4.3 Experiment 2
Scenario: ABET - Develop an application to display individual course information or a report of each course.
Outcome: The backend generated by ADL v1.2 is clearly very robust and has capabilities untapped by the UI. The UI is generated per model, but the requirement is to generate a report that draws fields from multiple models. The server has APIs to support this functionality, but it is restricted by the template design of the UI, which relies heavily on the table format.
Figure 4.11 Upload ArtistBio detail
Figure 4.12 Get all records for Users
Figure 4.13 Auto-generated Google Maps integration in UI
Figure 4.14 A sample input excel file for bulk upload
Figure 4.15 Data entry in MySQL db post upload
Figure 4.16 A sample auto-generated working chat feature
```json
{
"title": "ABET",
"basePath": "/abet",
"description": "Iowa State University - ABET",
"models": {
"Course": {
"number": "integer",
"name": "string",
"creditandHours": "string",
"courseinfo": "string",
"preReq": "string",
"ForSorSE": "string",
"outcomes": "Outcomes"
},
"Outcomes": {
"number": "integer",
"one": "string",
"two": "string",
"three": "string",
"four": "string",
"five": "string",
"six": "string"
}
},
"ui": {
"platform": "web",
"features": {
"Course": "getAll, getName, save, bulkUpload, getForOutcomes"
}
}
}
```
Figure 4.17 Input spec file for ABET
4.4 Measure of Success
We define the following measures of success to evaluate the performance of ADL.
4.4.1 Functionality
As we have seen, ADL v1.2 provides additional functionality such as login, registration, and server-side authentication; it stores sessions to validate users, and many-to-many relationships are also implemented. There is certainly a large scope for further additions, which we discuss in Chapter 6. Moreover, the generated code provides an advanced starting point for developers who may want to add more personalized features.
4.4.2 Efficiency
Understanding and writing the input spec is easy, and it certainly gets faster with subsequent uses. Generating the entire desired application from a correct input JSON takes a few seconds, which is far faster than designing and coding the whole application manually, which might take more than a month.
4.4.3 Maintainability
Both the App Description Language application and the generated code are easy to maintain due to the choice of technology, standardized naming conventions, and code quality.
CHAPTER 5. RELATED WORK
Automatic code generation is a relatively new topic, but a considerable amount of work and effort has been put into it. Swagger Codegen (9) is one of the most popular code generators available, and this project is inspired by Swagger Codegen's implementation idea. Swagger Codegen uses Mustache to create files from existing templates. It is a very general-purpose automation tool that supports more than 50 languages and frameworks, from which the user can choose to generate the server side. Swagger does not provide client-side automation, but Stirewalt and Rugaber et al. (3) discuss a mechanism to automate the client side using HTML and JavaScript through a tag-based specification.
Moreover, Swagger does not provide any functionality in the generated code; it lays out the foundation and basic boilerplate, although it can generate the layout for a really complex scenario. Once it generates the code, the developer has to understand the codebase and then work further on it to add functionality, such as using Hibernate to fetch records from the database.
Another area this project focuses on is that ADL can be used by a person with little to no programming experience, unless the requirement is highly complex. Swagger, in contrast, demands that the user have prior knowledge of all possible development options. It also requires manually setting endpoint paths, HTTP method types, authorization tokens, and more. Although these provide a great range of customization, they can be overwhelming.
CHAPTER 6. CONCLUSION AND FUTURE WORK
In this project, we added more capabilities to ADL. We improved security, provided default login and registration pages, and added sessions to protect every server API. We enhanced interaction with the database by implementing many-to-many relationships. The front end accommodates more UI components than before. We also achieved different views and accessibility per user type by considering permission values for CRUDL. With the above functionality added, auto-generation now covers more complex features, greatly reducing the overhead for a novice developer and leaving only business logic for advanced developers to incorporate.
For future work, one immediate area of focus could be auto-generating the front end using a framework like React or Angular to improve the maintainability and usability of client-side code. Once client-side generation is ported to a framework architecture, we can further explore options for letting users design their own look and feel for the front end; currently, an ADL user is restricted in designing the front end. Additionally, we could remove the need to write an input spec JSON altogether, since even that requires some syntactic knowledge of how a JSON object should be written; we could provide further abstraction by generating the JSON file internally while the ADL user interacts with a UI. Lastly, options to automate a plugin architecture could also be considered, so that the generated app can directly interface with various IDEs or external APIs.
REFERENCES
[8] "Maven Documentation", https://www.maven.co/
[12] "Datatables", https://datatables.net/
Insanity is repeating the same mistakes and expecting different results.
Calvin: There! I finished our secret code!
Hobbes: Let’s see.
Calvin: I assigned each letter a totally random number, so the code will be hard to crack. For letter “A”, you write 3,004,572,688. “B” is 28,731,569½.
Hobbes: That’s a good code all right.
Calvin: Now we just commit this to memory.
Calvin: Did you finish your map of our neighborhood?
Hobbes: Not yet. How many bricks does the front walk have?
— Bill Watterson, “Calvin and Hobbes” (August 23, 1990)
```c
int getRandomNumber()
{
return 4; // chosen by fair dice roll.
// guaranteed to be random.
}
```
[RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.]
— Randall Munroe, xkcd (http://xkcd.com/221/)
Reproduced under a Creative Commons Attribution-NonCommercial 2.5 License
12 Hash Tables
### 12.1 Introduction
A hash table is a data structure for storing a set of items, so that we can quickly determine whether an item is or is not in the set. The basic idea is to pick a hash function \( h \) that maps every possible item \( x \) to a small integer \( h(x) \). Then we store \( x \) in an array at index \( h(x) \); the array itself is the hash table.
Let’s be a little more specific. We want to store a set of \( n \) items. Each item is an element of a fixed set \( U \) called the universe; we use \( u \) to denote the size of the universe, which is just the number of items in \( U \). A hash table is an array \( T[0 .. m-1] \), where \( m \) is another positive integer, which we call the table size. Typically, \( m \) is much smaller than \( u \). A hash function is any function of the form
\[
h : U \rightarrow \{0, 1, \ldots, m - 1\},
\]
mapping each possible item in \( U \) to a slot in the hash table. We say that an item \( x \) hashes to the slot \( T[h(x)] \).
Of course, if \( u = m \), we can always just use the trivial hash function \( h(x) = x \); in other words, we can use the item itself as the index into the table. The resulting data structure is called a direct access table, or more commonly, an array. In most applications, however, this approach requires much more space than we can reasonably allocate. On the other hand, we rarely need to store more than a tiny fraction of \( u \). Ideally, the table size \( m \) should be roughly equal to the number \( n \) of items we actually need to store, not the number of items that we might possibly store.
The downside of using a smaller table is that we must deal with collisions. We say that two items \( x \) and \( y \) collide if their hash values are equal: \( h(x) = h(y) \). We are now left with two
different (but interacting) design decisions. First, how do we choose a hash function $h$ that can be evaluated quickly and that results in as few collisions as possible? Second, when collisions do occur, how do we resolve them?
### 12.2 The Importance of Being Random
If we already knew the precise data set that would be stored in our hash table, it is possible (but not particularly easy) to find a perfect hash function that avoids collisions entirely. Unfortunately, for most applications of hashing, we don’t know in advance what the user will put into the table. Thus, it is impossible, even in principle, to devise a perfect hash function in advance; no matter what hash function we choose, some pair of items from $\mathcal{U}$ must collide. In fact, for any fixed hash function, there is a subset of at least $|\mathcal{U}|/m$ items that all hash to the same location. If our input data happens to come from such a subset, either by chance or malicious intent, our code will come to a grinding halt. This is a real security issue with core Internet routers, for example; every router on the Internet backbone survives millions of attacks per day, including timing attacks, from malicious agents.
The only way to provably avoid this worst-case behavior is to choose our hash functions randomly. Specifically, we will fix a set $\mathcal{H}$ of functions from $\mathcal{U}$ to $\{0, 1, \ldots, m-1\}$, and then at run time, we choose our hash function randomly from the set $\mathcal{H}$ according to some fixed distribution. Different sets $\mathcal{H}$ and different distributions over that set imply different theoretical guarantees. Screw this into your brain:
**Input data is not random!**
**So good hash functions must be random!**
In particular, the simple deterministic hash function $h(x) = x \mod m$, which is often taught and recommended under the name “the division method”, is **utterly stupid**. Many textbooks correctly observe that this hash function is bad when $m$ is a power of 2, because then $h(x)$ is just the low-order bits of $x$, but then they bizarrely recommend making $m$ prime to avoid such obvious collisions. But even when $m$ is prime, any pair of items whose difference is an integer multiple of $m$ collide with absolute certainty; for all integers $a$ and $x$, we have $h(x + am) = h(x)$. Why would anyone use a hash function where they know certain pairs of keys always collide? That’s just crazy!
### 12.3 ...But Not Too Random
Most theoretical analysis of hashing assumes ideal random hash functions. Ideal randomness means that the hash function is chosen uniformly at random from the set of all functions from $\mathcal{U}$ to $\{0, 1, \ldots, m-1\}$. Intuitively, for each new item $x$, we roll a new $m$-sided die to determine the hash value $h(x)$. Ideal randomness is a clean theoretical model, which provides the strongest possible theoretical guarantees.
Unfortunately, ideal random hash functions are a theoretical fantasy; evaluating such a function would require recording values in a separate data structure which we could access using the items in our set, which is exactly what hash tables are for! So instead, we look for families of hash functions with just enough randomness to guarantee good performance. Fortunately, most hashing analysis does not actually require ideal random hash functions, but only some weaker consequences of ideal randomness.
One property of ideal random hash functions that seems intuitively useful is uniformity. A family $\mathcal{H}$ of hash functions is uniform if choosing a hash function uniformly at random from $\mathcal{H}$ makes every hash value equally likely for every item in the universe:

$$\text{Uniform: } \Pr_{h \in \mathcal{H}} [h(x) = i] = \frac{1}{m} \text{ for all } x \text{ and all } i$$
We emphasize that this condition must hold for every item $x \in U$ and every index $i$. Only the hash function $h$ is random.
In fact, despite its intuitive appeal, uniformity is not terribly important or useful by itself. Consider the family $K$ of constant hash functions defined as follows. For each integer $a$ between $0$ and $m - 1$, let $\text{const}_a$ denote the constant function $\text{const}_a(x) = a$ for all $x$, and let $K = \{\text{const}_a | 0 \leq a \leq m - 1\}$ be the set of all such functions. It is easy to see that the set $K$ is both perfectly uniform and utterly useless!
A much more important goal is to minimize the number of collisions. A family of hash functions is universal if, for any two items in the universe, the probability of collision is as small as possible:
$$\text{Universal: } \Pr_{h \in \mathcal{H}} [h(x) = h(y)] \leq \frac{1}{m} \text{ for all } x \neq y$$
(Trivially, if $x = y$, then $\Pr[h(x) = h(y)] = 1$.) Again, we emphasize that this equation must hold for every pair of distinct items; only the function $h$ is random. The family of constant functions is uniform but not universal; on the other hand, universal hash families are not necessarily uniform.¹
Most elementary hashing analysis requires only a weaker version of universality. A family of hash functions is near-universal if the probability of collision is close to ideal:

$$\text{Near-universal: } \Pr_{h \in \mathcal{H}} [h(x) = h(y)] \leq \frac{2}{m} \text{ for all } x \neq y$$
There’s nothing special about the number 2 in this definition; any other explicit constant will do.
On the other hand, some hashing analysis requires reasoning about larger sets of collisions. For any integer $k$, we say that a family of hash functions is strongly $k$-universal or $k$-uniform if for any sequence of $k$ distinct keys and any sequence of $k$ hash values, the probability that each key maps to the corresponding hash value is $1/m^k$:

$$\text{$k$-uniform: } \Pr_{h \in \mathcal{H}} \left[ \bigwedge_{j=1}^{k} h(x_j) = i_j \right] = \frac{1}{m^k} \text{ for all distinct } x_1, \ldots, x_k \text{ and all } i_1, \ldots, i_k$$
Ideal random hash functions are $k$-uniform for every positive integer $k$.
### 12.4 Chaining
One of the most common methods for resolving collisions in hash tables is called chaining. In a chained hash table, each entry $T[i]$ is not just a single item, but rather (a pointer to) a linked
¹Confusingly, universality is often called the uniform hashing assumption, even though it is not an assumption that the hash function is uniform.
list of all the items that hash to $T[i]$. Let $\ell(x)$ denote the length of the list $T[h(x)]$. To see if an item $x$ is in the hash table, we scan the entire list $T[h(x)]$. The worst-case time required to search for $x$ is $O(1)$ to compute $h(x)$ plus $O(1)$ for every element in $T[h(x)]$, or $O(1 + \ell(x))$ overall. Inserting and deleting $x$ also take $O(1 + \ell(x))$ time.
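To make the chaining operations concrete, here is a minimal C sketch of a chained hash table; this is not from the original notes, and the node and table types, the fixed multiplicative hash, and the omitted error handling are all illustrative simplifications.

```c
#include <stdlib.h>

/* A minimal chained hash table storing unsigned keys.  The hash
   function below is a placeholder; any (near-)universal family from
   Section 12.5 could be plugged in instead. */

typedef struct Node {
    unsigned key;
    struct Node *next;
} Node;

typedef struct {
    Node **table;   /* array T[0..m-1] of chains */
    size_t m;       /* table size */
    unsigned salt;  /* random salt chosen when the table is created */
} ChainedHashTable;

/* Placeholder hash: crude multiplicative hashing with the table's salt. */
static size_t hashfn(const ChainedHashTable *H, unsigned x) {
    return ((size_t)(H->salt * x)) % H->m;
}

/* Scan the chain T[h(x)]: O(1 + length of the chain) time. */
int contains(const ChainedHashTable *H, unsigned x) {
    for (Node *n = H->table[hashfn(H, x)]; n != NULL; n = n->next)
        if (n->key == x) return 1;
    return 0;
}

/* Insert at the head of the chain: O(1) time, plus an optional search. */
void insert(ChainedHashTable *H, unsigned x) {
    size_t i = hashfn(H, x);
    Node *n = malloc(sizeof(Node));
    n->key = x;
    n->next = H->table[i];
    H->table[i] = n;
}
```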
Let’s compute the expected value of $\ell(x)$, assuming our hash function is chosen from a universal family; this will immediately imply a bound on the expected time to search for an item $x$. To be concrete, let’s suppose that $x$ is not already stored in the hash table. For all items $x$ and $y$, we define the indicator variable
$$C_{x,y} = \begin{cases} 1 & \text{if } h(x) = h(y) \\ 0 & \text{otherwise} \end{cases}$$
(Equivalently, in Iverson bracket notation, $C_{x,y} = [h(x) = h(y)]$.) Since the length of $T[h(x)]$ is precisely equal to the number of items that collide with $x$, we have
$$\ell(x) = \sum_{y \in T} C_{x,y}.$$
Assuming $h$ is chosen from a universal set of hash functions, we have
$$E[C_{x,y}] = \Pr[C_{x,y} = 1] \leq \frac{1}{m} \quad \text{for all } y \neq x$$
Now we just have to grind through the definitions.
$$E[\ell(x)] = \sum_{y \in T} E[C_{x,y}] \leq \sum_{y \in T} \frac{1}{m} = \frac{n}{m}$$
We call this fraction $n/m$ the load factor of the hash table. Since the load factor shows up everywhere, we will give it its own symbol $\alpha$.
$$\alpha := \frac{n}{m}$$
Similarly, if $h$ is chosen from a near-universal set of hash functions, then $E[\ell(x)] \leq 2\alpha$. Thus, the expected time for an unsuccessful search in a chained hash table, using near-universal hashing, is $O(1 + \alpha)$. As long as the number of items $n$ is only a constant factor bigger than the table size $m$, the search time is a constant. A similar analysis gives the same expected time bound (with a slightly smaller constant) for a successful search.
Obviously, linked lists are not the only data structure we could use to store the chains; any data structure that can store a set of items will work. For example, if the universe $\mathcal{U}$ has a total ordering, we can store each chain in a balanced binary search tree. This reduces the expected time for any search to $O(1 + \log \ell(x))$, and under the simple uniform hashing assumption, the expected time for any search is $O(1 + \log \alpha)$.

Another natural possibility is to work recursively! Specifically, for each \( T[i] \), we maintain a hash table \( T_i \) containing all the items with hash value \( i \). Collisions in those secondary tables are resolved recursively, by storing secondary overflow lists in tertiary hash tables, and so on. The resulting data structure is a tree of hash tables, whose leaves correspond to items that (at some level of the tree) are hashed without any collisions. If every hash table in this tree has size \( m \), then the expected time for any search is \( O(\log_m n) \). In particular, if we set \( m = \sqrt{n} \), the expected time for any search is constant. On the other hand, there is no inherent reason to use the same hash table size everywhere; after all, hash tables deeper in the tree are storing fewer items.
**Caveat Lector!** The preceding analysis does not imply that the expected *worst-case* search time is constant. The expected worst-case search time is \( O(1 + L) \), where \( L = \max_x \ell(x) \). Under the uniform hashing assumption, the maximum list size \( L \) is very likely to grow faster than any constant, unless the load factor \( \alpha \) is significantly smaller than 1. For example, \( E[L] = \Theta(\log n / \log \log n) \) when \( \alpha = 1 \). We’ve stumbled on a powerful but counterintuitive fact about probability: When several individual items are distributed independently and uniformly at random, the resulting distribution is not uniform in the traditional sense! Later in this lecture, I’ll describe how to achieve constant expected worst-case search time using secondary hash tables.
### 12.5 Multiplicative Hashing
Arguably the simplest technique for near-universal hashing, first described by Lawrence Carter and Mark Wegman in the late 1970s, is called **multiplicative hashing**. I’ll describe two variants of multiplicative hashing, one using modular arithmetic with prime numbers, the other using modular arithmetic with powers of two. In both variants, a hash function is specified by an integer parameter \( a \), called a salt. The salt is chosen uniformly at random when the hash table is created and remains fixed for the entire lifetime of the table. All probabilities are defined with respect to the random choice of salt.
For any non-negative integer \( n \), let \([n]\) denote the \( n\)-element set \( \{0, 1, \ldots, n-1\} \), and let \([n]^+\) denote the \((n-1)\)-element set \( \{1, 2, \ldots, n-1\} \).
#### 12.5.1 Prime multiplicative hashing
The first family of multiplicative hash functions is defined in terms of a prime number \( p > |U| \). For any integer \( a \in [p]^+ \), define a function \( \text{mult}_a : U \to [m] \) by setting
\[
\text{mult}_a(x) = (ax \mod p) \mod m
\]
and let
\[
\mathcal{MP} := \{ \text{mult}_a \mid a \in [p]^+ \}
\]
denote the set of all such functions. Here, the integer \( a \) is the salt for the hash function \( \text{mult}_a \).
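For concreteness, here is one possible C implementation of $\text{mult}_a$; the particular prime, the table size, and the salt-generation code below are illustrative assumptions, not part of the definition.

```c
#include <stdint.h>
#include <stdlib.h>

/* mult_a(x) = (a*x mod p) mod m.  The prime p = 2^31 + 11 and the table
   size m are illustrative choices; 64-bit intermediates keep a*x from
   overflowing as long as items are smaller than p. */

#define P 2147483659ULL        /* a prime larger than 2^31 */
#define M 1024ULL              /* table size m */

static uint64_t salt;          /* chosen uniformly from [p]^+ at startup */

void init_salt(void) {
    salt = 1 + (uint64_t)rand() % (P - 1);   /* crude randomness, for illustration */
}

uint64_t mult_hash(uint64_t x) {
    return ((salt * x) % P) % M;
}
```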
We claim that this family of hash functions is near-universal.
The use of prime modular arithmetic is motivated by the fact that division modulo prime numbers is well-defined.
**Lemma 1.** For every integer \( a \in [p]^+ \), there is a unique integer \( z \in [p]^+ \) such that \( az \bmod p = 1 \).
**Proof:** Fix an arbitrary integer \( a \in [p]^+ \). Suppose \( az \bmod p = az' \bmod p \) for some integers \( z, z' \in [p]^+ \). Then \( a(z-z') \bmod p = 0 \), which implies that \( a(z-z') \) is divisible by \( p \). Because \( p \) is prime and \( 0 < a < p \), it follows that \( z - z' \) is divisible by \( p \). But \( -p < z - z' < p \), so in fact \( z = z' \). Thus, the function \( f_a(z) := az \bmod p \) is injective on \( [p]^+ \); moreover, \( f_a(z) \neq 0 \) for every \( z \in [p]^+ \), again because \( p \) is prime. Every injective function from a finite set to itself is a bijection, so there is a unique \( z \in [p]^+ \) such that \( az \bmod p = 1 \). \( \square \)
Lemma 2. For any elements \(a, x, y \in [p]^+\), we have a collision \(\text{mult}_a(x) = \text{mult}_a(y)\) if and only if either \(x = y\) or \(\text{mult}_a((x - y) \mod p) = 0\) or \(\text{mult}_a((y - x) \mod p) = 0\).
Proof: Fix three arbitrary elements \(a, x, y \in [p]^+\). There are three cases to consider, depending on whether \(ax \mod p\) is greater than, less than, or equal to \(ay \mod p\).
First, suppose \(ax \bmod p = ay \bmod p\). Then \(x = a^{-1}ax \bmod p = a^{-1}ay \bmod p = y\). (This is the only place we need primality.)
Next, suppose \(ax \mod p > ay \mod p\). We immediately observe that
\[
ax \mod p - ay \mod p = (ax - ay) \mod p = a(x - y) \mod p.
\]
Straightforward algebraic manipulation now implies that \(\text{mult}_a(x) = \text{mult}_a(y)\) if and only if \(\text{mult}_a((x - y) \mod p) = 0\).
\[
\text{mult}_a(x) = \text{mult}_a(y) \iff (ax \mod p) \mod m = (ay \mod p) \mod m \\
\iff (ax \mod p) - (ay \mod p) \equiv 0 \pmod{m} \\
\iff a(x - y) \mod p \equiv 0 \pmod{m} \\
\iff \text{mult}_a((x - y) \mod p) = 0
\]
Finally, if \(ax \mod p < ay \mod p\), an argument similar to the previous case implies that \(\text{mult}_a(x) = \text{mult}_a(y)\) if and only if \(\text{mult}_a((y - x) \mod p) = 0\). \qed
For any distinct integers \(x, y \in \mathbb{U}\), Lemma 2 immediately implies that
\[
\Pr_a[\text{mult}_a(x) = \text{mult}_a(y)] \leq \Pr_a[\text{mult}_a((x - y) \mod p) = 0] + \Pr_a[\text{mult}_a((y - x) \mod p) = 0].
\]
Thus, to show that \(MP\) is near-universal, it suffices to prove the following lemma.
Lemma 3. For any integer \(z \in [p]^+\), we have \(\Pr_a[\text{mult}_a(z) = 0] \leq 1/m\).
Proof: Fix an arbitrary integer \(z \in [p]^+\). Lemma 1 implies that for any integer \(h \in [p]^+\), there is a unique salt \(a \in [p]^+\) such that \(az \bmod p = h\); specifically, \(a = h \cdot z^{-1} \bmod p\). Now \(\text{mult}_a(z) = 0\) if and only if \(az \bmod p\) is a multiple of \(m\), and there are exactly \(\lfloor (p-1)/m \rfloor\) multiples of \(m\) between \(1\) and \(p-1\). Thus, there are exactly \(\lfloor (p-1)/m \rfloor\) salts \(a\) such that \(\text{mult}_a(z) = 0\), and therefore
\[
\Pr_a[\text{mult}_a(z) = 0] = \frac{\lfloor (p-1)/m \rfloor}{p-1} \leq \frac{1}{m}. \qquad \square
\]
Our analysis of the collision probability can be improved, but only slightly. Carter and Wegman observed that if \( p \bmod (m+1) = 1 \), then \( \Pr_a[\text{mult}_a(1) = \text{mult}_a(m+1)] = 2/(m+1). \) (For any positive integer \( m \), there are infinitely many primes \( p \) such that \( p \bmod (m+1) = 1 \).) For example, by enumerating all possible values of \( \text{mult}_a(x) \) when \( p = 5 \) and \( m = 3 \), we immediately observe that \( \Pr_a[\text{mult}_a(1) = \text{mult}_a(4)] = 1/2 = 2/(m+1) > 1/3. \)
### 12.5.2 Actually universal hashing
Our first example of a truly universal family of hash functions uses a small modification of the multiplicative method we just considered. For any integers \( a \in [p]^+ \) and \( b \in [p] \), let \( h_{a,b} : U \rightarrow [m] \) be the function
\[
h_{a,b}(x) = ((ax + b) \mod p) \mod m
\]
and let
\[
\mathcal{MB}^+ := \{ h_{a,b} \mid a \in [p]^+, b \in [p] \}
\]
denote the set of all \( p(p-1) \) such functions. A function in this family is specified by two salt parameters \( a \) and \( b \).
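The earlier sketch extends to \( h_{a,b} \) with a second salt; the constants below are the same illustrative ones as before, and the randomness is again deliberately crude.

```c
#include <stdint.h>
#include <stdlib.h>

/* h_{a,b}(x) = ((a*x + b) mod p) mod m, with a in [p]^+ and b in [p].
   P and M repeat the illustrative constants from the previous sketch. */

#define P 2147483659ULL   /* a prime larger than 2^31 */
#define M 1024ULL         /* table size m */

static uint64_t salt_a, salt_b;

void init_salts(void) {
    salt_a = 1 + (uint64_t)rand() % (P - 1);   /* uniform over [p]^+ */
    salt_b = (uint64_t)rand() % P;             /* uniform over [p]   */
}

uint64_t universal_hash(uint64_t x) {
    return ((salt_a * x + salt_b) % P) % M;
}
```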
**Theorem 1.** \( \mathcal{MB}^+ \) is universal.
**Proof:** Fix four integers \( r,s,x,y \in [p] \) such that \( x \neq y \) and \( r \neq s \). The linear system
\[
ax + b \equiv r \pmod{p} \\
ay + b \equiv s \pmod{p}
\]
has a unique solution \( a, b \in [p] \) with \( a \neq 0 \), namely
\[
a = (r-s)(x-y)^{-1} \mod p \\
b = (sx - ry)(x-y)^{-1} \mod p
\]
where \( z^{-1} \) denotes the mod-\( p \) multiplicative inverse of \( z \), as guaranteed by Lemma 1. It follows that
\[
\Pr_{a,b}[(ax+b) \mod p = r \text{ and } (ay+b) \mod p = s] = \frac{1}{p(p-1)},
\]
and therefore
\[
\Pr_{a,b}[h_{a,b}(x) = h_{a,b}(y)] = \frac{N}{p(p-1)},
\]
where \( N \) is the number of ordered pairs \( (r,s) \in [p]^2 \) such that \( r \neq s \) but \( r \bmod m = s \bmod m \). For each fixed \( r \in [p] \), there are at most \( \lceil p/m \rceil - 1 \) integers \( s \in [p] \) such that \( r \neq s \) but \( r \bmod m = s \bmod m \). Because \( p \) is prime, we have \( \lceil p/m \rceil - 1 \leq (p-1)/m \). We conclude that \( N \leq p(p-1)/m \), which completes the proof. \( \square \)
More careful analysis implies that the collision probability for any pair of items is exactly
\[
\frac{\bigl(p - p \bmod m\bigr)\bigl(p - (m - p \bmod m)\bigr)}{m \, p (p-1)}.
\]
Because \(p\) is prime, we must have \(0 < p \mod m < m\), so this probability is actually strictly less than \(1/m\). For example, when \(p = 5\) and \(m = 3\), the collision probability is
\[
\frac{(5 - 5 \bmod 3)\bigl(5 - (3 - 5 \bmod 3)\bigr)}{3 \cdot 5 \cdot 4} = \frac{12}{60} = \frac{1}{5} < \frac{1}{3},
\]
which we can confirm by enumerating all possible values:
(Each cell lists the hash values \( h_{a,b}(1)\;h_{a,b}(2)\;h_{a,b}(3)\;h_{a,b}(4) \) for \( p = 5 \) and \( m = 3 \).)

| \(a\) | \(b = 0\) | \(b = 1\) | \(b = 2\) | \(b = 3\) | \(b = 4\) |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 | 0 0 0 0 | 1 1 1 1 | 2 2 2 2 | 0 0 0 0 | 1 1 1 1 |
| 1 | 1 2 0 1 | 2 0 1 0 | 0 1 0 1 | 1 0 1 2 | 0 1 2 0 |
| 2 | 2 1 1 0 | 0 0 2 1 | 1 1 0 0 | 0 2 1 1 | 1 0 0 2 |
| 3 | 0 1 1 2 | 1 2 0 0 | 0 0 1 1 | 1 1 2 0 | 2 0 0 1 |
| 4 | 1 0 2 1 | 0 1 0 2 | 1 0 1 0 | 2 1 0 1 | 0 2 1 0 |
### 12.5.3 Binary multiplicative hashing
A slightly simpler variant of multiplicative hashing that avoids the need for large prime numbers was first formally analyzed by Martin Dietzfelbinger, Torben Hagerup, Jyrki Katajainen, and Martti Penttonen in 1997, although it was proposed decades earlier. For this variant, we assume that \(u = 2^w\) and \(m = 2^\ell\) for some integers \(w\) and \(\ell\). Thus, our goal is to hash \(w\)-bit integers ("words") to \(\ell\)-bit integers ("labels").
For any odd integer \(a \in [2^w]\), we define the hash function \(\text{multb}_a : U \to [m]\) as follows:
\[
\text{multb}_a(x) := \left\lfloor \frac{(a \cdot x) \bmod 2^w}{2^{w-\ell}} \right\rfloor
\]
Again, the odd integer \(a\) is the salt.

If we think of any \(w\)-bit integer \(z\) as an array of bits \(z[0..w-1]\), where \(z[0]\) is the least significant bit, this function has an easy interpretation. The product \(a \cdot x\) is \(2w\) bits long; the hash value \(\text{multb}_a(x)\) consists of the top \(\ell\) bits of the bottom half:
\[
\text{multb}_a(x) := (a \cdot x)[w-1 \,..\, w-\ell]
\]
Most programming languages automatically perform integer arithmetic modulo some power of two. If we are using an integer type with \( w \) bits, the function \( \text{multb}_a(x) \) can be implemented by a single multiplication followed by a single right-shift. For example, in C:
```c
#define hash(a,x) ((a)*(x) >> (WORDSIZE-HASHBITS))
```
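A hypothetical usage of this macro might look like the following sketch; the WORDSIZE and HASHBITS values and the crude salt generation are illustrative only. The essential requirement is that the salt is a uniformly random odd word.

```c
#include <stdint.h>
#include <stdlib.h>

#define WORDSIZE 32                /* w: bits per word (illustrative) */
#define HASHBITS 10                /* l: m = 2^10 table slots (illustrative) */
#define hash(a,x) ((a)*(x) >> (WORDSIZE-HASHBITS))

int main(void) {
    /* The salt must be a uniformly random odd w-bit integer.
       rand() is crude and too narrow; real code needs w random bits. */
    uint32_t a = (((uint32_t)rand() << 16) ^ (uint32_t)rand()) | 1u;
    uint32_t x = 123456789u;
    uint32_t h = hash(a, x);       /* top HASHBITS bits of the low word of a*x */
    return (int)(h >> HASHBITS);   /* always 0: h lies in [0, 2^HASHBITS) */
}
```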
Now we claim that the family \( \mathcal{MB} := \{ \text{multb}_a \mid a \text{ is odd} \} \) of all such functions is near-universal. To prove this claim, we again need to argue that division is well-defined, at least for a large subset of possible words. Let \( W \) denote the set of odd integers in \( [2^w] \).
**Lemma 4.** For any integers \( x, z \in W \), there is exactly one integer \( a \in W \) such that \( ax \mod 2^w = z \).
**Proof:** Fix an integer \( x \in W \). Suppose \( ax \mod 2^w = bx \mod 2^w \) for some integers \( a, b \in W \). Then \( (b-a)x \mod 2^w = 0 \), which means \( x(b-a) \) is divisible by \( 2^w \). Because \( x \) is odd, \( b-a \) must be divisible by \( 2^w \). But \( -2^w < b-a < 2^w \), so \( a \) and \( b \) must be equal. Thus, for each \( z \in W \), there is at most one \( a \in W \) such that \( ax \mod 2^w = z \). In other words, the function \( f_x : W \to W \) defined by \( f_x(a) := ax \mod 2^w \) is injective. Every injective function from a finite set to itself is a bijection. \( \square \)
**Theorem 2.** \( \mathcal{MB} \) is near-universal.
**Proof:** Fix two distinct words \( x, y \in U \) such that \( x < y \). If \( \text{multb}_a(x) = \text{multb}_a(y) \), then the top \( \ell \) bits of \( a(y-x) \bmod 2^w \) are either all 0s (if \( ax \bmod 2^w \leq ay \bmod 2^w \)) or all 1s (otherwise). Equivalently, if \( \text{multb}_a(x) = \text{multb}_a(y) \), then either \( \text{multb}_a(y-x) = 0 \) or \( \text{multb}_a(y-x) = m-1 \). Thus,
\[
\Pr[\text{multb}_a(x) = \text{multb}_a(y)] \leq \Pr[\text{multb}_a(y-x) = 0] + \Pr[\text{multb}_a(y-x) = m-1].
\]
We separately bound the terms on the right side of this inequality.
Because \( x \neq y \), we can write \( (y-x) \bmod 2^w = q2^r \) for some odd integer \( q \) and some integer \( 0 \leq r \leq w-1 \). The previous lemma implies that \( aq \bmod 2^w \) consists of \( w-1 \) random bits followed by a 1. Thus, \( aq2^r \bmod 2^w \) consists of \( w-r-1 \) random bits, followed by a 1, followed by \( r \) 0s. There are three cases to consider:
- If \( r < w-\ell \), then \( \text{multb}_a(y-x) \) consists of \( \ell \) random bits, so
\[
\Pr[\text{multb}_a(y-x) = 0] = \Pr[\text{multb}_a(y-x) = m-1] = 1/2^\ell.
\]
- If \( r = w-\ell \), then \( \text{multb}_a(y-x) \) consists of \( \ell-1 \) random bits followed by a 1, so
\[
\Pr[\text{multb}_a(y-x) = 0] = 0 \quad \text{and} \quad \Pr[\text{multb}_a(y-x) = m-1] = 2/2^\ell.
\]
- Finally, if \( r > w-\ell \), then \( \text{multb}_a(y-x) \) consists of zero or more random bits, followed by a 1, followed by one or more 0s, so
\[
\Pr[\text{multb}_a(y-x) = 0] = \Pr[\text{multb}_a(y-x) = m-1] = 0.
\]
In all cases, we have \( \Pr[\text{multb}_a(x) = \text{multb}_a(y)] \leq 2/2^\ell = 2/m \), as required. \( \square \)
### 12.6 High Probability Bounds: Balls and Bins
Any particular search in a chained hash table requires only constant expected time, but what about the worst search time? Assuming that we are using ideal random hash functions, this question is equivalent to the following more abstract problem. Suppose we toss $n$ balls independently and uniformly at random into one of $n$ bins. Can we say anything about the number of balls in the fullest bin?
**Lemma 5.** If $n$ balls are thrown independently and uniformly into $n$ bins, then with high probability, the fullest bin contains $O(\log n/\log \log n)$ balls.
**Proof:** Let $X_j$ denote the number of balls in bin $j$, and let $\hat{X} = \max_j X_j$ be the maximum number of balls in any bin. Clearly, $E[X_j] = 1$ for all $j$.
Now consider the probability that bin $j$ contains at least $k$ balls. There are $\binom{n}{k}$ choices for those $k$ balls, and the probability of any particular subset of $k$ balls landing in bin $j$ is $1/n^k$, so the union bound ($\Pr[A \vee B] \leq \Pr[A] + \Pr[B]$ for any events $A$ and $B$) implies

$$\Pr[X_j \geq k] \leq \binom{n}{k} \left( \frac{1}{n} \right)^k \leq \frac{n^k}{k!} \cdot \frac{1}{n^k} = \frac{1}{k!}$$
Setting $k = 2c \lg n / \lg \lg n$, we have
$$k! \geq k^{k/2} = \left( \frac{2c \lg n}{\lg \lg n} \right)^{2c \lg n / \lg \lg n} \geq \left( \sqrt{\lg n} \right)^{2c \lg n / \lg \lg n} = 2^{c \lg n} = n^c,$$
which implies that
$$\Pr \left[ X_j \geq \frac{2c \lg n}{\lg \lg n} \right] < \frac{1}{n^c}.$$
This probability bound holds for every bin $j$. Thus, by the union bound, we conclude that
$$\Pr \left[ \hat{X} > \frac{2c \lg n}{\lg \lg n} \right] = \Pr \left[ X_j > \frac{2c \lg n}{\lg \lg n} \text{ for some } j \right] \leq \sum_{j=1}^{n} \Pr \left[ X_j > \frac{2c \lg n}{\lg \lg n} \right] < \frac{1}{n^{c-1}}. \quad \square$$
A somewhat more complicated argument implies that if we throw $n$ balls randomly into $n$ bins, then with high probability, the most popular bin contains at least $\Omega(\log n/\log \log n)$ balls.
However, if we make the hash table large enough, we can expect every ball to land in its own bin. Suppose there are $m$ bins. Let $C_{ij}$ be the indicator variable that equals 1 if and only if $i \neq j$ and ball $i$ and ball $j$ land in the same bin, and let $C = \sum_{i<j} C_{ij}$ be the total number of pairwise collisions. Since the balls are thrown uniformly at random, the probability of a collision is exactly $1/m$, so $E[C] = \binom{n}{2}/m$. In particular, if $m = n^2$, the expected number of collisions is less than $1/2$.
To get a high probability bound, let $X_j$ denote the number of balls in bin $j$, as in the previous proof. We can easily bound the probability that bin $j$ is empty, by taking the two most significant terms in a binomial expansion:
$$\Pr[X_j = 0] = \left( 1 - \frac{1}{m} \right)^n = \sum_{i=0}^{n} \binom{n}{i} \left( \frac{-1}{m} \right)^i = 1 - \frac{n}{m} + \Theta \left( \frac{n^2}{m^2} \right) > 1 - \frac{n}{m}$$
We can similarly bound the probability that bin $j$ contains exactly one ball:
$$\Pr[X_j = 1] = n \cdot \frac{1}{m} \left( 1 - \frac{1}{m} \right)^{n-1} = \frac{n}{m} \left( 1 - \frac{n-1}{m} + \Theta \left( \frac{n^2}{m^2} \right) \right) > \frac{n}{m} - \frac{n(n-1)}{m^2}$$
It follows immediately that \( \Pr[X_j > 1] < n(n-1)/m^2 \). The union bound now implies that \( \Pr[\hat{X} > 1] < n(n-1)/m \). If we set \( m = n^{2+\epsilon} \) for any constant \( \epsilon > 0 \), then the probability that no bin contains more than one ball is at least \( 1 - 1/n^\epsilon \).
**Lemma 6.** For any \( \epsilon > 0 \), if \( n \) balls are thrown independently and uniformly into \( n^{2+\epsilon} \) bins, then with high probability, no bin contains more than one ball.
We can give a slightly weaker version of this lemma that assumes only near-universal hashing. Suppose we hash \( n \) items into a table of size \( m \). Linearity of expectation implies that the expected number of pairwise collisions is
\[
\sum_{x < y} \Pr[h(x) = h(y)] \leq \binom{n}{2} \frac{2}{m} = \frac{n(n-1)}{m}.
\]
In particular, if we set \( m = cn^2 \), the expected number of collisions is less than \( 1/c \), which implies that the probability of even a single collision is less than \( 1/c \).
### 12.7 Perfect Hashing
So far we are faced with two alternatives. If we use a small hash table to keep the space usage down, even if we use ideal random hash functions, the resulting worst-case expected search time is \( \Theta(\log n/\log \log n) \) with high probability, which is not much better than a binary search tree. On the other hand, we can get constant worst-case search time, at least in expectation, by using a table of roughly quadratic size, but that seems unduly wasteful.
Fortunately, there is a fairly simple way to combine these two ideas to get a data structure of linear expected size, whose expected worst-case search time is constant. At the top level, we use a hash table of size \( m = n \), but instead of linked lists, we use secondary hash tables to resolve collisions. Specifically, the \( j \)th secondary hash table has size \( 2n_j^2 \), where \( n_j \) is the number of items whose primary hash value is \( j \). Our earlier analysis implies that with probability at least \( 1/2 \), the secondary hash table has no collisions at all, so the worst-case search time in any secondary hash table is \( O(1) \). (If we discover a collision in some secondary hash table, we can simply rebuild that table with a new near-universal hash function.)
Although this data structure apparently needs significantly more memory for each secondary structure, the overall increase in space is insignificant, at least in expectation.
**Lemma 7.** Assuming near-universal hashing, we have \( \mathbb{E} \left[ \sum_i n_i^2 \right] < 3n \).
**Proof:** Let \( h(x) \) denote the position of \( x \) in the primary hash table. We can rewrite the sum \( \sum_i n_i^2 \) in terms of the indicator variables \([h(x) = i]\) as follows. The first equation uses the definition of \( n_i \); the rest is just routine algebra.
\[ \sum_i n_i^2 = \sum_i \left( \sum_x [h(x) = i] \right)^2 \]
\[ = \sum_i \sum_x \sum_y [h(x) = i]\,[h(y) = i] \]
\[ = \sum_i \left( \sum_x [h(x) = i]^2 + 2 \sum_{x < y} [h(x) = i]\,[h(y) = i] \right) \]
\[ = \sum_x \sum_i [h(x) = i]^2 + 2 \sum_{x < y} \sum_i [h(x) = i]\,[h(y) = i] \]
\[ = \sum_x \sum_i [h(x) = i] + 2 \sum_{x < y} [h(x) = h(y)] \]
The first sum is equal to \( n \), because each item \( x \) hashes to exactly one index \( i \), and the second sum is just the number of pairwise collisions. Linearity of expectation immediately implies that
\[ E \left[ \sum_i n_i^2 \right] = n + 2 \sum_{x < y} \Pr[h(x) = h(y)] \leq n + 2 \cdot \frac{n(n-1)}{2} \cdot \frac{2}{n} = 3n - 2. \]
This lemma immediately implies that the expected size of our two-level hash table is \( O(n) \). By our earlier analysis, the expected worst-case search time is \( O(1) \).
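To make the two-level construction concrete, here is a C sketch; the prime, the hash family, the nonzero-key assumption, and the elided primary-level grouping are all illustrative choices. The rebuild loop terminates quickly because, as argued above, each attempt succeeds with probability at least 1/2.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PRIME 2147483659ULL   /* illustrative prime > 2^31, as in the earlier sketch */

/* ((salt*x) mod PRIME) mod size; keys are assumed nonzero and < PRIME. */
static uint64_t hmod(uint64_t salt, uint64_t x, uint64_t size) {
    return ((salt * x) % PRIME) % size;
}

static uint64_t random_salt(void) {
    return 1 + (uint64_t)rand() % (PRIME - 1);   /* crude, for illustration */
}

typedef struct {
    uint64_t salt, size;
    uint64_t *slots;          /* secondary table; 0 marks an empty slot */
} Bucket;

typedef struct {
    uint64_t salt, n;         /* primary salt; n items and n primary slots */
    Bucket *bucket;           /* n secondary tables */
} PerfectTable;

/* Build a secondary table of size 2*c^2 for the c keys hashing to one
   primary slot, retrying with fresh salts until there are no collisions. */
static void build_bucket(Bucket *b, const uint64_t *keys, uint64_t c) {
    b->size = 2 * c * c;
    if (b->size == 0) { b->slots = NULL; return; }
    b->slots = malloc(b->size * sizeof(uint64_t));
    for (;;) {
        b->salt = random_salt();
        memset(b->slots, 0, b->size * sizeof(uint64_t));
        uint64_t ok = 1;
        for (uint64_t i = 0; i < c && ok; i++) {
            uint64_t j = hmod(b->salt, keys[i], b->size);
            if (b->slots[j]) ok = 0;          /* collision: pick a new salt */
            else b->slots[j] = keys[i];
        }
        if (ok) return;
    }
}

/* O(1) worst-case lookup: one primary probe, one secondary probe. */
int lookup(const PerfectTable *T, uint64_t x) {
    const Bucket *b = &T->bucket[hmod(T->salt, x, T->n)];
    if (b->size == 0) return 0;
    return b->slots[hmod(b->salt, x, b->size)] == x;
}
```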
### 12.8 Open Addressing
Another method used to resolve collisions in hash tables is called open addressing. Here, rather than building secondary data structures, we resolve collisions by looking elsewhere in the table. Specifically, we have a sequence of hash functions \( \langle h_0, h_1, h_2, \ldots, h_{m-1} \rangle \), such that for any item \( x \), the probe sequence \( \langle h_0(x), h_1(x), \ldots, h_{m-1}(x) \rangle \) is a permutation of \( \langle 0, 1, 2, \ldots, m-1 \rangle \). In other words, different hash functions in the sequence always map \( x \) to different locations in the hash table.
We search for \( x \) using the following algorithm, which returns the array index \( i \) if \( T[i] = x \), ‘absent’ if \( x \) is not in the table but there is an empty slot, and ‘full’ if \( x \) is not in the table and there are no empty slots.
\[
\begin{array}{l}
\textsc{OpenAddressSearch}(x): \\
\quad \text{for } i \leftarrow 0 \text{ to } m-1 \\
\qquad \text{if } T[h_i(x)] = x \\
\qquad\quad \text{return } h_i(x) \\
\qquad \text{else if } T[h_i(x)] = \varnothing \\
\qquad\quad \text{return `absent'} \\
\quad \text{return `full'}
\end{array}
\]
The algorithm for inserting a new item into the table is similar; only the second-to-last line is changed to \( T[h_i(x)] \leftarrow x \). Notice that for an open-addressed hash table, the load factor is never bigger than 1.
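In C, the search and insertion procedures might look like the following sketch; the EMPTY sentinel, the abstract probe() function, and the nonzero-key assumption are illustrative. Concrete probe sequences appear in Section 12.9.

```c
#include <stdint.h>
#include <stddef.h>

#define EMPTY 0                    /* sentinel: keys are assumed nonzero */

extern uint64_t table[];           /* T[0..m-1], defined elsewhere */
extern size_t m;                   /* table size */
extern size_t probe(uint64_t x, size_t i);   /* the i-th probe h_i(x) */

/* Returns the index of x, -1 if x is absent, or -2 if the table is full. */
long search(uint64_t x) {
    for (size_t i = 0; i < m; i++) {
        size_t j = probe(x, i);
        if (table[j] == x)     return (long)j;
        if (table[j] == EMPTY) return -1;    /* 'absent' */
    }
    return -2;                               /* 'full' */
}

/* Insertion differs only in the second-to-last step, as noted above. */
int insert_key(uint64_t x) {
    for (size_t i = 0; i < m; i++) {
        size_t j = probe(x, i);
        if (table[j] == x) return 1;                       /* already present */
        if (table[j] == EMPTY) { table[j] = x; return 1; } /* claim the slot  */
    }
    return 0;                                              /* table full */
}
```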
Just as with chaining, we’d like to pretend that the sequence of hash values is truly random, for purposes of analysis. Specifically, most open-addressed hashing analysis uses the following assumption, which is impossible to enforce in practice, but leads to reasonably predictive results for most applications.
Strong uniform hashing assumption:
For each item \( x \), the probe sequence \( \langle h_0(x), h_1(x), \ldots, h_{m-1}(x) \rangle \) is equally likely to be any permutation of the set \( \{0, 1, 2, \ldots, m-1\} \).
Let’s compute the expected time for an unsuccessful search in light of this assumption. Suppose there are currently \( n \) elements in the hash table. The strong uniform hashing assumption has two important consequences:
- **Uniformity**: For each item \( x \) and index \( i \), the hash value \( h_i(x) \) is equally likely to be any integer in the set \( \{0, 1, 2, \ldots, m-1\} \).
- **Independence**: For each item \( x \), if we ignore the first probe \( h_0(x) \), the remaining probe sequence \( \langle h_1(x), h_2(x), \ldots, h_{m-1}(x) \rangle \) is equally likely to be any permutation of the smaller set \( \{0, 1, 2, \ldots, m-1\} \setminus \{h_0(x)\} \).
Uniformity implies that the probability that \( T[h_0(x)] \) is occupied is exactly \( n/m \). Independence implies that if \( T[h_0(x)] \) is occupied, our search algorithm recursively searches the rest of the hash table! Since the algorithm will never again probe \( T[h_0(x)] \), for purposes of analysis, we might as well pretend that slot in the table no longer exists. Thus, we get the following recurrence for the expected number of probes, as a function of \( m \) and \( n \):
\[
E[T(m, n)] = 1 + \frac{n}{m} E[T(m-1, n-1)].
\]
The trivial base case is \( T(m, 0) = 1 \); if there’s nothing in the hash table, the first probe always hits an empty slot. We can now easily prove by induction that \( E[T(m, n)] \leq m/(m-n) \):
\[
\begin{aligned}
E[T(m, n)] &= 1 + \frac{n}{m} E[T(m-1, n-1)] \\
&\leq 1 + \frac{n}{m} \cdot \frac{m-1}{m-n} && \text{[induction hypothesis]} \\
&< 1 + \frac{n}{m} \cdot \frac{m}{m-n} && \text{[$m-1 < m$]} \\
&= \frac{m}{m-n} && \text{[algebra]}
\end{aligned}
\]
Rewriting this in terms of the load factor \( \alpha = n/m \), we get \( E[T(m, n)] \leq 1/(1-\alpha) \). In other words, the expected time for an unsuccessful search is \( O(1) \), unless the hash table is almost completely full.
### 12.9 Linear and Binary Probing
In practice, however, we can’t generate ideal random probe sequences, so we must rely on a simpler probing scheme to resolve collisions. Perhaps the simplest scheme is **linear probing**—use a single hash function \( h(x) \) and define
\[
h_i(x) := (h(x) + i) \mod m
\]
This strategy has several advantages, in addition to its obvious simplicity. First, because the probing strategy visits consecutive entries in the hash table, linear probing exhibits better cache performance than other strategies. Second, as long as the load factor is strictly less than 1,
the expected length of any probe sequence is provably constant; moreover, this performance is guaranteed even for hash functions with limited independence. On the other hand, the number of probes grows quickly as the load factor approaches 1, because the occupied cells in the hash table tend to cluster together. On the gripping hand, this clustering is arguably an advantage of linear probing, since any access to the hash table loads several nearby entries into the cache.
A simple variant of linear probing called binary probing is slightly easier to analyze. Assume that \( m = 2^\ell \) for some integer \( \ell \) (as in binary multiplicative hashing), and define
\[ h_i(x) := h(x) \oplus i \]
where \( \oplus \) denotes bitwise exclusive-or. This variant of linear probing has slightly better cache performance, because cache lines (and disk pages) usually cover address ranges of the form \([r2^k..(r + 1)2^k - 1]\); assuming the hash table is aligned in memory correctly, binary probing will scan one entire cache line before loading the next one.
Several more complex probing strategies have been proposed in the literature. Two of the most common are quadratic probing, where we use a single hash function \( h \) and set \( h_i(x) := (h(x) + i^2) \mod m \), and double hashing, where we use two hash functions \( h \) and \( h' \) and set \( h_i(x) := (h(x) + i \cdot h'(x)) \mod m \). These methods have some theoretical advantages over linear and binary probing, but they are not as efficient in practice, primarily due to cache effects.
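For reference, the four probing strategies might be implemented as follows; h, h2, and m are assumed to be declared elsewhere, and for double hashing h'(x) must be relatively prime to m (for example, odd when m is a power of two) so that the probe sequence is a permutation.

```c
#include <stdint.h>
#include <stddef.h>

extern size_t m;                 /* table size; a power of two for binary probing */
extern size_t h(uint64_t x);     /* primary hash, with values in [0, m) */
extern size_t h2(uint64_t x);    /* secondary hash for double hashing */

size_t probe_linear(uint64_t x, size_t i)    { return (h(x) + i) % m; }

/* XOR keeps the result below m whenever m = 2^l and i < m. */
size_t probe_binary(uint64_t x, size_t i)    { return h(x) ^ i; }

size_t probe_quadratic(uint64_t x, size_t i) { return (h(x) + i * i) % m; }

size_t probe_double(uint64_t x, size_t i)    { return (h(x) + i * h2(x)) % m; }
```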
### ⋆12.10 Analysis of Binary Probing
**Lemma 8.** In a hash table of size \( m = 2^\ell \) containing \( n \leq m/4 \) keys, built using binary probing, the expected time for any search is \( O(1) \), assuming ideal random hashing.
**Proof:** The hash table is an array \( H[0..m - 1] \). For each integer \( k \) between 0 and \( \ell \), we partition \( H \) into \( m/2^k \) level-\( k \) blocks of length \( 2^k \); each level-\( k \) block has the form \( H[c2^k..(c + 1)2^k - 1] \) for some integer \( c \). Each level-\( k \) block contains exactly two level-\( (k - 1) \) blocks; thus, the blocks implicitly define a complete binary tree of depth \( \ell \).
Now suppose we want to search for a key \( x \). For any integer \( k \), let \( B_k(x) \) denote the range of indices for the level-\( k \) block containing \( H[h(x)] \):
\[ B_k(x) = \left[ 2^k \left\lfloor h(x)/2^k \right\rfloor \,..\, 2^k \left\lfloor h(x)/2^k \right\rfloor + 2^k - 1 \right] \]
Similarly, let \( B'_k(x) \) denote the sibling of \( B_k(x) \) in the block tree; that is, \( B'_k(x) = B_{k+1}(x) \setminus B_k(x) \). We refer to each \( B_k(x) \) as an ancestor of \( x \) and each \( B'_k(x) \) as an uncle of \( x \). The proper ancestors of any uncle of \( x \) are also proper ancestors of \( x \).
The binary probing algorithm can be recast conservatively as follows:
```plaintext
BinaryProbing(x):
  if H[h(x)] = x
    return True
  if H[h(x)] is empty
    return False
  for k <- 0 to l-1
    for each index j in B'_k(x)
      if H[j] = x
        return True
      if H[j] is empty
        return False
```
For purposes of analysis, suppose the target item $x$ is not in the table. (The time to search for an item that is in the table can only be faster.) Then the expected running time of $\text{BinaryProbing}(x)$ can be expressed as follows:
$$E[T(x)] \leq \sum_{k=0}^{\ell-1} O(2^k) \cdot \Pr[B_k'(x) \text{ is full}].$$
Assuming ideal random hashing, all blocks at the same level have equal probability of being full. Let $F_k$ denote the probability that a fixed level-$k$ block is full. Then we have
$$E[T(x)] \leq \sum_{k=0}^{\ell-1} O(2^k) \cdot F_k.$$
Call a level-$k$ block $B$ popular if there are at least $2^k$ items $y$ in the table such that $h(y) \in B$. Every popular block is full, but full blocks are not necessarily popular.
If block $B_k(x)$ is full but not popular, then $B_k(x)$ contains at least one item whose hash value is not in $B_k(x)$. Let $y$ be the first such item inserted into the hash table. When $y$ was inserted, some uncle block $B_j'(x) = B_j(y)$ with $j \geq k$ was already full. Let $B_j'(x)$ be the first uncle of $B_k(x)$ to become full. The only blocks that can overflow into $B_j(y)$ are its uncles, which are all either ancestors or uncles of $B_k(x)$. But when $B_j(y)$ became full, no other uncle of $B_k(x)$ was full. Moreover, $B_k(x)$ was not yet full (because there was still room for $y$), so no ancestor of $B_k(x)$ was full. It follows that $B_j'(x)$ is popular.
We conclude that if a block is full, then either that block or one of its uncles is popular. Thus, if we write $P_k$ to denote the probability that a fixed level-$k$ block is popular, we have
$$F_k \leq 2P_k + \sum_{j > k} P_j.$$
We can crudely bound the probability $P_k$ as follows. Each of the $n$ items in the table hashes into a fixed level-$k$ block with probability $2^k/m$; thus,
$$P_k \leq \binom{n}{2^k} \left( \frac{2^k}{m} \right)^{2^k} \leq \frac{n^{2^k}}{(2^k)!} \cdot \frac{(2^k)^{2^k}}{m^{2^k}} \leq \left( \frac{en}{m} \right)^{2^k}$$
(The last inequality uses a crude form of Stirling’s approximation: $n! > n^n/e^n$.) Our assumption $n \leq m/4$ implies the simpler inequality $P_k < (e/4)^{2^k}$. Because $e < 4$, it is easy to see that $P_k < 4^{-k}$ for all sufficiently large $k$.
It follows that $F_k = O(4^{-k})$, which implies that the expected search time is at most $\sum_{k \geq 0} O(2^k) \cdot O(4^{-k}) = \sum_{k \geq 0} O(2^{-k}) = O(1)$.
\[\square\]
### 12.11 Cuckoo Hashing
Write this.
#### Exercises
1. Your boss wants you to find a perfect hash function for mapping a known set of $n$ items into a table of size $m$. A hash function is perfect if there are no collisions; each of the $n$ items
is mapped to a different slot in the hash table. Of course, a perfect hash function is only possible if \( m \geq n \). (This is a different definition of “perfect” than the one considered in the lecture notes.) After cursing your algorithms instructor for not teaching you about (this kind of) perfect hashing, you decide to try something simple: repeatedly pick ideal random hash functions until you find one that happens to be perfect.
(a) Suppose you pick an ideal random hash function \( h \). What is the exact expected number of collisions, as a function of \( n \) (the number of items) and \( m \) (the size of the table)? Don’t worry about how to resolve collisions; just count them.
(b) What is the exact probability that a random hash function is perfect?
(c) What is the exact expected number of different random hash functions you have to test before you find a perfect hash function?
(d) What is the exact probability that none of the first \( N \) random hash functions you try is perfect?
(e) How many ideal random hash functions do you have to test to find a perfect hash function with high probability?
2. (a) Describe a set of hash functions that is uniform but not (near-)universal.
(b) Describe a set of hash functions that is universal but not (near-)uniform.
(c) Describe a set of hash functions that is universal but not (near-)3-universal.
(d) A family of hash function is pairwise independent if knowing the hash value of any one item gives us absolutely no information about the hash value of any other item; more formally,
\[
\Pr_h [h(x) = i \mid h(y) = j] = \Pr_h [h(x) = i],
\]
or equivalently,
\[
\Pr_h [(h(x) = i) \wedge (h(y) = j)] = \Pr_h [h(x) = i] \cdot \Pr_h [h(y) = j],
\]
for all distinct items \( x \neq y \) and all (possibly equal) hash values \( i \) and \( j \).
Describe a set of hash functions that is uniform but not pairwise independent.
(e) Describe a set of hash functions that is pairwise independent but not (near-)uniform.
(f) Describe a set of hash functions that is universal but not pairwise independent.
(g) Describe a set of hash functions that is pairwise independent but not (near-)universal.
(h) Describe a set of hash functions that is universal and pairwise independent but not uniform, or prove no such set exists.
3. (a) Prove that the set \( MB \) of binary multiplicative hash functions described in Section 12.5 is not uniform. [Hint: What is \( \text{mult}_a(0) \)?]
(b) Prove that \( MB \) is not pairwise independent. [Hint: Compare \( \text{mult}_a(0) \) and \( \text{mult}_a(2^{w-1}) \).]
(c) Consider the following variant of multiplicative hashing, which uses slightly longer salt parameters. For any integers $a, b \in [2^{w+\ell}]$ where $a$ is odd, let
$$h_{a,b}(x) := \left( (a \cdot x + b) \mod 2^{w+\ell} \right) \div 2^w = \left\lfloor \frac{(a \cdot x + b) \mod 2^{w+\ell}}{2^w} \right\rfloor,$$
and let $\mathcal{M}^+ = \{h_{a,b} \mid a, b \in [2^{w+\ell}] \text{ and } a \text{ odd} \}$. Prove that the family of hash functions $\mathcal{M}^+$ is strongly near-universal:
$$\Pr_{h \in \mathcal{M}^+} [(h(x) = i) \land (h(y) = j)] \leq \frac{2}{m^2}$$
for all items $x \neq y$ and all (possibly equal) hash values $i$ and $j$.
4. Suppose we are using an open-addressed hash table of size $m$ to store $n$ items, where $n \leq m/2$. Assume an ideal random hash function. For any $i$, let $X_i$ denote the number of probes required for the $i$th insertion into the table, and let $X = \max_i X_i$ denote the length of the longest probe sequence.
(a) Prove that $\Pr[X_i > k] \leq 1/2^k$ for all $i$ and $k$.
(b) Prove that $\Pr[X_i > 2 \lg n] \leq 1/n^2$ for all $i$.
(c) Prove that $\Pr[X > 2 \lg n] \leq 1/n$.
(d) Prove that $\mathbb{E}[X] = O(\log n)$.
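For intuition on Problem 4, here is a small simulation (ours, with illustrative parameters) that measures the longest probe sequence. The ideal random probe sequence is approximated by independent uniform probes; since the table is never more than half full when $n \leq m/2$, each probe hits an occupied slot with probability at most $1/2$, so the $\Pr[X_i > k] \leq 1/2^k$ bound still applies.

```python
import math
import random

def longest_probe_sequence(m, n):
    """Insert n items into an open-addressed table of size m; each probe
    targets a uniformly random slot, stopping at the first empty one.
    Returns the maximum number of probes over all n insertions."""
    table = [False] * m
    worst = 0
    for _ in range(n):
        probes = 1
        slot = random.randrange(m)
        while table[slot]:          # occupied: probe again
            probes += 1
            slot = random.randrange(m)
        table[slot] = True
        worst = max(worst, probes)
    return worst

m = 1024
n = m // 2
samples = [longest_probe_sequence(m, n) for _ in range(100)]
print(max(samples), "vs bound", 2 * math.log2(n))
```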
Putting Declarative Meta Control to Work
Apollo Hogan
Department of Mathematics, University of California, Berkeley, California
Reinhard Stolle
Xerox PARC, Palo Alto, California
Elizabeth Bradley*
Department of Computer Science, University of Colorado, Boulder, Colorado
Abstract
As artificial intelligence techniques are maturing and being deployed in large applications, the problem of specifying control and reasoning strategies is regaining attention. Complex AI systems tend to comprise a suite of modules, each of which is capable of solving a different aspect of the overall problem, and each of which may incorporate a different reasoning paradigm. The orchestration of such heterogeneous problem solvers can be divided into two subproblems: 1. When and how are various reasoning modes invoked?, and 2. How is information passed between various reasoning modes? In this paper we present our solution to this problem. We describe a logic programming system that accomplishes three important goals: equivalence of declarative and operational semantics, declarative specification of control information, and smoothness of interaction with non-logic-based programs. Meta-level predicates are used to specify control information declaratively, compensating for the absence of procedural constructs that usually facilitate formulation of efficient programs. Knowledge that has been derived in the course of the current inference process can at any time be passed to non-logic-based program modules. Traditional SLD inference engines maintain only the linear path to the current state in the SLD search tree: formulae that have been proved on this path are implicitly represented in a stack of recursive calls to the inference engine, and formulae that have been proved on previous, unsuccessful paths are lost altogether. In our system, previously proved formulae are maintained explicitly and therefore can be passed to other reasoning modules. As an application example, we show how this inference system acts as the knowledge representation and reasoning framework of PRET—a program that automates system identification.
Keywords: automated reasoning; reasoning architectures; meta architectures; meta control; meta programming; reasoning strategies; logic programming; declarative programming.
1. Introduction
As artificial intelligence techniques are maturing and being used in large applications, the problem of specifying control and reasoning strategies is regaining attention. Furthermore,
* Supported by NSF NYI #CCR-9357740, ONR #N00014-96-1-0720, and a Packard Fellowship in Science and Engineering from the David and Lucile Packard Foundation. This research was carried out while the first two authors were research assistants at the University of Colorado.
as the complexity of applications is scaling up, it is becoming less feasible to capture all aspects of an AI system's functionality in a single reasoning paradigm. Rather, complex AI systems tend to comprise a suite of modules, each of which is capable of solving a different aspect of the overall problem and each of which may incorporate a different reasoning paradigm. There seems to be an emerging sense in the AI research community that the orchestration of such heterogeneous problem solvers is in itself a difficult problem that deserves to be solved using AI techniques (e.g., (Waltz, 1999; Buchanan, 2001)). The orchestration problem can be divided into two subproblems: 1. When and how are various reasoning modes invoked?, and 2. How is information passed between various reasoning modes? In this paper we present our solution to this problem, along with an application example.
Many languages that are designed for the declarative representation of domain knowledge are variants of first-order logic. One of the major advantages of logical representations is their clearly defined semantics: the domain knowledge can be interpreted as a logical theory. Logic programs can also be executed. Ideally, a logic program's declarative semantics (when interpreted as a logical theory) are equivalent to its operational semantics (when executed with respect to queries). In practice, the equivalence of declarative and operational semantics is often sacrificed for various reasons. Purely procedural constructs like the PROLOG cut, for example, are useful in the construction of efficient programs; however, their semantics cannot be described declaratively. Furthermore, control information is typically encoded implicitly in the static ordering of rules and goals. Finally, the commonly used principle of negation as failure confuses existential with universal quantification of non-ground goals.
This paper presents a logic system that accomplishes three important goals:
1. Declarative and operational semantics are equivalent.
2. Control information is represented explicitly, declaratively, and separately from domain knowledge.
3. Interaction with other programs is facilitated by an explicit representation of the theorem prover's state.
The first two goals are achieved by implementation of concepts developed as part of the "RISC" project (Reason Maintenance Based Inference System for Generalized Horn Clause Logic) at the University of Erlangen (Beckstein & Tobermann, 1992; Beckstein, Stolle, & Tobermann, 1996; Beckstein & Tobermann, 1997). The third goal was accomplished by allowing non-logic-based reasoning modules access to the current state of the theorem prover. This feature is particularly important for the design of heterogeneous systems that integrate and orchestrate a variety of domain-specific reasoning techniques. For example, the logic system presented in this paper is currently used as the knowledge representation and reasoning framework of PRET, an automated modeling tool that finds ordinary differential equations (ODEs) that model black-box dynamical systems (Bradley & Stolle, 1996; Stolle & Bradley, 1998; Stolle, 2001; Bradley, Easley, & Stolle, 2001). The achievement of the three goals listed above is crucial to the success of this modeling task, but the contributions described here generalize well beyond this particular application domain. The third goal, in particular, is significant for any automated reasoning system that integrates several different
reasoning modes. The various modules of such a hybrid reasoner typically must be able to access knowledge that has been generated before (either by themselves or by other modules). In our SLD-based system,\(^1\) invocation of different modules is triggered by the evaluation of subgoals of the currently active goal. Traditional SLD inference engines maintain only the linear path to the current state in the SLD search tree (Lloyd, 1987). Formulae that have been proved on this path are typically implicitly represented in a stack of recursive calls to the inference engine, and formulae that have been proved on previous, unsuccessful paths are lost altogether. In our system, previously proved formulae are maintained explicitly and therefore can be passed to other reasoning modules.
The language of the logic system presented in this paper is that of Generalized Horn Clause Intuitionistic Logic (GHCIL) (McCarty, 1988a, 1988b). The inference engine can be briefly characterized as a GHCIL reasoner with declarative meta-level control and explicit representation of previously derived knowledge. The next three sections describe the GHCIL language, the meta-level control, and the explicit representation of previously derived formulae. As an application example, we show how this inference system acts as PRET’s knowledge representation and reasoning framework. We conclude the paper with some pointers to related work.
2. The Language
GHCIL clauses are (implicitly) universally quantified implications of the following form.
1. Every definite Horn clause is a GHCIL clause.\(^2\)
2. If \(A\) is an atomic formula and \(B_1, \ldots, B_n\) are GHCIL clauses, then \(A \leftarrow B_1, \ldots, B_n\) is a GHCIL clause.
That is, GHCIL clauses are generalizations of Horn clauses that also allow embedded implications (other GHCIL clauses) in the body. For example,
\[
\text{dedicated}(P) \leftarrow (\text{working}(P) \leftarrow \text{assigned}(W, P), \text{unfinished}(W))
\]
is a GHCIL clause that is not a Horn clause. Informally, its meaning is:
For all people \(P\); \(P\) is considered a dedicated person if \(P\) is working under the assumption that there is some unfinished work \(W\) that is assigned to \(P\).
Thus, embedded implications can be seen as hypothetical statements. For a more detailed discussion of clausal intuitionistic logic, see (McCarty, 1988a, 1988b).
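As an informal illustration (our own encoding, not the paper's implementation), GHCIL syntax maps naturally onto a recursive data structure; the sketch below represents the dedicated/working clause above:

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atom:
    pred: str
    args: Tuple[str, ...] = ()

@dataclass(frozen=True)
class Clause:
    head: Atom
    # bodies may contain atoms or embedded clauses (implications)
    body: Tuple[Union[Atom, "Clause"], ...] = ()

# dedicated(P) <- (working(P) <- assigned(W, P), unfinished(W))
dedicated_rule = Clause(
    head=Atom("dedicated", ("P",)),
    body=(Clause(head=Atom("working", ("P",)),
                 body=(Atom("assigned", ("W", "P")),
                       Atom("unfinished", ("W",)))),),
)
print(dedicated_rule)
```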
In our system, there are several distinguished predicates that may occur in GHCIL clauses. One of them is **falsum**: GHCIL clauses having **falsum** as their head indicate contradictory situations. **Negation as failure** is not suitable for our purposes because it destroys the equivalence of declarative and operational semantics. Instead, our intuitionistic
---
\(1\) The acronym SLD stands for Selecting a literal, using a Linear strategy, restricted to Definite clauses (Sterling & Shapiro, 1986).
\(2\) Recall that a definite Horn clause is a clause of the form \(A \leftarrow B_1, \ldots, B_n\ (n \geq 0)\) where \(A\) and \(B_i\) are all atomic formulae.
semantics uses negation as inconsistency (Gabbay & Sergot, 1986) and interprets not(p) as an abbreviation for falsum ← p. For example, consider the following rulebase:
1: falsum ← male(X), female(X).
2: male(john).
3: female(betty).
4: male(pat).
The query ?not(male(X)) succeeds with X bound to betty, consistent with the interpretation of the query: “is there an X such that X is not male?” With negation as failure, on the other hand, this query would fail; the interpretation in that case would be: “is it not the case that there is an X such that X is male?” or, in other words, “is it the case that, for every X, X is not male?” This behavior would be inconsistent with the usual existential quantification of free variables in queries.
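The difference is easy to check mechanically. The following toy script (our illustration; it hard-codes this four-clause rulebase rather than implementing a general prover) enumerates the individuals X for which assuming male(X) yields falsum:

```python
# The four-clause rulebase, with falsum <- male(X), female(X) built in.
facts = {("male", "john"), ("female", "betty"), ("male", "pat")}
universe = {"john", "betty", "pat"}

def inconsistent(extra_fact):
    """Adding extra_fact derives falsum iff some individual ends up
    both male and female."""
    kb = facts | {extra_fact}
    return any(("male", x) in kb and ("female", x) in kb for x in universe)

# ?not(male(X)) abbreviates ?(falsum <- male(X)): find an X such that
# assuming male(X) is inconsistent with the rulebase.
answers = [x for x in sorted(universe) if inconsistent(("male", x))]
print(answers)   # ['betty']
```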
3. Expressing Control Information
Traditionally, the control flow of a logic program is specified by the static ordering of rules and goals: the programmer expresses control knowledge implicitly by taking advantage of the inference machine’s properties, e.g., its depth-first-left-right strategy (Sterling & Shapiro, 1986). This approach conflicts with our goal of expressing all information in a declarative way. A program that relies on a certain evaluation strategy of the inference engine contains information—control information—that is not reflected by a purely logical interpretation of the program.
Other common non-logical programming means of achieving efficient control of the deduction process include the PROLOG cut or the “predicates” assert, retract, and if-then-else. Such procedural constructs have declarative semantics—if any—that are different from their operational semantics. They result in a more or less imperative programming style and destroy the equivalence of procedural and declarative semantics, which is one of the main reasons for logic programming in the first place.
Meta control is a much better solution. It allows specification of control without interfering with the declarative representation of knowledge. For example, suppose we have the following declarative knowledge about a small initial segment of the ordinal numbers:
1: ord(succ(X)) ← ord(X).
2: ord(0).
3: ord(ω).
If we were to use this knowledge in a PROLOG system, we would have to reorder the rules so that Rules 2 and 3 occurred before Rule 1 in order to avoid infinite loops for existential queries such as ?ord(succ(Z)). When adding new rules (e.g., ord(ω₁)), a programmer must pay close attention to how they interact with the rest of the rules—in this case ensuring that the new rule is added before Rule 1. That is, in addition to the declarative knowledge that 0, ω, and their successors are ordinals, we must also keep in mind the correct order for the rules and the control strategy of the inference engine. In other words, object-level
information (in this case, knowledge about the structure of ordinal numbers) is intertwined with information about how to use object-level information (e.g., which rule should be used first). We call the latter control-level information.
If, instead, we separate control-level information from object-level information, we can specify the logical theory of ordinal numbers without worrying about the operational interpretation, or execution, of the theory as a logic program. In a separate set of meta control rules, we can then—again declaratively—specify the control information. The set of control rules, together with the object rules, represents a logical theory about the control of the logic program; we need only specify that Rule 1 is examined after any other rules, or we might specify that ground clauses are always to be preferred over non-ground clauses for the predicate ord/1.\(^3\)
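For illustration only (this is not the paper's meta-rule syntax), such a preference can be phrased as a selection function over named clauses, kept entirely separate from the ord/1 theory; a minimal Python sketch, in which "ground" is approximated by "fact with an empty body":

```python
# Object-level theory: name -> (head, body). Purely declarative.
object_rules = {
    "r1": ("ord(succ(X))", ["ord(X)"]),   # recursive rule
    "r2": ("ord(0)", []),                 # ground fact
    "r3": ("ord(w)", []),                 # ground fact
}

def clause_order(rule_names):
    """Meta level: prefer facts (empty body, hence ground here) over the
    recursive clause, independently of their order in the rulebase."""
    return sorted(rule_names, key=lambda r: len(object_rules[r][1]))

print(clause_order(["r1", "r2", "r3"]))   # ['r2', 'r3', 'r1']
```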
3.1 Static Control: Abstraction Levels
One method for specifying meta control in a declarative fashion is abstraction levels: static numeric annotations that describe the order in which clauses are considered during proof construction, enforcing a preference for abstract proofs over less-abstract ones. These annotations impose static, global constraints on the search for a proof. To every rule, the programmer assigns an abstraction level. For example, suppose that there are only two abstraction levels, low and high. Then any proof that uses only clauses with high abstraction levels will be preferred to any proof that uses a clause with a low abstraction level, even if the latter proof is much shorter.\(^4\)
The implementation of this scheme is straightforward; the inference engine proceeds to
a less-abstract level only if the search for a proof at the more-abstract level fails. (This
means that bad choices for abstraction levels affect only speed, and not correctness or
completeness.)
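A minimal sketch of this level-by-level loop, assuming a placeholder prove function that stands in for the underlying (complete) inference engine:

```python
def prove(goal, rules):
    """Placeholder for the object-level prover; here it just checks
    whether the goal occurs as a fact among the active rules."""
    return goal in rules

def prove_with_levels(goal, rules_by_level):
    """rules_by_level maps abstraction level (0 = most abstract) to a
    list of rules; a proof at level k may use all rules at levels 0..k."""
    active = []
    for level in sorted(rules_by_level):
        active = active + rules_by_level[level]
        if prove(goal, active):
            return level          # succeeded at this abstraction level
    return None                   # failed even with all rules in play

print(prove_with_levels("q", {0: ["p"], 1: ["q"]}))   # 1
```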
Abstraction levels are a crude form of meta control. They are static and, though global,
have a granularity at the clause level. Because of this, abstraction levels are often not
general enough. The next section presents an example that calls for dynamic meta control.
3.2 Dynamic Control: Meta Rules
In Prolog, as in many other logic-based knowledge representation systems, control information interferes with logical statements in order to achieve an efficient evaluation of huge sets of unit clauses (Sterling & Shapiro, 1986). Consider the following example (from (Beckstein et al., 1996)).
\[
\begin{align*}
grandparent(X, Y) &\leftarrow \text{var}(Y), !, \text{parent}(X, Z), \text{parent}(Z, Y). \\
grandparent(X, Y) &\leftarrow \text{parent}(Z, Y), \text{parent}(X, Z).
\end{align*}
\]
This example shows how efficiency considerations that have nothing to do with the declarative meaning of the logic program complicate the code. Expressing efficient control strategies for logical theories that are more complex than grandparent requires increasingly baroque and hard-to-understand coding.
\(^3\) A predicate \(p\) of arity \(n\) is denoted \(p/n\).
\(^4\) However, the programmer will typically assign abstraction levels to the rules in such a way that short
proofs are also abstract proofs.
In our system, this kind of implicit control information is not necessary. We simply express the logical fact by the clause
\[
\text{grandparent}(X, Y) \leftarrow \text{parent}(X, Z), \text{parent}(Z, Y).
\]
In order to ensure an efficient evaluation, we specify that the subgoal that contains the ground argument must be evaluated before the subgoal that contains the variable:
\[
\begin{align*}
\text{before}(L_1, L_2) \leftarrow{} & \text{goal}(L_1, \text{parent}(X, Y)), \\
& \text{goal}(L_2, \text{parent}(Y, Z)), \\
& \text{ground}(X), \text{var}(Z). \\
\text{before}(L_2, L_1) \leftarrow{} & \text{goal}(L_1, \text{parent}(X, Y)), \\
& \text{goal}(L_2, \text{parent}(Y, Z)), \\
& \text{ground}(Z), \text{var}(X).
\end{align*}
\]
At first sight, the PROLOG formulation seems shorter and simpler. We argue that the number of characters needed is not a good measure of complexity. The order of clauses and goals and—more importantly—the cut in the PROLOG program implicitly contain critical, complex information that is made explicit in our meta program. Furthermore, in our solution, the meta theory is conceptually and literally separated from the object-level theory. Moreover, the operational semantics of the program are equivalent to the declarative semantics of the object-level theory.
The meta predicate \text{before}/2 allows us to specify control information in a clean fashion, separately from the logical theory about parents and grandparents. Other control predicates that our system makes available to the programmer are \text{notready}/1 and \text{hot}/1 for the selection of subgoals to be resolved and \text{clauseorder}/2 for the selection of the resolving clause. When the inference engine chooses the next subgoal to be resolved, it determines the minimal elements of the partial order defined by \text{before}/2. Subgoals that are proved to be \text{notready}/1 may not be chosen; within these constraints, \text{hot}/1 subgoals receive priority. The rule
\[
\text{clauseorder}(H, [N_1, \ldots, N_m]) \leftarrow B_1, \ldots, B_n.
\]
states that clauses whose names belong to \(N_1, \ldots, N_m\) must be selected in that order for the next inference step if the selected subgoal is an instance of \(H\). The meta predicates \text{clause}/2 and \text{goal}/2 establish names for clauses and currently active subgoals.\footnote{The meta predicates \text{var}/1 and \text{ground}/1 have the usual meaning. Since they have no first-order declarative semantics, they can, in fact, destroy the equivalence of declarative and operational semantics of the program if they appear outside the meta level and should therefore be used with care. See the discussion in the Related Work section.}
If the meta rules do not completely specify the control decisions, the default control is from left to right.
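To make the selection step concrete, here is a simplified sketch (our own; subgoals are plain strings and the control facts are given as precomputed sets rather than derived by meta rules):

```python
def select_subgoal(goals, before, notready, hot):
    """goals: subgoal names in textual order; before: set of (a, b) pairs
    meaning a must be resolved before b; notready, hot: sets of names."""
    ready = [g for g in goals if g not in notready]
    # minimal elements of the partial order induced by before/2
    minimal = [g for g in ready
               if not any((h, g) in before for h in ready if h != g)]
    preferred = [g for g in minimal if g in hot]
    candidates = preferred or minimal
    return candidates[0] if candidates else None   # default: leftmost

goals = ["parent(X,Y)", "parent(Y,Z)"]
# suppose Z is ground and X is a variable: resolve parent(Y,Z) first
before = {("parent(Y,Z)", "parent(X,Y)")}
print(select_subgoal(goals, before, set(), set()))   # parent(Y,Z)
```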
The semantics of all meta predicates follows that of (Beckstein et al., 1996); we refer the reader to that paper for the details of the declarative and procedural specification of the meta predicates.\footnote{An electronic copy of (Beckstein et al., 1996) is available at http://www.ksl.stanford.edu/people/stolle/Papers/meta96.pdf, ps.}
4. Explicit Representation
If an inference engine is integrated in a multi-modal reasoning system, other—non-logic-based—reasoning modules must have access to previously derived knowledge: everything that has been successfully inferred so far. Resolution provers that do not remember previously derived knowledge only maintain the current root path of the search tree, which represents the (partial) proof tree of the current proof attempt. The advantage of maintaining only the root path is its linear space requirement; the disadvantage is that already-proven results must be rederived every time they occur on different root paths. Trading space for time, many problem solvers use some kind of caching of inferences in order to avoid duplication of effort, to generate explanations, and to guide backtracking or control (Forbus & de Kleer, 1993). This approach becomes particularly important when—as in the application example in Section 6 of this paper—the derivations of some formulae require the invocation of other reasoning modules and are therefore very expensive.
In this section, we describe, in some detail, the form of caching used in our logic system and the explicit representation that is necessary to achieve it.\footnote{A graphical user interface (GUI) also takes advantage of the explicit internal state of the inference engine: the engine is easily interrupted and restarted and the GUI allows for examination of the current state of the search.} Since our approach can be viewed as a very simplified form of truth maintenance, we briefly discuss the similarities and differences between this approach and traditional truth maintenance systems (TMSs) and the motivation behind our choices. Finally, we describe how caching and explicit representation of the inference state are integrated with abstraction levels and dynamic meta control.
4.1 Implementation
The system described in this paper is implemented in SCHEME. The state of the inference engine is encapsulated in a stack. The elements of this inference stack are either “choice points” or inference stacks themselves. A choice point is any place in the search tree where a decision must be made. For example, assume we are trying to prove the subgoal \( p(X) \). Furthermore, assume the following two clauses in the database are the only ones whose heads unify with \( p(X) \).
\[
\begin{align*}
1: & \quad p(Y) \leftarrow q(Y), r(Y). \\
2: & \quad p(a) \leftarrow q(a).
\end{align*}
\]
Then the system will create a choice point for the goal \( p(X) \) that contains the matching clauses 1 and 2. Choice points also keep track of the corresponding bindings.
The typical (and elegant) way of programming a resolution inference engine is to call the engine recursively to resolve the subgoals of the current goal. However, if we were to actually do a recursive SCHEME call, the explicit representation of the inference engine state would be lost, as it would be embedded in the SCHEME call stack. Instead, to keep the state explicit, we do a “pseudo-recursive” call by pushing a new inference stack onto the old one and then using only the new inference stack until this simulated recursive call succeeds or fails. In the latter case, we throw away the new stack; if the simulated recursive
call succeeds, we keep the new inference stack on the old one so we can backtrack into the
call if necessary. We then continue the inference process, using the old inference stack
and pushing new choice points (or inference stacks) above the other inference stack.\(^8\)
Our inference engine handles normal clauses in a straightforward Prolog fashion. To
handle embedded implications, we do a pseudo-recursive call to the inference engine, adding
the formulae in the body of the embedded implication to the current assumptions and setting
the current goal to be the head of the embedded implication.
Since the state of the inference engine is encapsulated in a single explicit data structure,
it is trivial to interrupt and resume the inference task: if the inference engine is interrupted,
it simply returns the inference stack. To restart, it is only necessary to call the inference
engine and pass the inference stack back in.
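The following Python sketch is a loose analogue of this design (names are invented; the actual system is written in SCHEME). It shows the reified state and the pseudo-recursive call for embedded implications:

```python
class ChoicePoint:
    """A decision point: a subgoal plus the clauses (with bindings)
    still available to resolve it."""
    def __init__(self, goal, clauses):
        self.goal = goal
        self.clauses = list(clauses)

class InferenceStack:
    """Explicit engine state: items are choice points or nested stacks."""
    def __init__(self, parent=None, goal_continuation=None, assumptions=()):
        self.items = []
        self.parent = parent
        self.goal_continuation = goal_continuation
        self.assumptions = tuple(assumptions)

def pseudo_recursive_call(stack, head_goal, body_assumptions):
    """For an embedded implication (head <- body): instead of recursing
    in the host language, push a fresh stack, so the entire state stays
    one inspectable value that can be interrupted and resumed."""
    inner = InferenceStack(parent=stack,
                           goal_continuation=head_goal,
                           assumptions=body_assumptions)
    stack.items.append(inner)
    return inner   # the engine works on `inner` until it succeeds or fails

root = InferenceStack()
inner = pseudo_recursive_call(root, "B", ["C"])   # proving (B <- C)
print(len(root.items), inner.assumptions)         # 1 ('C',)
```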
4.2 Sample Inference
A flow chart of the inference process is shown in Figure 1. The symbols 1, 2, 3, 4, and
5 identify the important internal states of the inference engine.
To illustrate how the inference engine operates, we step through the inference process
for a simple query. Let the database of clauses be given by
1: \( A \leftarrow (B \leftarrow C), D \).
2: \( A \leftarrow X \).
3: \( B \leftarrow F \).
4: \( B \leftarrow X \).
5: \( D \leftarrow E \).
6: \( E \).
7: \( F \leftarrow C \).
Suppose the query is \(?A\). The initial conditions of the inference engine are shown in Figure 2.
The top of the stack, \( S \), is empty and the goal set \( G \) contains only the query formula: \{ \( A \) \}. The
engine starts in State 1. In this state, the meta control\(^9\) for the engine selects \( A \) (the
only choice in this case) as the next goal to prove. A choice point (denoted c.p. in the figure)
is created from all clauses in the database and all assumptions in the current assumption
set that unify with the selected formula, \( A \), and this choice point is pushed onto \( S \). The
state becomes 2.
In Figure 3 the inference engine is in State 2, so a choice point is popped off the stack
and a clause is selected to try (again, the meta control makes this decision). In this case,
the chosen clause is \( A \leftarrow (B \leftarrow C), D \); the remainder of the choices are pushed back onto
\( S \), the state is changed back to 1, and the goal set \( G \) becomes \{ \( (B \leftarrow C), D \) \}. Again, the
---
8. There are other methods for handling the need for recursive calls. Another embedding of Prolog into
Scheme (Haynes, 1987) used the fact that Scheme has first-class continuations to enable backtracking
through recursive calls, and used continuations for non-blind backtracking or "lateral" control transfers.
We avoided using continuations because, although they do provide a handle into the Scheme control
stack, they are still not explicit enough—continuations are opaque. Our approach of reifying the control
explicitly also allows for non-blind backtracking, though we did not implement it in our system.
9. The meta control module selects goals and clauses according to the programmer's meta control rules,
which are described in Section 3.2 of this paper.
[Figure 1 (flow chart of the inference engine): In State 1, the engine selects a formula from the goal set G (meta control invoked here); if G is empty, it moves to State 5. An embedded implication leads to State 3; otherwise the engine determines the clause order, pushes a new choice point on S, and enters State 2. In State 2, if S is empty the engine moves to State 4; otherwise it pops a choice point from S, selects a clause (meta control invoked here), pushes the remainder of the choice point back on S if non-empty, and merges G with the clause body. State 3 creates a new inference stack, saves the assumptions and goal continuation, pushes the stack on S, and sets G to the new goal. In State 4, the engine returns to the parent stack, reporting FAILURE if none exists; in State 5, it restores G from the goal continuation and returns to the parent stack, reporting SUCCESS if none exists.]
Figure 1: Flow-chart of the inference engine.
meta control selects a formula from the goal set, namely $B \leftarrow C$. Since this is an embedded implication, the engine proceeds to State 3 above.
In State 3, a new stack is pushed onto the old stack. The old goal set (called “goal continuation” in the figure), \{D\}, and a pointer to the old stack top are saved. The head of the embedded implication becomes the (only) goal in the new goal set $G$, and the formulae in the body of the implication (called “assumptions” in the figure) are temporarily added to the rule base. Finally $S$ is set to the top of the new stack and the state becomes 1.
Next (Figure 5), the (only) formula from the goal set is selected, a choice point is pushed and the current state becomes 2. Notice that this choice point is pushed onto the inner stack that was created in State 3 above.
In Figures 6–8, the inference engine progresses through States 1, 2, and 1 until $G$ becomes empty (Figure 9). Then, the inference engine proceeds to State 5. (This means that the subgoal $B \leftarrow C$, which caused the creation of the second inner stack, was successful.) Then $S$ is set back to the parent stack and $G$ is reset to the old goal continuation \{D\}. The current state becomes 1 as shown in Figure 10.
The meta control selects a goal from $G$ and a new choice point is pushed onto $S$. Note that the current stack is now equivalent to the original, outer-most one (Figure 11). The inference engine continues this process until either State 4 or State 5 is reached (failure or success, respectively), with $S$ pointing to the original outer-most stack.
The explicit representation of embedded call stacks described and illustrated in this section is important for two reasons. First, it allows previously derived knowledge to be reused and passed around to other (possibly non-logical) reasoning modules. Second, the graphical user interface has access to the complete search tree. These two features are of crucial importance for a multi-modal reasoning system, an example of which we describe in Section 6 of this paper.\footnote{A third advantage of an explicit representation is that the meta control can choose between all subgoals in the call stack rather than just the subgoals of the inner-most stack. In this case, nested implications are not necessarily evaluated before the goal in which they are embedded. However, our implementation does not take advantage of this possibility. Currently, the reasoner always finishes embedded subgoals before returning to the embedding goal.}
Figure 5: Inference engine, state=1
Figure 6: Inference engine, state=2
Figure 7: Inference engine, state=1
Figure 8: Inference engine, state=2
Figure 9: Inference engine, state=1
Figure 10: Inference engine, state=1
Figure 11: Inference engine, state=2
In the next section we describe how available knowledge is reused in the inference process and how it is passed to non-logical reasoning modules.
4.3 Making Derived Knowledge Explicit
We maintain previously derived knowledge for two reasons. First, we want to be able to pass knowledge explicitly to other reasoning modules. Second, we want to avoid duplication of effort. Formulae that have been proved are stored in a database for reuse in later proofs or subproofs. The second reason is particularly important where the proof of a formula involves calls to other modules; these calls are typically expensive and should not be done more often than necessary.
In order to attain both of these goals, we maintain a database (implemented as a hash table) of previously derived formulae. This database contains formulae that have been proved in the current inference process, even if these formulae are not in the current proof tree, i.e., even if they are on a branch of the search tree that failed. However, because it is impractical to store everything that has been proved, we only cache predicates that the programmer declares as relevant, using the meta predicate relevant/1 (Beckstein & Tobermann, 1992). (This would typically include those in which multiple modules are interested and those that are expensive to evaluate.)
Every time the proof of a relevant formula is completed, the database is updated. If there are no active assumptions, the proven relevant (atomic) formula is simply added to the database as is. If, however, we are currently in the middle of the proof of an embedded implication (which means that the set of current assumptions is not empty), the proven formula might be true only relative to some of the active assumptions. Therefore, we collect the assumptions that have been used since the start of the inference process for the relevant goal. If this set of used assumptions is empty, the relevant formula is stored as an atomic formula in the database. If the set of used assumptions is non-empty, we store a non-atomic formula, namely an implication built from the relevant formula and the used assumptions.
These cached formulae are then used to speed up calls to the same subgoals in later proof attempts. They are used as if they were program clauses whenever a resolving clause must be chosen for a given subgoal (State 3). They are added at the front of the rule base, i.e., they receive priority unless the meta control decides otherwise.
In the database, we store only the most-general forms proved so far: if we prove \( A \) and \( B \) and \( A\theta = B \) for some substitution \( \theta \), then we store only \( A \). This amounts to \( \theta \)-subsumption (van der Laag, 1995) in the case of atomic formulae. In the case of embedded goals (implications) this is only a crude form of caching: handling full \( \theta \)-subsumption in this general case is NP-complete (Garey & Johnson, 1979). However, our (seemingly ad hoc) form of caching does exactly the right thing: since calls to expensive modules typically appear statically in only a few rules, an expensive call is rarely subsumed by previous calls without being detected by a purely syntactic check for generalization or specialization. A full subsumption check would add much complexity with little gain.
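A compact sketch of this cache (our simplification: formulae are ground strings, and the theta-subsumption check is reduced to exact matching, in the spirit of the crude-but-sufficient policy described above):

```python
# Cache of derived relevant formulae, conditioned on used assumptions.
cache = {}   # formula -> set of assumption sets under which it was proved

def record(formula, used_assumptions):
    """Store a proven relevant formula; with a non-empty assumption set,
    this effectively caches the implication  formula <- assumptions."""
    cache.setdefault(formula, set()).add(frozenset(used_assumptions))

def lookup(formula, current_assumptions):
    """A cached entry applies if its assumptions all hold currently."""
    current = set(current_assumptions)
    return any(needed <= current for needed in cache.get(formula, ()))

record("chaotic(ts)", [])            # proved outright: cached as a fact
record("working(p)", ["assigned(w, p)", "unfinished(w)"])
print(lookup("chaotic(ts)", []))                                   # True
print(lookup("working(p)", []))                                    # False
print(lookup("working(p)", ["assigned(w, p)", "unfinished(w)"]))   # True
```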
A similar complexity trade-off motivated our decision not to use a full truth maintenance system (de Kleer, 1986; Forbus & de Kleer, 1993). In many problem solvers, TMSs provide an elegant solution to reasoning using beliefs, assumptions, and contexts. Maintaining labels (minimal sets of sufficient assumptions, in the case of an ATMS) brings complexity that is unnecessary for our purposes. Instead, we provide the programmer with the meta-predicate relevant/1, which is an appropriate tool to maintain just enough information to be able to pass all relevant current knowledge to other modules while avoiding duplicated work in evaluating time-intensive predicates.\(^\text{11}\) The following example illustrates how the "caching technique" described in this section facilitates efficient interaction with non-logical reasoning modules. Consider the following program fragment from the domain of ODE theory.
\[
\begin{align*}
\text{falsum} & \leftarrow \text{time\_series}(T), \text{chaotic}(T), \text{periodic}(T). \\
\text{falsum} & \leftarrow \text{time\_series}(T), \text{chaotic}(T), \text{linear}(T). \\
\text{chaotic}(T) & \leftarrow \text{time\_series}(T), \text{expensive\_test}(T, \text{chaotic}). \\
\text{periodic}(T) & \leftarrow \text{time\_series}(T), \text{expensive\_test}(T, \text{periodic}). \\
\text{linear}(T) & \leftarrow \text{time\_series}(T), \text{expensive\_test}(T, \text{linear}). \\
\text{time\_series}(ts).
\end{align*}
\]
Suppose that \(ts\) is an experimental time series that happens to be chaotic (hence non-periodic). Consider the query \(\text{falsum} \leftarrow \text{linear}(ts)\) whose interpretation is: "is the time-series non-linear?"\(^\text{12}\) If we assume a depth-first-left-right strategy, the system evaluates the formulae in the following order:
\[
\begin{align*}
&\text{not}(\text{linear}(ts)) \\
&\text{falsum} \leftarrow \text{linear}(ts) \\
&\text{falsum} \\
&\text{chaotic}(ts) \\
&\text{falsum} \leftarrow \text{expensive\_test}(ts, \text{chaotic}) \\
&\text{periodic}(ts) \\
&\text{falsum} \leftarrow \text{expensive\_test}(ts, \text{periodic}) \quad \text{fails} \\
&\text{falsum} \\
&\text{chaotic}(ts) \\
&\text{expensive\_test}(ts, \text{chaotic}) \quad \text{succeeds} \\
&\text{linear}(ts)
\end{align*}
\]
The system does not do the numeric test for linearity because we are assuming \(\text{linear}(ts)\) in the query. Notice that the numeric test for chaoticity is evaluated twice, even though it only needs to be done once. For efficiency, we need to cache the result of this evaluation after the first call so that on the second call the inference system can simply report failure or success without actually doing the expensive numeric test a second time.
---
\(^{11}\) For the case of logic-based truth maintenance systems (LTMS), Everett and Forbus (Everett & Forbus, 1996) have shown that freeing facts for garbage collection can often be used to find the right space/time trade-off. As an alternative solution, we are investigating the notion of "sparse truth maintenance."
\(^{12}\) The formula \(\text{linear}(ts)\) represents the fact that all data points of the time series \(ts\) lie on a line (modulo some specified resolution). The ODE rule used in this example is: a linear \textit{behavior} is neither a periodic behavior nor a chaotic behavior. This ODE rule is much narrower and much more limited than the more general rule that a linear \textit{ODE system} (represented by the formula \textit{linear-system}(current-model)) cannot be chaotic.
Caching intermediate results is crucial in order to avoid duplication of effort if a formula appears multiple times in a search tree. The overhead of the cache is negligible compared to the saved computation time. Storing or retrieving a formula from a hash table takes a fraction of a second, but the computation that establishes such a formula (e.g., an expensive numerical test) may take several seconds or even minutes.
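In Python terms, the saving corresponds to memoizing the call into the numeric module; a toy sketch in which sleep stands in for the numeric work:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_test(series, prop):
    """Stand-in for a costly call to a numeric module (e.g. a chaos test)."""
    time.sleep(0.5)                # simulate seconds of numeric computation
    return (series, prop) == ("ts", "chaotic")

expensive_test("ts", "chaotic")    # pays the full cost once
expensive_test("ts", "chaotic")    # answered from the cache, near-instant
```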
This mechanism is critical to the efficiency of any system that uses this framework. The program in which we have tested this inference system, for example, incorporates a large variety of heterogeneous reasoning modes: symbolic reasoning, geometric reasoning, qualitative simulation, parameter estimation, and numerical simulation. Geometric reasoning and qualitative simulation are orders of magnitude more expensive than simple symbolic checks, and parameter estimation and numerical simulation are even more expensive. Therefore, the term “caching” may be misleading for the inference engine’s technique of storing and reusing previously derived formulae. The caching mechanism is not merely a matter of making the program more efficient by a small percentage. It makes heterogeneous reasoning feasible.
4.4 Integration of the Three Goals
The previous section explains how our implementation maintains derived knowledge, thereby allowing that knowledge to be passed to other modules. In this section, we describe where and how the solutions that achieve the other goals of the work described in this paper—equivalence of declarative and operational semantics, and declarative representation of control information—fit into this picture.
Relevant formulae are handled by the inference engine in the same way as embedded implications are. Conceptually, a new incarnation of an inference process tries to finish a proof of the relevant subgoal before other subgoals receive attention. In State 1 of Figure 1, for example, if the selected formula is deemed relevant, the inference engine passes to a state similar to State 3, where a new stack is pushed onto the old stack. The new goal set contains only the relevant formula, and the engine goes back to State 1. Later, when State 5 or State 4 is reached (success or failure in proving the relevant formula, respectively), the engine will, before resuming the inference, store the result of the pseudo-recursion, as described in Section 4.1. This means that declaration of relevance takes priority over control decisions that are specified by meta rules. The advantage of this approach is that it is easy to keep track of when a relevant formula has been proved.
The inference engine handles the abstraction levels by iterating from the most-abstract level to less-abstract levels. Abstraction levels are identified by the programmer, who assigns a natural number (an “abstraction level number”) to each clause. For example, in the domain of ODE modeling, the abstraction levels are used to express static control knowledge of the type: “In general, try to build proofs involving qualitative properties of candidate ODE models before building proofs involving numeric properties.” First, only the clauses on the most-abstract level are considered. If this proof attempt fails, the clauses from the next abstraction level are added, and so on, until the proof succeeds or all levels are exhausted.
13. Gallaire and Lasserre (Gallaire & Lasserre, 1982) achieve a similar effect using the predicate finish.
Maintaining the database of derived knowledge reduces duplication of effort that would occur when knowledge has to be rederived in later iterations. Avoiding duplication of effort is, in general, crucial to all inference tasks that involve expensive proofs or repeated calls to time-consuming reasoning modules.
The meta-level control strategy is integrated into the inference engine at two points: when a subgoal is selected to be resolved (in State 1) and when a resolving clause is selected (in State 2). These two points are marked "Meta-control invoked here" in the inference engine flow-chart (Figure 1). In order to select a subgoal, the inference engine is called recursively\(^\text{14}\) to evaluate all notready-, before-, and hot-rules (see Section 3.2) that apply to the current situation. We call the facts that are proved by these evaluations of meta rules the current control facts. From the current goal, the meta control chooses a subgoal that meets all constraints imposed by the current control facts. In order to select a clause, the meta control is only consulted the first time the inference engine reaches this choice point. Again, the meta control evaluates all clauseorder-rules that apply to the current situation in order to derive the current control facts. The meta control then determines an ordering of all matching clauses that meets all constraints that are expressed by the current control facts. The first clause in this ordering is chosen to resolve the current subgoal of the object-level proof. If the same choice point is reached again later via backtracking, the other clauses can be used in the already-determined order; meta control need not be invoked again. Please consult (Beckstein et al., 1996) for a formal description of the semantics of the control predicates.
5. Correctness and Completeness
Generalized Horn Clause Logic is intuitionistically equivalent to a certain subset of McCarty's Clausal Intuitionistic Logic (McCarty, 1988a, 1988b). According to Tobermann (Tobermann, 1994), the calculus of generalized Horn clauses upon which our theorem prover is based is logically sound and complete. Since the prover performs depth-first search, it is combinatorially incomplete in the same way as PROLOG is: it cannot effectively find a proof for a logical consequence of the theory represented by the program if its derivation is hidden by an infinite path in the search tree. The introduction of control rules into generalized Horn clause logic does not affect the soundness of the proof procedure. Control rules cannot "generate" new solutions that are not logical consequences of the logic program.
Control rules for the selection of subgoals preserve not only correctness but also completeness. Tobermann (Tobermann, 1994) has also shown that the selection function for a RISC-type prover may perform arbitrary computations. The only condition that the selection function has to meet in order to preserve completeness is that it must be a total function that selects one of the current subgoals. Ordering of clauses does not affect the logical completeness.\(^\text{15}\) It does, however, affect combinatorial completeness; a different order may make the prover follow an infinite path before it finds some logical consequence of the program. One of the intended usages of the meta predicate `clauseorder/2` is—in addition to efficiency considerations—to (dynamically) determine a combinatorially complete clause order. Given the meta control predicates described in Section 3.2, this can be effected by the programmer in an easy and intuitive way.
---
\(^{14}\) A recursive call to the inference engine allows the full generality of the theorem prover to be used for meta control in a simple and elegant fashion.
\(^{15}\) In our system, clause ordering only decides when a clause is applied, not whether it is applied. This stands in contrast to other information prioritization systems, in which prioritization amounts to an exclusive choice between possibly conflicting pieces of information (Pradhan, Minker, & Subrahmanian, 1995).
The calculus that results from adding the abstraction level mechanism to the RISC-type prover is also correct and complete. First, consider correctness. More-abstract reasoning only takes away solutions of the program; it never adds new solutions. Thus, the resulting calculus is correct. Completeness is somewhat more subtle. The completeness of the underlying inference engine implies that the reasoning process is complete relative to the set of rules that are in use. However, reasoning performed at a more-abstract level is typically incomplete with respect to a less-abstract level. This is exactly our intention: to mask out logical consequences of the program that lead to too-detailed reasoning too early. Since queries ultimately fail only after the inference engine has considered all rules at all abstraction levels, the overall process is complete.
Both correctness and completeness are also preserved by the caching mechanism. A formula is only stored if it has been proved. Since the knowledge base does not change during the evaluation of a query, a stored formula remains true for the whole evaluation process and can thus be reused. A formula that is true only with respect to an extended context (that is, a set of assumptions) is stored as an implication whose body consists of those assumptions. These conditional formulae are also valid and do not affect correctness. Likewise, completeness remains unaffected by the caching mechanism since no rules are removed from the logic program; rather, the cache is added in front of the logic program. Solutions may be found in a different order, however; currently, they are also found multiple times if the theorem prover first uses cached results and later also uses the corresponding original rules. This only poses a problem if the user asks for several proofs of a query, or if excessive backtracking occurs within a proof. A version of the cache manager that avoids even these duplication problems is currently under construction. In that version, every cached formula will maintain a pointer to the rules from which it was derived, along with some other book-keeping information.
In the application example described in the next section, the inference engine’s task is to find the first proof of the query `falsum`.
6. An Example
The logic system presented in this paper has successfully been used as a knowledge representation and reasoning framework in the domain of ODE theory. The program PRET (Stolle & Bradley, 1998; Bradley et al., 2001) automates system identification (Ljung, 1987): given hypotheses, observations, and specifications, it constructs an ODE model of a black-box dynamical system. PRET uses the given hypotheses to construct a sequence of candidate models and checks each candidate against the observations. The first candidate that passes this check is returned as the answer. In this section, we describe how PRET employs our logic system to perform this model check.
PRET’s knowledge base encodes ODE theory in GHCIL clauses. The person who implements or maintains this knowledge base will presumably be an expert in engineering—not logic programming—so the declarative representation of knowledge without the use of “hack-type” efficiency side effects is crucial. The concept of negation as inconsistency is ideal for this application: the candidate model checker combines the observations about the target system, the observations about the candidate model, and the ODE theory into one set of clauses and then checks that set for consistency, i.e., tries to derive falsum from it. This instantiates PRET’s opportunistic paradigm: a candidate that provides no reason for an inconsistency is considered a good model.
Checking candidate models against the given observations poses a difficult reasoning control problem, but one that can be solved elegantly using the framework described in this paper. The model checker makes use of several non-logic-based modules, e.g., the commercial symbolic algebra package Maple (Char, Geddes, Gonnet, Leong, Monagan, & Watt, 1991), a simple qualitative envisioning module, a nonlinear numerical parameter estimator (Bradley, O’Gallagher, & Rogers, 1998), and a geometric reasoner for intelligent data analysis (Bradley & Easley, 1998). Calls to these modules require knowledge to be passed to them explicitly. By declaring the appropriate predicates as “relevant,” the PRET knowledge engineer instructs the inference engine to make the appropriate pieces of knowledge available. Different reasoning techniques vary considerably in their cost. Symbolic techniques are usually quick and cheap; the order of an ODE, for example, can be established within a fraction of a second. Semi-numeric and numeric techniques take much longer. The time taken by a call to PRET’s parameter estimation module, for example, ranges between a couple of seconds and several minutes. What PRET needs in order to manage the complexity of its task—finding an ODE model for a given dynamic system—is the ability to dynamically orchestrate application of its ODE rules and the various reasoning modes that are triggered by the resulting evaluations, all in a manner that leads to the quickest possible test of a given model.
The three techniques described in this paper—abstraction levels, dynamic meta control, and reuse of previously derived formulae—achieve exactly this intelligent orchestration of reasoning modes. We use the concept of abstraction levels (see Section 3.1) to direct the search for an inconsistency toward a quick, abstract proof. For example, qualitative reasoning rules are assigned a more-abstract level than rules that encode numerical reasoning. As a result, PRET tries to discard models by purely qualitative means before resorting to numerical techniques. In other qualitative reasoning systems that work with different abstraction levels (e.g., (Yip & Zhao, 1996; Mosterman & Biswas, 1997)), the levels are implicitly defined by the system’s architecture and data structures. In PRET, every rule is explicitly assigned an abstraction level number. Similarly, the logic engine’s dynamic control is used in PRET to guide the search toward a cheap and quick proof of falsum. Rules that are likely to lead to a contradiction are chosen before other rules, and subgoals that are likely to fail quickly are evaluated before other subgoals. As an example, consider the following (simplified) program.
\[
\begin{align*}
\text{stable} &\leftarrow \text{linear, all roots in left half plane.} \\
\text{stable} &\leftarrow \text{non-linear, stable in all basins.} \\
\text{hot}(L) &\leftarrow \text{linear, goal}(L, \text{stable}).
\end{align*}
\]
In this example, the control rule specifies that reasoning about the system’s stability should be done early on if that reasoning is known to be cheap, e.g., if the system is known to be
linear. Stability reasoning does not get priority, however, in the nonlinear—expensive—case. The domain-specific reasoning behind this control flow is as follows: A linear dynamical system has a unique equilibrium point, and the stability of that point—and therefore of the system as a whole—can be determined by examining the system's eigenvalues, a simple symbolic manipulation of the coefficients of the equation. Nonlinear systems can have arbitrary numbers of equilibrium sets. These attractors are expensive to find and evaluate. Thus, if a system is known to be linear, its overall stability is easy to establish, whereas evaluating the stability of a nonlinear system is far more complicated and expensive. The framework described in this paper not only makes it easy for a domain expert to specify this kind of knowledge, but also turns that knowledge to advantage in an elegant and powerful way. PRET's meta theory, which captures key concepts in differential equations and dynamical systems, allows the inference system to take advantage of the dynamic dependencies described above. The major advantage of this approach is that PRET's control knowledge is separated from the ODE theory and does not interfere with the ODE theory's declarative semantics. This example illustrates how control information that originates in a domain expert's understanding of the application domain can be expressed cleanly and intuitively using the logic system described in this paper.
PRET's graphical user interface (GUI) also takes advantage of the explicit state representation of the inference engine. For example, the GUI allows the user to interrupt the inference engine at any time. Having interrupted the computation, the user is able to examine the inference engine's state, restart the computation, or even save the computation in progress and start a new computation. The GUI displays the knowledge that has been derived so far, proof trees for this derived knowledge, and the proof tree in progress. Whereas in traditional logic systems proof trees are usually built by meta interpreters, in PRET this task is trivially implemented using the explicit state representation.
Even though the computational complexity of PRET's model checker has not yet been formally analyzed, experiments (e.g., (Easley & Bradley, 1999; Bradley et al., 2001)) show that it performs well on engineering textbook problems. The recursive call of the inference engine that evaluates the bodies of control rules may be viewed as a potential source of complexity, or even of infinite loops. In practice, however, the proofs of control rules bottom out quickly. The only use of embedded implication so far has been the interpretation of not/1 as negation as inconsistency.
Recently, there has been an interesting discussion in the AI community about whether there is any need for domain-dependent control information. Theoretically, there is no need for domain-dependent control, because control knowledge can be factorized into domain-independent control information and domain-dependent modal information (Mendelson, 1964) that encodes the structure of the search space (Ginsberg & Geddis, 1991). While this elegant result is true for logic programming in general, the PRET project (and other projects as well, e.g., (Minton, 1996)) is a prime example of an application that requires a different approach. Having to think about control in terms of the structure of the search space is exactly what we want to avoid. The implementer of the knowledge base should instead approach it from the viewpoint of his/her domain: which rules are more abstract than others, which rules or goals trigger expensive calls to other packages, and so on.
7. Context and Related Work
The work described in this paper draws upon ideas and techniques from several areas of mathematics, engineering, and computer science; citing more than the few most important and/or most closely related publications in each of these areas would yield an excessive bibliography. In this section, we mention only the most closely related publications from the large body of literature on meta-level systems and control. References to related work that appear in the body of the paper will not be repeated here.
Some of the earliest work on meta control includes (Gallaire & Lasserre, 1979; Davis, 1980; Gallaire & Lasserre, 1982; Dincbas & Le Pape, 1984; Devanbu, Freeland, & Naqvi, 1986). More recently, implemented logic programming languages (e.g., (Hill & Lloyd, 1994; Beckstein et al., 1996; Grosof, 1997)) have been influenced by these ideas. Furthermore, automated planning systems (e.g., (Stefik, 1981; Carbonell, Blythe, Etzioni, Gil, Joseph, Kahn, Knoblock, Minton, Perez, Reilly, Veloso, & Wang, 1992; Barrett, Christianson, Friedman, Kwok, Golden, Penberthy, Sun, & Weld, 1995)) typically employ meta-level decision-making. The planning system TLPlan uses temporal logic to express control information (Bacchus & Kabanza, 2000). The constraint-based framework of Satplan and Graphplan allows such rules to be compiled into these planners (Huang, Selman, & Kautz, 1999). An answer-set programming approach to the domain-dependent control of planners was presented in (Son, Baral, & McIlraith, 2001). Moreover, the notion of specifying control has also been applied to the situation calculus (Lin, 1999).
In a different, but related, branch of the literature—called strategic proof planning—strategies that guide the proof search are explicitly represented as plans and dynamically refined during the theorem proving process (Bundy, 1996). This approach adds a form of explicit global control to the low-level, local, tactical control decisions of a theorem prover. Work in this area spans from the earliest research by Bundy (Bundy, 1987) to contemporary theorem provers and integrated mathematical assistants (e.g., (Benzmüller, Cheikhrouhou, Fehr, Fiedler, Huang, Kerber, Kohlhase, Konrad, Melis, Meier, Schaarschmidt, Siekmann, & Sorge, 1997; Melis & Siekmann, 1999; Castro & Borovansky, 2000; Hutter, 2000; Melis & Meier, 2000)).
The Automated Deduction community has produced a large body of systems and literature on tactical and strategic control of deduction (see, for example, (Gramlich, Kirchner, & Pfenning, 2000)). Denzinger, Fuchs and Fuchs (Denzinger, Fuchs, & Fuchs, 1997) describe a system that learns to re-enact previously successful proof attempts in the domain of purely equational theorem proving. The system finds solved problems that are analogous to the current problem and adapts the corresponding known proof. The search is distributed across a multi-agent architecture whose selection strategies and heuristics are expressed declaratively, using domain-specific terms.
There are important differences between the control of theorem provers in the field of automated deduction and the work presented in this paper. In automated deduction, proving theorems is the primary aim of the system. Our system, on the other hand, is designed to facilitate the expression of control knowledge in the context of a particular application domain. Theorem proving is just the vehicle, not the goal. In the application described in the previous section, for example, the formulation of meta-level control knowledge is crucial to the effective orchestration of a heterogeneous reasoning process: automated
system identification. The underlying mathematical theory of system identification—ODE theory—is expressed as a logical theory; hence, the control of PRET's reasoning is an instance of control of automated deduction. However, PRET employs only one theorem prover. Unlike automated deduction systems that orchestrate a suite of theorem provers (e.g., (Denzinger & Fuchs, 1999)), PRET orchestrates a suite of non-logic-based reasoning modules by deciding when to invoke which one and by making the results available to other modules in the form of logical formulae.
In PRET’s logical paradigm (Stolle & Bradley, 1996), the invocation of modules is triggered by the evaluation of resolution goals. Therefore, some of the control information that is expressed through the dynamic ordering of goals and clauses corresponds to what in automated deduction is called strategic information. This uniform framework of expressing all control as control of a resolution prover has proven useful and intuitive in our application domain. The combination of static abstraction levels and dynamic meta level control rules allows for an effective orchestration of the automated system identification task.
Meta languages have a long history in logic and logic programming (Subrahmanian, 1988). Meta language constructs whose semantics are similar to the constructs in our system were suggested by Gallaire and Lasserre (Gallaire & Lasserre, 1979, 1982); however, their specification of the semantics was vague. Declaration of relevancy has a similar effect to Gallaire and Lasserre's finish predicate. The idea of establishing a relationship between clauses and their names also stems from (Gallaire & Lasserre, 1982). Our notready/1 predicate is also similar to Nu-Prolog's wait (Naish, 1985) and Gödel's delay (Hill & Lloyd, 1994). Amalgamated meta-level inference for SLDNF resolution was presented in (Bowen & Kowalski, 1982; Yalcinalp, 1991).
A terminology for meta-level systems was suggested by van Harmelen (van Harmelen, 1991). According to that classification, our system is a bilingual object-level inference system with a ground representation of object-level goals and clauses on the meta-level. Unlike some other systems in that category (Beckstein et al., 1996), our system provides no guards to express directionality. If guards were available, meta predicates like var/1 and ground/1 could be used to choose appropriate clauses without interfering with the declarative semantics of the clauses. In our system, however, such meta predicates appear in the body of the clause and must therefore be used with care. Also, as discussed in Section 4.3, we do not make use of a full ATMS. In the bodies of meta rules we allow the full GHCIL language instead of restricting the meta language to Horn clauses. To evaluate meta-level control clauses, the GHCIL inference engine simply calls itself pseudo-recursively instead of switching to an Earley theorem prover (Earley, 1970; Beckstein & Kim, 1991). We also added the notion of abstraction levels, as described in previous sections.
In the Foreword to (Hill & Lloyd, 1994), Robinson calls the difference between pure logic programming and applied logic programming “a gap that has plagued the relational logic programming community since the birth of PROLOG in the early 1970s.” In a perfect world, “programs are first-order theories, and computations are deductions from them.” Recently, several papers (for example, (Lin, 1997)) have assigned declarative semantics to procedural constructs like the cut or negation as failure by stratifying programs or restricting program models. Our solution to this problem is to disallow procedural constructs and to restrict negation syntactically to negation as inconsistency with intuitionistic semantics.
The problem of ordering query subgoals—and optimizing queries in general—has been studied in both the database and the logic programming communities (e.g., (Warren, 1981; Smith & Genesereth, 1985)). The continuing attempts of the logic programming community to make applied logic programs more declarative and thus more readable and comprehensible have a strikingly similar counterpart in the database community. Relational query languages (Ullman, 1988) allow the desired data to be specified declaratively. Query optimizers, however, are typically programmed in procedural terms. One might argue that query optimizers in databases correspond to control components in logic programs. Cherniack has developed a system that also expresses the information as to how queries are optimized declaratively, namely as declarative rewrite rules (Cherniack & Zdonik, 1998). In a sense, the concept of declarativeness is moving down the food chain. Naturally, this has to stop somewhere: Cherniack's system specifies the information "which rewrite rule should be applied when" in procedural terms. Similarly, our system executes the bodies of control rules from left to right, i.e., procedurally.
8. Conclusion
We have presented an implemented logic system whose language is that of generalized Horn clause intuitionistic logic with negation as inconsistency. The system achieves three important goals: equivalence of declarative and operational semantics, explicit and declarative representation of control information, and smooth interaction among various heterogeneous reasoning modes.
These goals have been accomplished by integrating and implementing several carefully chosen techniques. Static abstraction levels and dynamic meta control rules explicitly specify the deduction strategy of the inference engine, thereby allowing the reasoner to intelligently navigate in the search tree. An explicit representation of the theorem prover’s state allows information to be passed between various logical and non-logical reasoning modules. The abstraction levels and meta control rules specified by the programmer orchestrate the calls to these reasoning modules. Furthermore, an intelligent caching mechanism stores relevant formulae and makes them available for reuse in later proof attempts. Typical examples of such relevant formulae are intermediate results of expensive calls to various reasoning modules.
As an example, we have incorporated our system into PRET, an automated modeling tool that reasons about ordinary differential equations (ODEs). The logic system described in this paper is an effective and efficient reasoning core for this process. Its design allows a domain expert to express knowledge about dynamic systems and ODEs in a natural, declarative manner. The control information, which specifies how the domain knowledge is to be processed, is also formulated declaratively, but separately from the domain knowledge. This approach facilitates correctness and clarity of the domain knowledge because the expert need not be concerned with control strategies when formulating mathematical truths about dynamic systems and ODEs. As demonstrated in the PRET system, an appropriate set of control rules leads to the desirable behavior of the reasoner, which—in this case—amounts to an efficient search for an ODE: one that prioritizes cheap, abstract-level reasoning over expensive low-level reasoning whenever possible. Finally, the system identification task draws on a variety of heterogeneous reasoning modules. The logic
system described in this paper allows PRET to smoothly integrate these modules with each other, orchestrating them through careful control of its first-order theorem prover.
The system described in this paper implements an approach that can be viewed as a hybrid between what is known as tactical and strategic control in automated theorem proving. Rather than using different control mechanisms for tactical and strategic control, a PRET knowledge engineer expresses all control as control of a resolution prover. Combined with the abstraction hierarchy of more and more refined domain theories, this approach provides—in our experience—just the right tools for an effective orchestration of the automated system identification task. More generally, we expect that our approach will prove useful in other complex AI tasks that require the integration of heterogeneous reasoning modules that employ different reasoning paradigms. The conceptually clear separation between object-level domain knowledge and dynamic meta-level control knowledge, along with the domain theory's abstraction levels, allows for a formulation of control information that corresponds directly to the user's intuitions about more-abstract and less-abstract concepts and about more-expensive and less-expensive reasoning techniques in the application domain. The main goal of the design presented in this paper is not to give the logic system the means to learn or improve its reasoning strategies. Rather, it is to give the user (or knowledge engineer) the means to formalize domain-dependent information about various degrees of abstraction and about various degrees of reasoning cost in a way that is conceptually clear and that corresponds to a body of knowledge and expertise in the application domain. For this reason, a numerical comparison of PRET's performance with and without meta-control (along the lines of (Minton, 1990), for example) would be beside the point. PRET's control information does not merely make its reasoning more efficient; it makes it feasible. Nevertheless, it would be interesting to better understand the computational cost of PRET's meta-control module, and we are currently working on an empirical evaluation.
Acknowledgments
Matt Easley and Tom Wrensch contributed ideas and code to this project. Much of the work described in this paper is built on concepts developed by Clemens Beckstein and Gerhard Tobermann. The second author, in particular, wishes to thank them for many helpful discussions and continuing support.
References
Buchanan, B. G. (2001). Creativity at the meta-level. AI Magazine, 22(3). Presidential address at the Seventeenth National Conference on Artificial Intelligence (AAAI-00), Austin, Texas.
A Parallel Architecture for the Generalized Travelling Salesman Problem: Mid-Year Report
Max Scharrenbroich, maxfs at umd.edu
Dr. Bruce Golden, R. H. Smith School of Business, bgolden at rhsmith.umd.edu
Abstract:
The goal of this project is to develop a parallel implementation of a serial heuristic to attack large instances of the generalized travelling salesman problem (GTSP). By leveraging more computational resources the parallel version of the heuristic is expected to produce higher-quality solutions in less time. A significant portion of this project will involve the development of a parallel architecture that can be extended to host a selected serial heuristic and the GTSP problem class. The extension of the architecture to host the serial heuristic will involve the identification and implementation of different methods of parallel cooperation and levels of parallelism. The parallel heuristic will be tested on a database of problem instances and the performance will be compared to published results of the serial heuristic. In addition, the parallel heuristic will be tested to determine how performance scales with the number of processors used.
1 - Project Background and Introduction
**Problem**
The generalized traveling salesman problem (GTSP) is a variant of the well-known traveling salesman problem (TSP). Like the TSP, it is a combinatorial optimization problem and has important applications in the field of routing. In the GTSP, a set of nodes or vertices in the plane is grouped into a number of clusters. The goal is to find the shortest tour that visits each cluster exactly once. More formally, let $\mathbf{G}(V, A)$ be a graph where $V$ is the set of vertices and $A$ is the set of arcs. A distance matrix $C = (c_{ij})$ is defined on $A$. If $C$ is symmetric, the arcs are undirected and can be replaced with edges. In the GTSP, $V$ is partitioned into a set of clusters, $V = \{V_1, V_2, \ldots, V_m\}$, each containing a subset of the nodes from $G$. The goal is to determine the shortest Hamiltonian tour visiting each cluster exactly once. If the distance matrix is not symmetric, it may be cheaper to visit more than one node in a
cluster. For this project we consider the symmetric version of the GTSP, where \( V \) is partitioned into a set of node-disjoint clusters and the distance matrix is symmetric; hence, exactly one node in each cluster is visited. The following figure is an illustration of the problem (Figure 1).
Figure 1: Illustration of the GTSP for a problem with 6 clusters.
**Context**
Below are real-world examples of GTSP applications:
- Post-box collection and stochastic vehicle routing (G. Laporte, 1996) [5].
- Routing of welfare clients through government agencies (J.P. Saksena, 1970) [8].
- Warehouse order picking with multiple stock locations (C.E. Noon, 1988) [6].
- Airport selection and routing for courier planes (C.E. Noon, 1988) [6].
**Mathematical Formulation**
The symmetric GTSP can be formulated as the following 0-1 Integer Linear Program (ILP): Given a graph \( G(V,E) \), where the set \( \{V_1, V_2, ..., V_m\} \) is a partition of \( V \) into \( m \) clusters, and a distance matrix \( C \), where \( c_e \) is the Euclidean distance associated with edge \( e \in E \), find:
\[
\min \sum_{e \in E} c_e x_e
\]
Subject to:
\[
\sum_{i \in V_k} y_i = 1, \quad k = 1, 2, ..., m, \quad (1)
\]
\[
\sum_{e \in \delta(\{v\})} x_e = 2 y_v, \quad \forall v \in V, \quad (2)
\]
\[
\sum_{e \in \delta(S)} x_e \geq 2 (y_i + y_j - 1), \quad \forall S \subset V, \; i \in S, \; j \in V \setminus S, \quad (3)
\]
\[
y_i \in \{0, 1\}, \quad \forall i \in V, \quad (4)
\]
\[
x_e \in \{0, 1\}, \quad \forall e \in E. \quad (5)
\]
Constraint (1) imposes the requirement that each cluster be visited exactly once. The degree equations (2) stipulate that if a vertex \( v \) is part of the solution its degree must be equal to two. The subtour elimination constraints (3) ensure the solution does not contain any sub-tours. Constraints (4-5) are the 0-1 integer constraints on the selection of vertices and edges in the solution. \( \delta(S) \) is a function defining the edge cut set that partitions the vertex sets \( S \) and \( \bar{S} \).
**Existing Solutions/Algorithms**
Like the TSP, the GTSP is NP-hard, and it is conjectured that problems in this class are inherently intractable. Thus, one cannot expect to find “good” or polynomial-time algorithms for solving them. Despite this, there exist exact algorithms for solving the GTSP to optimality. One exact algorithm for solving the GTSP is a branch-and-cut (B&C) algorithm proposed by M. Fischetti in 1997 [4]. Branch-and-cut is a method of combinatorial optimization for solving integer linear programs. The method is a hybrid of branch-and-bound and cutting plane methods.
While B&C techniques drastically reduce the size of the solution space and perform well on small problem instances, these techniques are not polynomial time algorithms. As the size of the problem instance grows, the exponential nature of the problem becomes apparent and B&C algorithms do not terminate in a reasonable amount of time. For example, the run times for the Fischetti B&C algorithm start approaching one day for GTSP problem instances with close to 90 clusters [4].
Heuristic algorithms have been developed to solve larger GTSP problem instances. Heuristic algorithms are search techniques that find approximate solutions to hard
combinatorial optimization problems. The following are three heuristic algorithms that have been successfully applied to the GTSP:
- A Random-Key Genetic Algorithm (L. Snyder and M. Daskin, 2006) [3].
- Generalized Nearest Neighbor Heuristic (C.E. Noon, 1988) [6].
- mrOX Genetic Algorithm (J. Silberholz and B. L. Golden, 2007) [9].
2 - Approach
We propose a parallel approach to attacking the GTSP. Specifically, we will create a parallel architecture and extend the architecture’s framework to implement a known and tested serial heuristic algorithm for the GTSP. A new genetic algorithm proposed by J. Silberholz and B. L. Golden in [9], referred to as the mrOX Genetic Algorithm (mrOX GA), has shown promising results and is the chosen heuristic for this project.
In this section, an overview of genetic algorithms is given so that the reader has some background before the mrOX GA is described. Motivation for parallelizing serial heuristics for combinatorial optimization is outlined, followed by an overview of parallel meta-heuristic classifications. Several methods of parallel cooperation are discussed, and a high-level investigation of parallelism in the mrOX GA is given. Finally, the approach for attacking the GTSP and the objectives of the parallel architecture are described.
**Overview of Genetic Algorithms**
A genetic algorithm is a stochastic search technique commonly used to find approximate solutions to combinatorial optimization problems. Genetic algorithms are a class of evolutionary algorithms that are inspired by the process of natural selection and the theory of evolutionary biology. These algorithms mimic the process of evolution and natural selection by simulating a population of individuals (also known as chromosomes). An iteration of a genetic algorithm is analogous to evolving the next generation of a population. During the iteration a small subset of the fittest individuals (i.e. least cost) are mated to produce offspring with new traits. Since the resulting population is larger than the original, to maintain constant population size a simulated process of natural selection removes individuals that are found to be unfit. This process is iterated through a number of generations until stopping criteria are met.
**Initialization:**
Initialization is the first step in any genetic algorithm and involves randomly generating many individual solutions to form an initial population. The initial population covers a range of possible solutions (the search space). The population size is typically kept constant from generation to generation and depends on the nature of the problem.
**Selection:**
A genetic algorithm simulates the evolution of a population from generation to generation and mating of individuals is an important step in this process. Pairs of individuals known as parent chromosomes are selected for breeding from the
population based on fitness and offspring are produced by applying a crossover operator to the pair of chromosomes.
**Recombination:**
Recombination (crossover) involves the random selection of traits from each parent chromosome for insertion into the child chromosome. A crossover is required to produce viable offspring (feasible solutions for the problem instance). Depending on the structure of the chromosome and the nature of the problem, the crossover by itself is not guaranteed to produce feasible offspring. Thus following the actual crossover, heuristics must be applied to infeasible solutions to ensure that mating always produces feasible offspring.
**Local Search:**
After recombination there is usually room for additional improvement. It is typical that meta-heuristics perform local search improvement techniques to further improve the offspring. By using local search methods the solutions are guided into the local optimum of the local search neighborhood.
**Mutation:**
After crossover a small percentage of offspring are selected to be mutated. Mutation involves randomly perturbing parts of an individual’s chromosome. As in the case of crossover, mutation must also maintain a solution’s feasibility. Mutation ensures diversity in the population and prevents the algorithm from prematurely converging on a poor solution.
**Termination:**
Due to the combinatorial nature of the problems genetic algorithms are used to solve, there is no convergence analysis that can aid in determining when to terminate the algorithm. There are, however, many types of stopping criteria that can be used for terminating genetic algorithms. A typical stopping criterion is to stop after a fixed number of generations (or after an elapsed time). One method stops the algorithm after the best solution found so far does not change within a fixed number of generations. Another method is to stop after some minimum cost is exceeded.
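The following minimal sketch (ours; `cost`, `crossover`, and `mutate` are problem-specific placeholders, and selection and death are simplified to sampling and truncation rather than the fitness-spinner procedures described later) shows how these steps fit together in one loop:

```python
import random

def evolve(population, cost, crossover, mutate,
           n_parents=30, mutation_rate=0.05, max_stagnant=150):
    """One-population GA loop following the steps above."""
    pop_size = len(population)
    best = min(population, key=cost)
    stagnant = 0
    while stagnant < max_stagnant:                      # termination criterion
        parents = random.sample(population, n_parents)  # selection (simplified)
        children = [crossover(p1, p2)
                    for p1, p2 in zip(parents[::2], parents[1::2])]
        children = [mutate(c) if random.random() < mutation_rate else c
                    for c in children]                  # occasional mutation
        # "Natural selection": keep the pop_size least-cost individuals.
        population = sorted(population + children, key=cost)[:pop_size]
        if cost(population[0]) < cost(best):            # new best found?
            best, stagnant = population[0], 0
        else:
            stagnant += 1
    return best
```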
**Overview of the mrOX Genetic Algorithm**
The modified rotational ordered crossover genetic algorithm (mrOX GA), proposed by J. Silberholz and B. L. Golden in [9], is a serial genetic algorithm that is specially tailored to the GTSP problem. At its heart is the mrOX crossover operator, which performs a crossover between two parents. In the rest of this section an overview of the mrOX GA is given. For a more detailed treatment of the algorithm and computational results the reader is referred to [9].
It is best to describe the mrOX crossover operator before describing the rest of the mrOX genetic algorithm. First, a description of the ordered crossover (OX) portion of the mrOX is given and then the rotational (r + OX) and modified (m + rOX) portions are discussed so the reader may gain a better understanding of the crossover operator.
**Chromosome Representation:**
A natural way to represent feasible solutions to the GTSP is with an ordered sequence of nodes (path representation). For example, the sequence \{1, 4, 2\} represents the cycle visiting node 1, then node 4, then node 2 and finally back to node 1 to complete the cycle. The path representation lends itself nicely to the idea of a chromosome. Path representations for solutions to the GTSP are also referred to as chromosomes.
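Under this representation, evaluating a chromosome reduces to summing edge lengths around the cycle. A small sketch (ours; `dist` is an assumed symmetric distance function):

```python
def tour_cost(chromosome, dist):
    """Length of the closed tour through the chosen node of each cluster;
    `chromosome` is an ordered list of (cluster, node) pairs."""
    nodes = [node for _cluster, node in chromosome]
    return sum(dist(nodes[i], nodes[(i + 1) % len(nodes)])
               for i in range(len(nodes)))
```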
**OX:**
The ordered crossover (OX) operator is based on the TSP ordered crossover proposed by Davis in [3]. The TSP’s OX operator randomly selects two cut points on one of two parent chromosomes. The order of the nodes between the two cut points on the first parent is maintained. The remaining non-duplicate nodes from the second parent are placed, in order, starting to the right of the second cut point with wrap-around if necessary. For the GTSP this method is modified so that clusters being added from the second parent do not coincide with clusters from the first parent (i.e. we want to ensure that each cluster is visited only once). Figure 2 shows an illustration of the OX operator as applied to a solution for a hypothetical GTSP.

In Figure 2 the OX procedure starts with two parent chromosomes, P1 and P2. The square brackets with sequences of numbers represent a chromosome, or solution for a
hypothetical GTSP problem. The numbers represent an ordered pair, \((c_i, n_j)\), where the base number represents a cluster and the superscript indicates the node that is being visited. Initially, cut points are randomly generated on the parent chromosomes (A). In the figure, cut points on the chromosomes are represented by vertical bars and the segmented parent chromosomes are represented by P1' and P2'.
The child chromosome is initialized with the sub-path from the first parent (B). Cluster-node pairs from the second parent, moving left to right, are then added to the empty slots of the child chromosome while avoiding duplicate clusters (C). The curly brackets are a visual aid and show the order in which cluster-node pairs from the second parent are added to the child chromosome. The list of cluster-node pairs from the second parent represents a sub-path to be connected to the first parent’s sub-path.
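The following sketch (ours, based on the description above; chromosomes are lists of (cluster, node) pairs) implements this GTSP variant of OX:

```python
import random

def ox_crossover(p1, p2):
    """GTSP ordered crossover: keep the segment between two random cut
    points of parent 1, then place the remaining cluster/node pairs, in
    the order they appear in parent 2, starting to the right of the
    second cut point with wrap-around, skipping duplicate clusters."""
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    segment = p1[i:j]
    kept = {cluster for cluster, _node in segment}
    tail = [(c, n) for c, n in p2 if c not in kept]
    wrap = len(p1) - j          # number of slots to the right of the second cut
    return tail[wrap:] + segment + tail[:wrap]
```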
**rOX:**
Next, the OX is modified with a rotational component yielding the rOX \((r + OX)\). The rotational component acts on the sub-path (from the second parent) to be added to the child chromosome. This sub-path is used to create two sets of sub-paths. One set of sub-paths is generated by applying a shift operator to the original sub-path. The other set of sub-paths is the mirror image of the first set. As an example, assume that after the OX the following sub-path is generated: \{1, 2, 3\}. Applying a shift operator to this sub-path yields the set of sub-paths:
\[
\{1, 2, 3\} \rightarrow \{ \{1, 2, 3\} \{2, 3, 1\} \{3, 1, 2\} \}
\]
The second set of sub-paths is the mirror image of the first:
\[
\{ \{1, 2, 3\} \{2, 3, 1\} \{3, 1, 2\} \} \rightarrow \{ \{3, 2, 1\} \{1, 3, 2\} \{2, 1, 3\} \}
\]
**mrOX:**
The rotational component is further modified resulting in the mrOX \((m + rOX)\). For each sub-path generated in the rOX, every combination of nodes in the clusters at the end points of the sub-path is generated, resulting in an augmented set of sub-paths to be tested. As an example, suppose one of the sub-paths from the rOX procedure is: \{1^{(A,B)}, 3, 2^{(C,D)}\}. Creating the combinations of different nodes at the end points yields the following set of possible sub-paths:
\[
\{1, 3, 2\} \rightarrow \{ \{ 1^A, 3, 2^C \} \{ 1^A, 3, 2^D \} \{ 1^B, 3, 2^C \} \{ 1^B, 3, 2^D \} \}
\]
An example of the full mrOX crossover is illustrated in Figure 3.
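Putting the rotational and modified components together, a sketch (ours; `cluster_nodes` is an assumed lookup from cluster to node list, and sub-paths are assumed to contain at least two pairs) of the candidate set the mrOX operator enumerates:

```python
def mrox_candidates(subpath, cluster_nodes):
    """All rotations of `subpath`, their mirror images, and, for each,
    every combination of alternative nodes in the two end-point clusters."""
    m = len(subpath)
    rotations = [subpath[k:] + subpath[:k] for k in range(m)]
    variants = rotations + [list(reversed(r)) for r in rotations]  # + mirrors
    candidates = []
    for v in variants:
        (c_first, _), (c_last, _) = v[0], v[-1]
        for n_first in cluster_nodes[c_first]:      # every end-node combination
            for n_last in cluster_nodes[c_last]:
                candidates.append([(c_first, n_first)] + v[1:-1]
                                  + [(c_last, n_last)])
    return candidates
```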
**Complexity of the mrOX Crossover:**
The complexity of the mrOX crossover operator can be calculated using the following equation:
\[
N(S_{p2}) = 2 \sum_{i=0}^{m-1} n(i, S_{p2})\, n((i + 1) \bmod m, S_{p2})
\]
Where \( N(S_{p2}) \) is the number of comparisons required in an mrOX GA crossover operation, \( S_{p2} = (v_0, v_1, \ldots, v_{m-1}) \) is the ordered set of cluster/node pairs added from the second parent, \( m = |S_{p2}| \) is the size of the set, and \( n(i, S_{p2}) \) is a function that returns the number of nodes in the cluster at index \( i \) in the ordered set \( S_{p2} \). The number of comparisons is bounded by:
\[
N(S_{p2}) \leq 2mn_{\text{max}}^2
\]
Where \( n_{\text{max}} \) is the maximum number of nodes in a cluster. If the number of nodes per cluster is constant throughout all the clusters in the problem, the equation reduces to:
\[
N(S_{p2}) = 2mn_n^2
\]
Where \( n_n \) is the number of nodes in a cluster.
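A direct transcription of this count (ours, using the zero-based indexing of the formula above); the assertion reproduces the uniform-cluster special case:

```python
def crossover_comparisons(cluster_sizes):
    """Number of comparisons N(S_p2) for a crossover whose sub-path visits
    clusters of the given sizes (adjacent pairs, with wrap-around)."""
    m = len(cluster_sizes)
    return 2 * sum(cluster_sizes[i] * cluster_sizes[(i + 1) % m]
                   for i in range(m))

# Uniform cluster size n_n reproduces the reduced formula 2 * m * n_n**2:
assert crossover_comparisons([3, 3, 3, 3]) == 2 * 4 * 3**2
```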
**Outline of the mrOX GA:**
Having described the mrOX operator, an outline of the mrOX GA can now be given.
- **Initialization**: The mrOX GA starts by initializing seven isolated randomly generated populations (islands) containing 50 individuals each. During the evolution of the isolated populations a light-weight version of the mrOX crossover operator (rOX) followed by local improvement heuristics are applied to quickly generate reasonable solutions. The local improvement involves one full cycle of two-opt followed by one-swap and is applied only to the new best solution in each population.
- **Population Merge**: After none of the isolated populations produced a new best solution for 10 consecutive generations, the seven isolated populations are merged by selecting the 50 best solutions out of the combined population of 350 solutions.
- **Continued Evolution**: Post-merge, each generation is evolved using the full mrOX crossover operator followed by local improvement heuristics. The local improvement involves carrying out multiple cycles of two-opt followed by one-swap until no improvements are found. Local improvements are only carried out on child solutions that have better fitness than both parents. Local improvements are also made to a randomly selected 5% of new chromosomes to preserve diversity.
- **Reproduction and Death**: In each generation a subset of 30 individuals is randomly selected using a spinner procedure (based on individual fitness) for reproduction. Each pair of parent chromosomes produces two offspring, yielding a total of 30 child chromosomes. After reproduction, in order to maintain the population size of 50 individuals, 30 individuals are randomly selected for death using a similar procedure to that used for parent selection.
- **Mutation**: Before and after the merge each chromosome has a 5% probability of being selected for mutation to preserve diversity. The mutation consists of randomly selecting two cut points in the interior of an individual’s chromosome and reversing the order of the nodes in between these two points.
- **Termination**: The algorithm is terminated after the merged population does not produce a better solution for 150 consecutive generations.
**Local Search in the mrOX GA:**
Local improvement heuristics (also known as local search) are used to find local optima within a neighborhood of a solution and significantly improve the performance of genetic algorithms [10]. In the mrOX GA, local improvement heuristics are applied after the crossover operation. In the initial (light-weight) phase of the mrOX GA, one cycle of 2-opt followed by 1-swap is applied only if the crossover produces a new best solution. In the post-merge phase, full cycles of 2-opt followed by 1-swap are applied only if the crossover produces a solution that is better than both parents. By being selective about applying the local search, the mrOX GA improves run-time by avoiding improvement of solutions that do not appear promising.
The 2-opt improvement heuristic checks every possible two-edge exchange and selects the best one. This is equivalent to uncrossing two crossed paths. The 1-swap inserts a node in every possible position for each of the nodes and picks the best one. Both heuristics have complexity $O(n^2)$. Figure 4 illustrates the two heuristics.
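For reference, a sketch (ours) of one full cycle of 2-opt on the node tour; `dist` is again an assumed symmetric distance function, and 1-swap would follow the same best-improvement pattern:

```python
def two_opt_pass(tour, dist):
    """One cycle of 2-opt: try every two-edge exchange (equivalent to
    reversing a segment) and apply the single best improvement found."""
    n = len(tour)
    best_delta, best_move = 0.0, None
    for i in range(n - 1):
        # Skip adjacent edge pairs; for i == 0 also skip j == n - 1.
        for j in range(i + 2, n if i > 0 else n - 1):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            delta = (dist(a, c) + dist(b, d)) - (dist(a, b) + dist(c, d))
            if delta < best_delta:
                best_delta, best_move = delta, (i + 1, j)
    if best_move:
        i, j = best_move
        tour[i:j + 1] = reversed(tour[i:j + 1])   # uncross the two paths
    return tour, best_delta
```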

**Mutation in the mrOX GA:**
As mentioned earlier mutation ensures diversity in the population and prevents the algorithm from prematurely converging on a poor solution. Mutation in the mrOX GA consists of randomly selecting two cut points in the interior of an individual's chromosome and reversing the order of the nodes in between these two points. Figure 5 illustrates a mutation operation.
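A direct sketch (ours) of this reversal mutation:

```python
import random

def reversal_mutation(chromosome):
    """Pick two interior cut points and reverse the order of the
    cluster/node pairs between them. Assumes at least three entries."""
    i, j = sorted(random.sample(range(1, len(chromosome)), 2))
    chromosome[i:j] = reversed(chromosome[i:j])
    return chromosome
```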
**Motivation for Parallelization**
Traditionally, the goal when designing parallel algorithms is to reduce the time required to solve the problem. For exact solution methods a useful performance measurement is the speedup, computed as the ratio of the wall-clock time taken by the sequential algorithm to the wall-clock time required to solve the problem in parallel with \( p \) processors.
Performance measures such as speedup are harder to define for heuristic methods that are not guaranteed to reach the optimal solution. Thus, the goal of an effective parallel heuristic is to outperform its sequential counterpart in terms of solution quality and computational efficiency [2].
Below are several motivations for parallelizing serial heuristics for combinatorial optimization:
**Speedup:**
Speedup is an important motivation for parallelizing any algorithm. Simply put, if idle computational resources exist, then they could be put to work producing results faster. An example would be if users needed results in real-time. A parallel implementation may be able to produce results in a matter of seconds instead of minutes or hours.
**Increased Problem Size:**
Another motivation for parallelization is that by leveraging more computational resources the parallel heuristic can handle larger problem instances.
**Robustness with Parameter Exploration:**
Many of the meta-heuristics applied to combinatorial optimization have multiple parameters that influence the success of the algorithm on a specific problem or class of problem instances. This can make tuning the parameters to specific problems time consuming, especially if run times are long.
By running different parameterizations on different processes the parameter space can be explored, avoiding the need for manual tuning. In addition, this approach avoids the need for re-tuning when the algorithm is applied to a different problem instance. It is expected that a parallel version of an algorithm using parameter exploration will exhibit robustness and perform consistently on a range of problem instances.
**Cooperation:**
Parallelization allows cooperation among processes. It is believed that cooperation can improve the solution quality by guiding the search to more promising regions of the search space.
**Classification of Parallel Meta-Heuristics**
An important step in creating a parallel implementation of a heuristic is in determining what aspects of the heuristic under consideration are amenable to parallelization. In 1998 Crainic and Toulouse proposed three types of classifications for parallel meta-heuristics [1].
- **Type-1: Low-Level Parallelism**: Attempts to speed up processing within an iteration of a heuristic method. For example, if there is a task within a heuristic that has a high computational burden and can be parallelized then low-level parallelism can be implemented to speed up that portion of the heuristic.
- **Type-2: Partitioning of the Solution Space**: Partitions the solution space into subsets to explore in parallel. At the end of processing the results are combined in some way to produce the final solution.
- **Type-3: Concurrent Exploration**: Multiple concurrent explorations of the solution space. Genetic algorithms are particularly amenable to this type of parallelism since these heuristics operate on populations of solutions. In concurrent exploration cooperation among processes can be implemented.
**Methods of Cooperation**
As mentioned above, hosting a serial heuristic in a parallel architecture allows cooperation to further improve the convergence and quality of a solution. Although there are many ways for cooperation to be implemented, the following three methods of cooperation will be investigated in the course of this project:
**No Cooperation:**
The case where processes do not use cooperation is a useful benchmark for testing whether or not other methods of cooperation are yielding improvements. In this case there is no exchange of information between the processes. When the stopping criterion is reached the best solution is picked from among all the processes. Conceptually, this is equivalent to running multiple instances of the serial implementation.

Figure 6: Illustration of the no-cooperation scheme.
**Solution Warehouse:**
The solution warehouse method is a basic architecture for cooperation among worker processes running in parallel. In this method a worker process (solution warehouse) is selected to be the mediator of information between the other worker processes. The solution warehouse collects problem solutions periodically from the worker processes and manages them in a list according to cost (i.e. it keeps track of the best solutions found so far). Due to performance limitations the list is kept to a manageable size. In accordance with a predefined schedule or scheme the solution warehouse sends a subset of the solutions back to the worker processes for further processing. The following is one implementation scheme for the solution warehouse method:
1. Each process sends its best solution to the warehouse after every k iterations (or after a period of time).
2. The warehouse collects the solutions and adds them to a list sorted by the cost, maintaining the top t solutions in memory.
3. The warehouse then assigns the best solution (or subset of solutions) to a subset of the worker processes and then randomly assigns solutions from the list to each remaining process (with no repeats) for continued processing.
The scheme described above maintains diversity by allowing some of the workers to continue processing solutions that are not necessarily the best found so far. Maintaining diversity prevents premature convergence to poor local optima.
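A sketch (ours; costs and solutions are placeholders) of the warehouse bookkeeping in steps 1-3:

```python
import random

class SolutionWarehouse:
    """Keeps the top t solutions seen so far and hands them back out."""

    def __init__(self, t):
        self.t = t
        self.best = []                        # (cost, solution), sorted by cost

    def deposit(self, cost, solution):
        self.best.append((cost, solution))
        self.best.sort(key=lambda entry: entry[0])
        del self.best[self.t:]                # retain only the top t solutions

    def assign(self, n_workers, n_elite):
        """Best solutions go to a few workers; the rest receive random
        distinct entries from the list, which preserves diversity."""
        elite = self.best[:n_elite]
        pool = self.best[n_elite:]
        rest = random.sample(pool, min(n_workers - n_elite, len(pool)))
        return elite + rest
```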
**Inter-Worker Cooperation:**
Inter-worker cooperation is a general method of cooperation where workers exchange solutions based on a pre-defined communication topology. Workers are only allowed to communicate with their neighbors. An example of a possible communication topology is a unidirectional ring topology. In a unidirectional ring topology each worker sends information to one neighbor. Figure 8 illustrates the ring topology method of communication.
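A sketch of one such exchange step using mpi4py (an assumption on our part; the project's own framework wraps MPI in the C++ classes described in Section 3):

```python
from mpi4py import MPI

def ring_exchange(best_solution):
    """Send this worker's best solution to its successor on the ring and
    receive one from its predecessor (unidirectional ring topology)."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    dest = (rank + 1) % size        # successor on the ring
    source = (rank - 1) % size      # predecessor on the ring
    # sendrecv pairs the send and receive, avoiding the deadlock of
    # every worker blocking on a send at the same time.
    return comm.sendrecv(best_solution, dest=dest, source=source)
```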
**Parallelism in the mrOX GA**
The first step in parallelizing an algorithm is in identifying subroutines that are amenable to parallelization. In this section we investigate the ability to exploit different levels of parallelism in the mrOX GA.
**Low-Level Parallelism in the mrOX GA:**
Recall that low-level parallelism, also known as type-1 parallelism, seeks to speed up processing by parallelizing a computationally intensive subroutine within an iteration of
an algorithm. An ideal candidate for low-level parallelism is a subroutine where the workload can be divided evenly among a number of processors. If the workload can be divided evenly, the computational work will be completed at approximately the same time on all processors, leaving few processors idle as they wait for other processes to complete.
In the mrOX GA there are two computationally intensive subroutines that are candidates for low-level parallelism: the mrOX crossover operation, and the local search improvement heuristics. It was described earlier that the computational complexity of the mrOX crossover operation could be calculated a priori. Therefore, the computational load can be estimated for each crossover operation and the work can be spread across a number of processors in such a way that all processes complete at approximately the same time.
The complexity of the mrOX crossover is a function of the number of cluster/node pairs in $S_{p2}$ and the number of nodes in each of the associated clusters. Unfortunately, the loading for each crossover will be different. This is itself an example of the NP-complete multiprocessor scheduling problem, where, given $m$ processors, $n$ jobs, and a time $t_j$ associated with each crossover job $j$, the goal is to find the schedule that minimizes the amount of idle time. Solving the scheduling problem will take up valuable CPU time, and the resulting schedule will likely leave CPU cycles idle. One idea to overcome the scheduling problem is to allow jobs at the end of the schedule to be terminated early so that all the processes finish at the same time (see Figure 9).
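For comparison, a standard greedy longest-processing-time sketch (ours, not part of the proposed design) for spreading crossover jobs across processors:

```python
import heapq

def lpt_schedule(job_times, n_procs):
    """Assign each crossover job, largest first, to the currently
    least-loaded processor. Not optimal (the scheduling problem is
    NP-complete) but cheap and usually well balanced."""
    loads = [(0.0, p, []) for p in range(n_procs)]   # (total time, proc, jobs)
    heapq.heapify(loads)
    for t in sorted(job_times, reverse=True):
        load, p, jobs = heapq.heappop(loads)         # least-loaded processor
        heapq.heappush(loads, (load + t, p, jobs + [t]))
    return sorted(loads)
```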

On the other hand, the local improvement heuristic (full cycles of 2-opt followed by 1-swap) is not an ideal candidate for low-level parallelism because the number of cycles of 2-opt and 1-swap is not known in advance. Timing tests of the serial mrOX GA show that a majority of the time is spent in the local search subroutine, so even if the crossover operation is parallelized, local search would still be implemented serially (see Figure 10).
While providing a speedup, parallelizing only a subset of computationally intensive subroutines creates a bottleneck and introduces large inefficiencies in a parallel implementation because of idle processors (see Figure 11).
This investigation into low-level parallelism in the mrOX GA suggests that it is not amenable to this kind of parallelism. The main avenue for parallelization in the mrOX GA will be through concurrent exploration.
**Concurrent Exploration:**
Type-3 parallelization amounts to multiple concurrent explorations of the solution space. This is equivalent to running multiple copies of a serial heuristic in parallel. Genetic algorithms are particularly amenable to this type of parallelism since these heuristics operate on populations of solutions. In a type-3 parallel implementation cooperation
among processes can be implemented. A study of fine-grained or cellular genetic algorithms (cGAs) was undertaken as motivation for parallel cooperation schemes.
Cellular genetic algorithms are genetic algorithms where the population is structured on an N-dimensional toroidal mesh (N is typically 2). Each individual can only reproduce with other members of its neighborhood (e.g. up, down, left and right – see Figure 12). In this type of population structure neighborhoods overlap, giving cGAs an implicit mechanism of migration. Thus, more successful solutions diffuse more rapidly through the population. Diffusion of solutions in cGAs is much slower than in panmictic GAs, and it is reasoned that because of slower diffusion, cGAs foster the formation of niches. The formation of niches preserves diversity (exploration) in the overall population while promoting specialization (exploitation) within these areas [11].

Implementing the mrOX GA using the population structure of a cGA would significantly alter it so as to make it a different serial algorithm. Applying the mesh communication topology at a process level could be successful, and would allow solutions to migrate (diffuse) between neighboring processes, providing a similar mechanism of niche formation as in cGAs (Figure 13). The mesh communication topology will be further considered for the parallel mrOX implementation.
Some further investigation and testing will be required to determine a reliable communication scheme over the mesh topology. Some important parameters that will be considered will be communication interval as well as parameters influencing the selection of neighbor solutions.
**Method of Approach**
The following list outlines the method of approach we will take for creating a parallel heuristic for the GTSP:
1. Develop a general parallel architecture for hosting sequential heuristic algorithms.
2. Extend the framework provided by the architecture to host the mrOX GA heuristic and the GTSP problem class.
3. Since genetic algorithms are well suited for the type 3 parallelization (concurrent exploration) the parallel implementation will consist of concurrent processes running the mrOX GA.
4. Type 1 or low-level parallelism will be considered in addition to the type 3 parallelism mentioned above.
5. Implement several different methods of parallel cooperation.
**Parallel Architecture Objectives**
The following is a list of objectives of the proposed parallel architecture:
- Provide a layer of abstraction from Message Passing Interface (MPI) so application developers do not need to be aware of the MPI implementation details.
- Provide a framework of interfaces, classes and event handlers for extensibility.
- Provide parallel cooperation using the selected cooperation scheme.
- Utilize multi-threading for handling I/O and framework related tasks on idle CPUs to prevent processing interruptions.
- Provide a capability for reporting process resource usage, status, debug and timing information.
3 – Implementation
**Hardware and Software:**
Initial Development and Testing: Single processor PC running Linux O/S, then move to multi-core PC.
Final Testing: UMD’s Deepthought Cluster, Linux O/S, with up to 64 nodes where each node has at least 2 processors.
**Preliminary Design:**
The following is a preliminary design for the parallel architecture and framework.
**Summary of Architecture:**
The architecture is made up of three levels:
1. MPI/communications layer
2. Framework layer
3. Extension of framework
**MPI/Communications Layer:**
**Message** – a base class for passing objects and data structures to/from processes. This class is like a serialization interface that creates messages to/from a byte stream.
**Process** – a base class for all other process types.
**RootProcess** – a root process manages a group of worker processes. The root process is responsible for collecting messages and information produced by worker processes.
**WorkerProcess** – a process that takes initialization parameters, takes periodic input, and produces periodic output and a final output. Worker processes expose virtual functions for handling initialization, periodic input, and periodic output. An important function is the WorkerThread function that is executed after initialization. This function is defined by child classes.
**WorkGroup** – a class that encapsulates a RootProcess and a number of WorkerProcesses. This class has a static factory function for creating WorkGroup instances.
**InterWorkerTopology** – a class that manages the inter-worker communication topology for WorkerProcesses within a WorkGroup. An example is the Cartesian MPI communicator pattern.
**Neighborhood** – a class that defines the communication neighborhood of a specific WorkerProcess.
**Framework Layer:**
**ProblemInstance** – a base class for encapsulating specific problem instances. Functions to set whether the problem is a minimization or maximization problem type.
**SerialHeuristic** – a base class for encapsulating a specific serial heuristic. A serial heuristic has a collection of Phases – or execution states. Any preexisting serial heuristic must be broken down into at least one Phase. A Phase consists of an execution of a single iteration of the specific phase of a serial heuristic.
**Solution** – a base class for encapsulating an individual solution. This class exposes a function for computing or reporting the solution's objective function value.
**SolutionSet** – a collection of solutions (population of solutions). This provides functions for sorting and collecting statistics on the solutions within the set. Functions like BestSolution(), WorstSolution(), MaxObjectiveValue(), MinObjectiveValue(), MeanObjectiveValue(), MedianObjectiveValue(), and StddevObjectiveValues(). In addition, the class provides the ability to select solutions randomly using several methods.
**Extension of Framework:**
**ProblemInstance, Message::GTSP** – Generalized traveling salesman problem instance.
**Solution, Message::GTSPSolution** – An instance of a solution to the GTSP.
**SolutionSet<GTSPSolution>** - A collection of GTSPSolutions.
**WorkerProcess [SerialHeuristic::mrOXGA]** – The mrOX genetic algorithm heuristic.
- Phase1 – Population isolation phase.
- Phase2 – Post-merge population evolution.
- Cooperation1 – Exchange solutions with neighbors.
- Cooperation2 – If the current worker process stagnates while other workers outside the current neighborhood are making improvements, this cooperation phase makes a drastic population shift to incorporate solutions from more successful processes.
4 - Databases
The database for testing the parallel algorithm will be based on a subset of TSP instances from the well-known TSPLib\(^1\), a library of TSP instances that can be found online. We shall use existing code that implements the method described in Section 6 of [4] to cluster nodes of a TSP instance. This method clusters nodes based on proximity to each other, iteratively selecting \( m = \lceil n/5 \rceil \) centers of clusters such that each center maximizes its distance from the closest already-selected center. Then, all nodes are added to the cluster whose center is closest.
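A sketch (ours) of this clustering procedure; `points` is the node list and `dist` an assumed distance function:

```python
import math

def cluster_tsp_instance(points, dist):
    """Pick m = ceil(n/5) centers, each maximizing its distance to the
    nearest already-chosen center, then assign every node to the closest
    center (the clustering method of Section 6 of [4])."""
    n = len(points)
    m = math.ceil(n / 5)
    centers = [0]                                     # arbitrary first center
    while len(centers) < m:
        farthest = max(range(n),
                       key=lambda v: min(dist(points[v], points[c])
                                         for c in centers))
        centers.append(farthest)
    clusters = {c: [] for c in centers}
    for v in range(n):
        closest = min(centers, key=lambda c: dist(points[v], points[c]))
        clusters[closest].append(v)
    return clusters
```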
Fischetti et al.'s branch-and-cut algorithm provides exact values for TSPLib datasets where the number of nodes ranges between 48 and 400 and the number of clusters between 10 and 80 [4]. The serial heuristic run times for these problem instances are fairly short (all less than 10 seconds), and we don't expect the parallel implementation to perform better than the serial one due to lack of search depth and parallelization overhead. In [9] the mrOX GA was tested against another genetic algorithm, Snyder and Daskin's Random-Key Genetic Algorithm [10], on problem instances where the number of nodes is between 400 and 1084 and the number of clusters between 80 and 200. For this set of instances the run time of the serial algorithm ranged from 10 to 131 seconds. It is for this set of instances that we will test performance and where we expect to see improvement using the parallel implementation.
5 – Validation and Testing
Validation and testing will consist of several phases.
**Validation**
Validation is an important step in verifying that the behavior of the software matches what it is intended to do. The following procedure will be used to validate the code.
1. Validate the parallel architecture using a simple test algorithm, generating several test cases to exercise the functionality of the parallel architecture.
---
\(^1\) [http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/](http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/)
2. Test the parallel implementation using one processor over a number of runs for a subset of problem instances and compare those results to published ones. Run times and results are expected to match the published ones closely.
3. Test the parallel implementation with more than one processor over a number of runs for the same subset of problem instances used in part 2.
**Testing**
After validation we will compare the performance of the parallel implementation with that of the serial one. As mentioned earlier, comparing a parallel heuristic to its serial counterpart is not straightforward. We propose the following set of tests to measure the performance improvement due to parallelization. For the parallel implementation and the selected cooperation scheme, run the following tests:
1. For one set of tests, use the solution costs published for runs of the serial algorithm in [9] as the stopping criterion for the parallel implementation.
2. Run the parallel implementation with different numbers of processors and measure the processing times using the above stopping criterion.
3. Compare the processing times to the ideal processing time as a function of the number of processors. The ideal processing time is the serial processing time divided by the number of processors (the relevant quantities are written out after this list).
4. To test the efficacy of cooperation, run the above tests using the parallel implementation with the non-cooperative scheme and compare the results to the cooperative scheme. Conceptually, the non-cooperative scheme is equivalent to running independent copies of the serial implementation.
5. Reduce the population size and/or the number of parents selected for crossover and measure the processing time and solution quality.
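For reference, the quantities used in tests 2 and 3 can be written out explicitly; these are the standard definitions of ideal time, speedup, and efficiency:

\[
T_{\mathrm{ideal}}(p) = \frac{T_{\mathrm{serial}}}{p}, \qquad
S(p) = \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}(p)}, \qquad
E(p) = \frac{S(p)}{p},
\]

where \( p \) is the number of processors. Perfect scaling corresponds to \( S(p) = p \) and \( E(p) = 1 \).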
6 – Project Schedule/Milestones
**October 16-30:** Start design of the parallel architecture.
**November:** Finish design and start coding and testing of the parallel architecture.
**December and January:** Continue coding parallel architecture and extend the framework for the mrOX GA algorithm and the GTSP problem class.
**February 1-15:** Begin test and validation on multi-core PC.
**February 16-March 1:** Move testing to Deepthought cluster.
**March:** Perform final testing on full data sets and collect results.
**April-May:** Generate parallel architecture API documentation, write final report.
7 – Deliverables
- Parallel architecture code, scripts and API documentation.
- Tables of results.
- Final report.
8 – References
The Challenges That Challenge: Engaging With Agile Practitioners’ Concerns
Peggy Gregory a*, Leonor Barroca b, Helen Sharp c, Advait Deshpande, Katie Taylor a
a University of Central Lancashire, Preston PR1 2HE, UK
b The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
c Corresponding author
Abstract
Context: There continues to be concern that research is not addressing the challenges that practice faces. For the benefit of academia and industry, researchers need to be aware of practitioners’ challenges and their context so that relevant and applicable research is undertaken.
Objective: This paper investigates two research questions: what challenges do agile practitioners face? and, how do practitioner challenges manifest themselves in an organisational setting? It aims to map the practitioner challenge landscape, explore challenge characteristics, compare findings with previous literature and identify implications for research that is relevant to practice.
Method: A combination of methods was used: elicitation of practitioner challenges collected using a Challenge Wall at a series of practitioner events; organisational Case Study using interviews, document analysis and observation; and online Survey. Findings were then compared to previous publications.
Results: Challenges collected from the Challenge Wall were grouped under 27 subthemes and seven themes: Claims and Limitations, Organisation, Sustainability, Culture, Teams, Scale, and Value. Investigating one challenge in the Case Study uncovered a set of new challenges, which were inter-related. Over 50% of survey respondents experienced challenges highlighted in the Case Study.
Conclusion: The landscape of agile practitioner challenges is complex and intertwined. Some challenges, such as doing agile in a non-agile environment, are multi-dimensional, affect many aspects of practice, and may be experienced simultaneously as business, organisational, social and adaptation problems. Some challenges, such as understanding cultural change or measuring agile value, persist and are hard to address, while others, such as adoption, change focus over time. Some challenges, such as governance and contracts, are under-researched, while others, such as business and IT transformation, have been researched but findings have not had the expected impact. Researchers wishing to address practitioner challenges need to treat them in context rather than in isolation and improve knowledge transfer.
Keywords
Agile Methods, Agile Software Development, Challenges, Evidence-based Software Engineering, DSDM
1 Introduction
Successfully adopting and using agile approaches within an organisation is challenging. As agile approaches mature and their use becomes more widespread [1], the nature of the challenges that practitioners and organisations face is changing. New challenges are emerging and the focus of existing challenges is shifting, reflecting the current state of practice. Some challenging activities, for example setting up a Scrum team, have been the subject of research and are now better understood [2, 3]. There is a growing body of research literature, experience reports, books and guidelines providing suggestions for those seeking help. Even so, some known challenges still pose problems in practice. Additionally, new challenges are emerging as organisations push the boundaries of existing techniques and try new approaches or move into unknown territory.
Agility is a very broadly understood concept that is difficult to define clearly [4]. The Agile Manifesto declaration and principles remain the de facto delineation of agile [5] to which authors frequently refer. In practice, discussion often focuses on specific methods, but even then there are many options for organisations to choose from. A recent survey [1] confirms that a wide variety of methods are used in industry; the most common being Scrum, followed by: ‘home-grown’ approaches, Extreme Programming (XP), Scrum/XP Hybrid, Lean Software Development, Feature Driven Development and Dynamic Systems Development Method (DSDM). In this paper the term ‘agile practitioners’ is defined widely to mean anyone working in an organisation who is involved in either making decisions about agile or using an agile approach. This covers a wide range of roles from executive directors to software developers. We have not imposed our own definition of agile on this work, but follow Lyytinen and Rose [6], in relying on research participants to use their own definitions of agility.
If academic research is to be relevant to practice, researchers need to keep abreast of practitioner challenges, have a grounded understanding of them, and be able to tackle the changing landscape of practitioner challenges as it evolves. This paper seeks to address two research questions:
RQ1: What challenges do agile practitioners face?
RQ2: How do practitioner challenges manifest themselves in an organisational setting?
The first research question explores the landscape of practitioners’ agile challenges. The aim is to capture an overview of challenges - the breadth rather than the depth. This question is addressed by the Challenge Wall study described in section 3.1. The second research question explores challenges within organisations and aims to uncover rich detail about those challenges. This is addressed by the Case Study described in section 3.2 and the Survey described in section 3.3.
In answering these research questions we also compare the current landscape with previous reports of practitioner challenges, consider what challenges have been tackled in the research literature, and discuss the nature of agile practitioner challenges.
This paper is an extended version of Gregory et al. [7], which presented a snapshot of challenges gathered from practitioners using a Challenge Wall and discussed how the challenge landscape has changed. This extended paper continues and deepens that discussion by bringing in empirical data from an organisational Case Study and a Survey and exploring challenge characteristics.
In this paper, Section 2 presents previous literature on the need for industrial engagement with academic research and previous investigations of practitioner challenges. Section 3 has three sections. The first introduces the Challenge Wall and presents the results of a thematic analysis of challenges collected during 2013 and 2014; the second presents a Case Study and reports a detailed investigation into practitioner challenges in one organisation; and the third presents results from an online Survey undertaken to expand insights from the Case Study. Section 4 returns to the research questions and discusses the implications of the findings and Section 5 presents conclusions.
2 Related Work
A series of papers has charted the progress of agile research since its early days. In 2003 Abrahamsson et al. found ‘a jungle of emerged software development methods’ and a lack of empirical evidence to support ideas [8]. In 2008, Dingsøyr et al. [9] stated that the primary challenge for agile research was to combine academic rigour with industrial relevance, suggesting that researchers could use research methods such as action research as a way to increase relevance and impact. In a systematic review in the same year, Dybå and Dingsøyr concluded that there was a need for more empirical research, and suggested that researchers should aim to understand the drivers of agile adoption as well as its effects [10]. The call for more research was continued in 2009 by Abrahamsson et al. [11], who also identified a need for more rigour and industrial studies, as well as highlighting a lack of clarity about what was meant by agility. More recently the research landscape has changed. Both Dingsøyr et al. in 2012 [12] and Chuang et al. in 2014 [13] have reported an increase in published research, indicating a maturing field.
Research in agile has addressed a number of specific topic areas highlighted in several systematic literature reviews; these include reviews of the state of research, synthesis of research themes and identification of challenges. For example, a 2011 review of agile global software engineering literature [14] concluded that research was increasing but there was a predominance of industrial experience reports which report on modifications to practice based on contextual factors. A 2014 review of
communication in agile global software development identified seven research themes, and reported that about half of the chosen papers were experience reports [15]. The predominance of industrial experience reports in the agile literature has been noted by a number of authors [14-16]. Experience reports are extremely useful as they tell a contextualised story about an organisation, and in doing so describe practice, suggest practical techniques, and provide guidelines. However, there are limitations to this type of literature. They rarely use theory or try to develop a deeper understanding of the phenomena and situations they report on. They also usually tell positive stories of problems solved rather than describing persistent difficulties, worsening situations or failures. As a result they provide snapshots of successful practice, but almost certainly do not represent the state-of-the-practice. Indeed, few papers describe major unresolved problems or failures, resulting in a general publication bias towards only reporting success. Since many lessons are learnt in response to mistakes and failures, this bias, although unsurprising, is not helpful. This problem is not specific to the agile area, and has been noted in other disciplines [17].
Industrial experience reports have limitations but so does academic research, in particular, guaranteeing its relevance to practice. To address this limitation, other approaches have been used to identify research questions. For example, during a panel discussion at XP2010 (http://xp2010.org/) practitioners said researchers did not always address questions they wanted answering. During the rest of the conference delegates were asked to suggest and vote on topics that should be researched, in order to create a prioritised list of 'burning issues' for the agile research community [18] (see Table 5). During an XP2013 workshop Dingsøyr and Moe elicited and ranked eight research challenge topics for large-scale agile software development [19] from a mixture of practitioners and academics. Topics were, in ranked order, Inter-team coordination; Large project organisation/portfolio management; Release planning and architecture; Scaling agile practices; Customer collaboration; Large-scale agile transformation; Knowledge sharing and improvement and Agile contracts. Taking this approach to identifying research questions is a more direct way of ensuring research relevance, but how relevant the challenges are to practice depends on who is suggesting them.
The work presented in this paper explores challenges with agile development. Several attempts have been made to categorise challenges faced in the application of agile. Gandomani et al. [20] identified four categories of challenges faced by organisations when migrating to agile: organisation and management; people; process; and tools related challenges. This classification is based solely on existing literature. Using grounded theory, van Waardenburg and van Vliet [21] investigated the challenges caused by the co-existence of agile methods and plan-driven development, and discussed mitigation strategies for those challenges. This work is based on 21 interviews with agile practitioners from two large enterprises in the Netherlands. They organised the challenges under two categories:
'Increased landscape complexity' and 'Lack of business involvement'. The paper exposes consequences of the former category as 'Problems with communication', 'Dependent definition of done', and 'Difficulties to create change'. The consequences of the latter category are 'Problems with requirements gathering', 'Slow reaction to change', 'Problems with requirements prioritisation' and 'Limited feedback from the business'. For both challenge categories, mitigation strategies were proposed that focused on communication between the agile and traditional part of the organisation, and the communication timing.
Conboy et al [22] identified nine themes for challenges experienced by 17 large multinational organisations using agile methods. The research focused on challenges encountered by people involved in the agile development process. The themes were: developer fear as a result of the transparency of skill deficiencies; the need for developers to be “master of all trades”; dependency on social skills; deficiency of developers' business knowledge; the need to understand and learn values and principles of agile, not just the practices; lack of developer motivation to use agile methods; implications of devolved decision-making; the need for agile compliant performance evaluation; and absence of specific recruitment policies and absence of trained IT graduates for agile.
These related works are discussed further in sections 4.1 and 4.3 in the light of our findings.
3 Investigating Practitioner Challenges
This section presents three studies undertaken to investigate agile practitioners' challenges. Section 3.1 reports the Challenge Wall study, undertaken to answer RQ1, what challenges do practitioners face? Section 3.2 reports a Case Study investigation of challenges experienced by the London office of a large financial organisation, undertaken to answer RQ2, how do practitioner challenges manifest themselves in an organisational setting? Section 3.3 reports an online Survey developed to investigate further the Case Study findings, and to further explore RQ2.
3.1 Challenge Wall
This section reports how a Challenge Wall (Figure 1) was used to collect challenges from agile practitioners and presents a thematic analysis of the findings.
3.1.1 Approach
A Challenge Wall was deployed at five Agile Conferences and events between October 2013 and October 2014: the Agile Business Conference, London, October 2013 (www.agileconference.org); DSDM Members Day, Manchester, November 2013 (www.dsdm.org); XP, Rome, May 2014 (www.xp2014.org); AgileNorth, Preston, June 2014 (www.agilenorth.org); and the Agile Business Conference, London,
October 2014 (www.agileconference.org). Attendees were mostly agile practitioners and business representatives, except for the XP Conference in 2014 that was attended by a mixture of practitioners and academics. Four of the events were based in the UK; the XP Conference was based in Rome. Practitioner and business attendees represented a range of organisational roles. The job roles of attendees at the 2013 and 2014 ABC Conferences and the DSDM Manchester Members Day 2014, the only events about which we had access to such data, are shown in Figure 2.
The Challenge Wall was set up by positioning a poster in a visible place in the conference or event venue and providing a stack of pens and small blank challenge cards. Delegates were encouraged to fill out the cards anonymously (Figure 3), and these were then attached to the wall next to the poster for others to read. Delegates wrote one challenge per card, and could fill in as many cards as they wished. The Challenge Wall gradually grew throughout the event, and became a trigger for discussions between delegates and the authors about the nature and context of the challenges identified.
A thematic analysis approach was used for data analysis [23]. Three researchers each completed an independent thematic analysis of the challenges. These analyses were done separately, with no initial list of themes, and each researcher used their own individual approach: two researchers worked by identifying codes first, which they then grouped into subthemes, while the third focused solely on identifying subthemes. Two of the researchers then reviewed the independent lists. They verbally clarified the meaning of the descriptors used by the other researchers, and created a merged list of subthemes. Merging started by looking at subtheme names and their associated challenges, and went on to identify high-level themes, which were used as grouping mechanisms for the more detailed subthemes. Discussion focussed on whether to merge or split subthemes, finding appropriate names for subthemes, and identifying broad themes at the right level of granularity. For example, ‘Culture’ and ‘Changing Mindsets/Culture’ had been identified as subthemes by two of the independent analysts, but a distinction between organisational culture and national culture had not been made, so the reviewers created these as themes. They regrouped the data into nine themes and 27 subthemes. This set of themes was revised again through discussions between all researchers using Skype calls and emails, and the final set of seven themes and 27 subthemes was agreed. For example, through group discussion at this final stage it was decided that ‘Organisational Culture’ and ‘National Culture’ would be more appropriate as subthemes grouped under the broader theme of ‘Culture’.
Figure 3: A Challenge Card
3.1.2 Findings
One hundred and ninety-four challenge cards were collected. Four were disregarded because they were inappropriate or too difficult to understand. The thematic analysis described above grouped the remaining 190 challenges into the seven themes and 27 subthemes. Table 1 shows the themes and subthemes along with a description and an example challenge from each subtheme (excerpts of collected data are shown in italics). The table is ordered, largest first, by the number of challenges in the themes and subthemes, with the number of challenges in each group provided in brackets.
The thematic analysis of the Challenge Wall data highlighted two striking groups containing a high number of challenges: Claims and Limitations with 46 challenges (with the dominance of misconceptions, shortcomings, and hype), and Organisation with 44 challenges (with most highlighting business concerns, management buy-in and understanding, stakeholder commitment and engagement and a concern for agile within a non-agile environment). Culture, Teams and Sustainability were medium-sized groupings. Scaling and Value were the two smallest thematic groups. Value only contained seven challenge cards.
The challenge statements were diverse, as shown through the identification of 27 subthemes. Some challenge statements were brief, for example:
‘Getting started’; and
‘Tailoring’.
Others were lengthy, for example:
‘Agile is just an umbrella term. So agile itself cannot be a problem. Adoption of agile approaches depends on understanding of the many techniques and ability or willingness to adopt an appropriate set of them. Flexibility or lack of it is the main problem’.
There was some repetition of content, for example four agile in a non-agile environment challenges were almost identical:
‘Using agile in waterfall environments’; and
‘It is agile but business still have a waterfall thinking’; and
‘It is agile but most of the business are not’; and
‘It is agile, rest of business isn’t’.
Nevertheless there was a lot of variety. Challenges were expressed at different granularities, so some were quite high level, for example:
‘Fully engaging the business’.
Others, however, were more detailed and at a lower level of granularity, for example:
‘The system team having a different opinion of “done criteria” compared to a sub-team. The sub-team counted “done” as working in the system, whereas the system-team only called each “unit” as “done” prior to integration. We (the sub-team) felt this was naïve.’
Some of the challenge statements covered many topics, and were therefore difficult to classify, for example a challenge such as:
“Changing from a command and control/mechanistic worldview to a future of autonomous, self-managed agents in a systemic organisation is too much if the system does not change itself – including leaders”.
This was classified under the subtheme Organisational culture in the Culture theme but could also have been classified under Commitment/engagement in the Organisation theme or Team practices in the Teams theme or Process improvement in the Sustainability theme.
Table 1: Themes and subthemes of challenges collected on the Challenge Wall (example challenges were not recoverable for every subtheme)

<table>
<thead>
<tr><th>Main Theme</th><th>Subtheme</th><th>Description of Subtheme</th><th>Example Challenge</th></tr>
</thead>
<tbody>
<tr><td>1. Claims and Limitations (46)</td><td>Misconceptions (23)</td><td>The multi-faceted aspects of agile are open to many different interpretations</td><td></td></tr>
<tr><td></td><td>Shortcomings (14)</td><td>Areas where information is sparse, limited or where methods are used inappropriately</td><td></td></tr>
<tr><td></td><td>Hype (8)</td><td>Misleading or excessive claims about agile approaches</td><td></td></tr>
<tr><td></td><td>Failure (1)</td><td>Only limited evidence is available about failures</td><td></td></tr>
<tr><td>2. Organisation (44)</td><td>Business & IT transformation (11)</td><td>Requires business and IT to collaborate to establish agility throughout the entire value chain</td><td></td></tr>
<tr><td></td><td>Management buy-in & understanding (10)</td><td>Traditional management may see agile as just another IT method that can be implemented and structured to ‘fit’ existing organisational norms</td><td></td></tr>
<tr><td></td><td>Agile in a non-agile environment (10)</td><td>Teams successfully adopt agile but operate in an environment where wider organisational structures are more traditional</td><td></td></tr>
<tr><td></td><td>Commitment/Engagement (7)</td><td>Success can be challenged by lack of awareness or commitment from other stakeholders</td><td></td></tr>
<tr><td></td><td>Adoption (4)</td><td>Concerns around ‘how to’ introduce agile ways of working either into teams or into the wider organisation</td><td></td></tr>
<tr><td></td><td>Fear (2)</td><td>Fear of change and the unknown as agile appears less structured with people ‘doing their own thing’ whilst using a whole new set of jargon</td><td></td></tr>
<tr><td>3. Culture (31)</td><td>Organisational culture (13)</td><td>The organisation requires a philosophical belief in people over process</td><td></td></tr>
<tr><td></td><td>Changing mindsets (7)</td><td>Agile is more than a set of practices used by IT requiring wide ranging change to work patterns</td><td></td></tr>
<tr><td></td><td>National culture (5)</td><td>Differences in national culture, particularly between East and West, compound issues with organisational culture</td><td></td></tr>
<tr><td></td><td>Distributed teams (5)</td><td>Business realities are often contrary to the agile need for co-located teams, with teams distributed across the UK, Europe or worldwide</td><td>‘It requires co-location in a digital world, where travel is too expensive’</td></tr>
<tr><td></td><td>Trust (1)</td><td>Providing a safe environment to develop and innovate</td><td>‘What is the cost for not investing in trust?’</td></tr>
<tr><td>4. Teams (24)</td><td>Team practices (11)</td><td>Uncertainty and perhaps lack of training in specific practices or techniques</td><td>‘How to estimate/ better estimate the effort to support planning?’</td></tr>
<tr><td></td><td>Leadership (5)</td><td>Traditional project management approaches of ‘command and control’ need to be replaced by a facilitation style of leadership</td><td>‘That the manifesto lacks Leadership over Management’</td></tr>
<tr><td></td><td>Finding good people (4)</td><td>Agile requires skilled, self-directed and motivated team players</td><td>‘Getting the right people interested- decision makers and users’</td></tr>
<tr><td></td><td>Individual motivation (4)</td><td>Agile philosophies are often at odds with organisational reward structures that value individuals</td><td>‘It sometimes marginalises lonely problem solvers’</td></tr>
<tr><td>5. Sustainability (23)</td><td>Process improvement (15)</td><td>Once adopted, agile requires on-going change and commitment in order to become sustainable and embedded within teams and the organisation</td><td>‘If it is codified it becomes “bureaucratic” and if not it is too diverse to be taken seriously’</td></tr>
<tr><td></td><td>Documentation (4)</td><td>Tensions arise when management sees documentation as a way to demonstrate control whilst developers focus on code over documents</td><td>‘That it has become an excuse not to do any documentation or planning beyond the sprint and product backlog’</td></tr>
<tr><td></td><td>Contracts (3)</td><td>Standard contracts require detailed upfront specifications that are contrary to the evolving approach of agile</td><td>‘Some think they need a contract’</td></tr>
<tr><td></td><td>Knowledge sharing (1)</td><td>Needs a positive learning environment to motivate individual commitment in order to establish effective knowledge sharing</td><td>‘We innovate but we don’t really share innovations’</td></tr>
<tr><td>6. Scaling (15)</td><td>Large projects (10)</td><td>Working at programme level where team practices need to scale across multiple teams in large complex projects</td><td>‘Agility in large projects effecting several applications, platforms, techniques’</td></tr>
<tr><td></td><td>Governance (5)</td><td>Traditional mechanisms that ensure projects achieve regulatory or legal compliance are often process driven and bureaucratic</td><td>‘Have not yet found any clear view on how the “governance” at Business Case level works or could work in relation to outcomes, costs and benefits’</td></tr>
<tr><td>7. Value (7)</td><td>Business value (4)</td><td>To counter criticism of waterfall approaches where organisations tended to focus on process rather than product, agile projects must demonstrate value</td><td>‘Ensuring that projected value is achieved’</td></tr>
<tr><td></td><td>Measurement (3)</td><td>Many organisations use wide ranging metrics but these are not always appropriate or necessary to agile projects</td><td>‘The lack of well formulated and defined measurement practices’</td></tr>
</tbody>
</table>
3.1.3 Reliability and Limitations
We follow Lincoln and Guba [24] in using credibility, transferability, dependability, and confirmability to discuss the reliability and validity of qualitative research. Credibility assesses whether the study measures what is intended [25]. It involves adopting appropriate methods, developing familiarity with the research context and establishing that results are believable from the perspective of the research participants. Transferability refers to the applicability of study results to other situations [25]. It is addressed through accurate descriptions of the research context and assumptions brought to the research. Dependability assesses whether the same results would be achieved if the study were repeated. It is addressed by reporting detailed descriptions of the research process to enable future researchers to repeat the work [25]. Confirmability assesses whether findings reflect research participants’ experiences not the preferences of the researchers, and is related to the issue of objectivity [25]. This is addressed through the corroborations of research analysis by more than one researcher and recognition of the study shortcomings. These terms are used below and also in the other Reliability and Limitations sections: 3.2.4 and 3.3.3.
Credibility, transferability and dependability of the Challenge Wall findings are enhanced by the detailed descriptions of the data collection approach and context provided in this section. Collecting data through face-to-face contact with respondents at several events enhances credibility as the researchers are able to confirm that challenges were a genuine expression of respondents’ views. There were several limitations to the data collection approach. Few software developers or testers attend such events therefore their views were not well represented in the dataset. The data capture occurred at conferences, and as participants were outside their normal working environment it is possible that this influenced the challenges captured. The challenge cards were filled in anonymously so they could not be linked to job roles. The events at which data was collected were all 'pro-agile', and respondent sampling was potentially skewed by this context. The data set only provides an insight into challenges at a particular time in a continually evolving landscape of challenges.
Confirmability of the analysis was strengthened by the researchers using a process of first working independently, and then collaboratively, to develop an understanding of the data. There were some disagreements between the researchers about theming choices, resulting in the need for negotiated decisions. The credibility of the analysis was not checked with participants, as data was collected anonymously.
3.2 Case Study
The challenge cards presented in the previous section provide an overview of the landscape of practitioner challenges, but they cannot capture any detail or context. Investigating challenges in more depth requires a different approach that allows the context and detail to be accounted for. A case study is “an empirical method aimed at investigating contemporary phenomena in their context” [26], and hence is appropriate for this purpose. Specifically, as the aim is to investigate practitioner challenges in context, an exploratory case study approach was taken.
Following an email sent to the DSDM members’ mailing list asking for companies facing agile challenges to get in touch with us, the research team were approached in March 2013 by the London-based office of a large multinational organisation in the finance sector (whom we shall call BigBank). Initially, the challenge they were facing was presented as one of reporting from the agile teams in the London Office to the Head Office, which was based in a different country. Over the course of the case study, it became clear that there were many different challenges being faced by staff in different roles, and working with the London Office staff, the essence of the challenge became characterised as “Agile projects in a non-agile environment”.
3.2.1 Context
About two years before the case study started, the London office of BigBank decided to adopt the Dynamic Systems Development Method (DSDM). DSDM is an end-to-end framework for agile project management and delivery, whose underlying philosophy is to align projects with strategic business goals and deliver early benefits. DSDM does not prescribe any specific engineering practices and so the teams adopted a Scrum-based approach working within timeboxes (the DSDM term for sprints) and using a Prioritised Requirements List (the DSDM term for a product backlog). The DSDM framework also provides guidance for Project Management Office (PMO) management. The PMO is a department, commonly found in large organisations, whose function is to facilitate IT project management across the whole organisation by making strategic decisions about projects, programmes and portfolios and ensuring repeatable processes. By the time of the study the London office had adopted DSDM in its entirety; however, its use of DSDM was not yet mature.
Influential in the decision to adopt DSDM was the fact that several projects had failed in recent years, which had left uncertainty in the relationship between the Head Office and the London office. This background exacerbated the challenge and brought it into sharp focus. BigBank chose DSDM because they were operating in a regulated environment, and they needed to ensure that the approach they chose provided structure and governance mechanisms as well as agile processes. The agile transition was supported by management, and was achieved by a mix of employee training and help from consultants. Agile culture was gradually embraced and software was being delivered in regular increments. However, their projects were approved, budgeted and monitored by Head Office, who continued to use a waterfall approach and had a hierarchical structure.
It soon became apparent that the use of agile practices was starting to cause problems between the London office and the Head Office. Representatives from the Head Office appeared sceptical about the agile approach. They always wanted to know more detail and asked questions which seemed to the London office to be irrelevant, or asked for an unnecessary amount of detail. It seemed that they suspected the London Office was not really in control of their projects. As our main contact at BigBank said "There are challenges when reporting to <Head Office>. They expect detailed plans up front, and seem to believe that we are ‘making it up as we go along’." The London office tried to address these concerns by training Head Office staff in agile practices and the DSDM framework, mapping the agile process to the waterfall governance process, and using a range of different reporting styles and approaches to communication. However, concerns and challenges remained.
3.2.2 Approach
The aim of the case study was to address RQ2: How do practitioner challenges manifest themselves in an organisational setting? To this end, the investigation focused on exploring in depth the challenge being faced by BigBank, and understanding it in context.
Over the course of three months the research team regularly visited the company and collected data using interviews, group meetings and document review. Initially, the researchers were presented with a wide range of issues, which were gradually reviewed and refined. To scope the investigation, the focus was narrowed onto one project (which we shall call FireFly). FireFly’s aim was to migrate existing systems from a mainframe environment to a framework more commonly used within the domain. Data collection was undertaken when FireFly was in the last quarter of its planned timescale.
Table 2: List of job roles of Case Study interviewees
<table>
<thead>
<tr>
<th>Job roles interviewed</th>
<th>Interview questions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Manager of Project Management Office (PMO) (the gatekeeper)</td>
<td>Semi-structured interview, based around the following questions</td>
</tr>
<tr>
<td>Head of Project Management Office (PMO)</td>
<td>1. From your experience, what are the biggest problems with reporting between London and Head Office?</td>
</tr>
<tr>
<td>Agile Coach</td>
<td>2. What feedback do you get internally (in London)?</td>
</tr>
<tr>
<td>Visionary for FireFly</td>
<td>3. What feedback do you get from Head Office?</td>
</tr>
<tr>
<td>Project Manager for FireFly</td>
<td>4. What is your role in the process of reporting?</td>
</tr>
<tr>
<td>Head of Regulatory Department</td>
<td>5. Could you please tell us a little bit about your background?</td>
</tr>
<tr>
<td>Deputy Head of Technology Department</td>
<td>6. Have you worked with agile teams before?</td>
</tr>
<tr>
<td>FireFly Sponsor, interface with senior management in Head Office</td>
<td>7. Is there anything else you’d like to add?</td>
</tr>
<tr>
<td>Head of Risk Regulation and Data Technology (a developer role)</td>
<td></td>
</tr>
<tr>
<td>Head of Internal Audit</td>
<td></td>
</tr>
</tbody>
</table>
The main data collection instrument was a set of semi-structured interviews, conducted during August 2013, with the group meetings and document review providing background and supporting information. Nine interviews were conducted with staff members who were identified by our contact at BigBank as being particularly impacted by the challenge initially identified. Interviews ranged between 20 minutes and an hour. The aim was to get a wide perspective on how the challenge manifested itself within the organisation, directly addressing RQ2. Initially, developers were not represented by any of the interviewees and one was added at our request. However, his was the shortest interview as he was not impacted by the challenges we were investigating. The majority of communication with Head Office was conducted through project managers and the PMO, leaving developers to focus on their daily tasks. Protecting developers from this kind of challenge is a fundamental practice for agile software development. The ten roles represented by the interviewees and the questions asked during the interview are summarised in Table 2; Firefly’s visionary and its project manager were interviewed together, at their request.
Four researchers were involved in data collection and analysis (three of the co-authors and one other researcher). The approach of affinity diagramming [27] was adopted for analysis (see Figure 4). Following this approach, each researcher individually extracted a list of sub-challenges from the data collected and a one-day analysis session was held in order to discuss and consolidate the issues raised. Related issues were collected into groups and the groups were described according to the underlying sub-challenge.
Figure 4: Affinity diagram of Case study sub-challenges (some labels are greyed out to avoid identification)
These sub-challenges were then considered in the context of the themes and subthemes identified through the Challenge Wall study.
3.2.3 Findings
The four key sub-challenges that emerged through the affinity diagram analysis were:
1. How to communicate effectively when Head Office management values formal, written communication while the London Office agile teams value informal, verbal communication and minimal documentation.
2. How to demonstrate control when Head Office management expects certainty of time, budget, and specifications from the beginning of the project but agile emphasises the delivery of fit-for-purpose, business value.
3. How to overcome Head Office management perception of re-prioritisation and de-scoping as lack of control.
4. How to respond to Head Office management who expect consistency in reporting right from the start even when the use of agile is still maturing.
In order to verify these sub-challenges and the main challenge characterisation, these findings were discussed with our collaborators at BigBank in December 2013. They welcomed the analysis and agreed with the findings. The fact that the original challenge evolved into many different sub-challenges was not a surprise to them, indicating that they recognised the complexity of the situation. In addition, it was clear that they perceived two parts to the first challenge: how to communicate effectively, and what needs to be reported. Building on this analysis and discussion, the main challenge widened its focus from only reporting, and became characterised as “Agile projects in a non-agile environment”.
According to the categories presented in Table 1, this is a subtheme of the Organisation theme, but the challenges were also linked to other themes and subthemes. To investigate this further, these sub-challenges are considered in the context of the other themes and subthemes in Table 1.
The first challenge can be linked to the Organisational Culture subtheme of Culture. The Head Office wanted detailed, formal, hierarchical communication that was fully documented, but the London office preferred to use team-based, face-to-face, verbal communication to discuss detail and to produce minimal, ‘just-enough’ documentation for record-keeping. The documentation overhead demanded by Head Office took additional time for the London team. It undermined their agile approach and indicated a lack of trust. Head Office worked in a different language to the London office so some communication needed to be translated. This took time and introduced the potential for misunderstandings, which in turn sometimes led to additional queries for more information. Also, at Head Office employees rotated positions every two years, which involved not only a role change but also a Department change. This increased the challenge of establishing a mutual understanding of agile.
The second challenge links to the Business Value subtheme, of the Value theme. As the Head Office environment did not fully accept and recognise the agile principles that the London office were using it was difficult to demonstrate progress and achievement of business value. From Head Office's point of view ‘value’ was associated with projects being on-time, on-budget and delivered according to specification. In contrast, the London office focused on delivering fit-for-purpose products that provided business value. The London office wanted to demonstrate control without retrofitting agile progress reports into templates designed for a waterfall environment, find the right level of detail for measuring and reporting progress and ensure that the information provided was interpreted correctly.
The third and fourth challenges link to the Organisation theme: the third to Management buy-in and understanding, and the fourth to Adoption. The London office was evolving their agile practice to suit their environment, and their approach was not yet mature. Head Office needed educating about the new approach. The Project Management Office (PMO) was still building up their agile knowledge base and introducing new processes. The reporting for different projects was not yet fully consistent and project report templates were still evolving. This compounded the perceived lack of full control and the likelihood that inconsistencies would be seen as problems. Because of the lack of buy-in and understanding, Head Office considered some agile techniques, such as re-prioritisation and de-scoping, to be indicators of a lack of control rather than legitimate processes. Even small changes to a project’s scope were considered to indicate a lack of control and hence failure to deliver.
The challenges found in the context of BigBank added rich detail to one of the subthemes in Table 1, but they also highlighted the interconnectedness of the themes identified from the Challenge Wall.
3.2.4 Reliability and Limitations
Credibility, transferability and dependability of the findings are related to the data collection approach and context, and the detailed descriptions provided of them. Limitations were the short period of time over which the study was conducted, and the fact that it was undertaken in one organisation. Many of the details described were specific to the organisation itself, as no other organisation would experience exactly the same set of conditions. The work was only done with staff at the London office and there was no access to Head Office staff.
Confirmability was achieved through four researchers working on and discussing the analysis of the data. The analysis was presented to and discussed with several members of BigBank towards the end of the study period to ensure credibility by research participants.
3.3 Online Survey
An online Survey was developed to investigate further the Case Study findings. Specifically the aim was to understand whether others experienced the sub-challenges, and if there were other sub-challenges related to the main challenge of running agile projects in a non-agile environment.
3.3.1 Approach
The Survey asked for the respondent’s role/job, which agile methods the respondent’s organisation had adopted and the extent to which that organisation had adopted agile. The Survey went on to ask whether respondents had experienced the sub-challenges from the Case Study. These were modified from their previous form to better reflect the findings presented in section 3.2, and to be suitable for a survey format. The questions are reproduced in Table 3. Responses were elicited in the form of 'Yes', 'No', and 'Don't know'. The 'Don't know' option was included to capture situations in which respondents could not answer the question because their role meant that they did not have access to that information. The survey was aimed at respondents who were working in the context of "Agile projects in a non-agile environment", but not in exactly the same setting as BigBank. The Survey was developed iteratively and piloted by some staff from BigBank before being released online.
An invitation to complete the Survey was sent to more than 20 agile forums/message boards, LinkedIn and Meetup groups. The forums included Yahoo groups on specific methods (Extreme Programming, Scrum Development, Agile Testing, Agile Project Management, Agile Usability and Lean Agile) and local and international forums (Agile Scotland, AgileNorth, Agile Staffordshire, Agile Wales, BCS Agile, Agile Alliance and Scrum Alliance); the LinkedIn groups were mostly of practitioners (Scrum Practitioners, Agile Coaching, DSDM, Agile Project Management, Agile Project Managers, Lean Agile Software Development Community, Agile .net Practitioners, Agile Coaching, Agile Managers Forum, Agile Project and Program Portfolio Management); the Meetup groups reached were worldwide (Agile Sydney, Agile London Discussion Group, Agile Project Management, Agile Denver, The Chicago Agile Methodology Group, Agile Auckland, Agile-evangelists, Agile UX and Agile Testing). The Survey was conducted through SurveyMonkey over a period of nine months (June 2014 to February 2015).
Data was analysed using summary statistics and thematic analysis. Answers to closed questions were counted and collated as percentages. Answers to the open question were classified using the thematic classification scheme developed from the Challenge Wall data. Three researchers were involved in the data analysis.
3.3.2 Findings
There were 181 distinct responses to the Survey. The roles of respondents are shown in Figure 5. There was considerable variation in responses describing which agile methods were used. Scrum was used by 43% of respondents, a mix of Scrum and Kanban by 15%, DSDM by 6%, and a mix of Scrum and XP by 4%. Of the other responses, 14% mentioned using an ad hoc mix of methods including examples such as ‘Scrum/Kanban/XP’ and ‘Scrum/SAFe’; 4% used a single method including Kanban (2), XP (1), AWD (1), Lean (1), SAMBA (1), SAFe (1); and 13% did not answer the question. Scrum was the most widely mentioned method, included in 69% of respondents’ answers. When asked the extent to which their organisation had adopted agile, 76% stated that their organisation had only partially adopted agile, 14% stated their organisation had fully adopted agile and 10% stated their organisation had not adopted agile at all.
Figure 5: Job roles of survey respondents
Summary statistics shown in the three right hand columns of Table 3 show that more than 50% of respondents had faced one or more of the “agile projects in a non-agile environment” sub-challenges identified through the Case Study, providing evidence that they are experienced in a range of settings.
Table 3: Have you experienced any of the following challenges while working with agile methods that you are currently using?
<table>
<thead>
<tr>
<th>Challenge</th>
<th>Yes</th>
<th>No</th>
<th>Don’t know</th>
</tr>
</thead>
<tbody>
<tr>
<td>Agile teams struggle to identify how to communicate their progress to management</td>
<td>51.53%</td>
<td>41.10%</td>
<td>7.36%</td>
</tr>
<tr>
<td>Agile teams struggle to identify what needs to be reported to management</td>
<td>53.61%</td>
<td>36.75%</td>
<td>9.64%</td>
</tr>
<tr>
<td>Management expects certainty of time, budget, and specifications from the beginning of the project</td>
<td>75.31%</td>
<td>18.52%</td>
<td>6.17%</td>
</tr>
<tr>
<td>Re-prioritisation and de-scoping is perceived by management as lack of control</td>
<td>50.62%</td>
<td>38.27%</td>
<td>11.11%</td>
</tr>
<tr>
<td>Management expects consistency in reporting right from the start even when the teams are still in the process of transitioning to Agile</td>
<td>53.99%</td>
<td>33.13%</td>
<td>12.88%</td>
</tr>
<tr>
<td>Other (please specify)</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Thirty-six responses were given to the ‘Other (please specify)’ part of the question. These identified ‘them and us’ issues, relating to process, mandatory procedures, micro-management, expectations, documentation, reporting, and lack of experience and understanding of agile. Although these challenges were identified in the context of ‘agile in a non-agile environment’, they can also be classified within different themes and subthemes (see Table 4).
Table 4: Challenges identified in the context of ‘agile in a non-agile environment’ can be classified in different themes and subthemes
<table>
<thead>
<tr><th>Main Theme</th><th>Subtheme</th><th>Example Challenge</th></tr>
</thead>
<tbody>
<tr><td>Claims and Limitations</td><td>Misconceptions</td><td>‘Management uses the term “agile” to account for pushing the teams harder, even towards constant overtime’</td></tr>
<tr><td>Organisation</td><td>Business & IT transformation</td><td>‘Setting proper expectations within other parts of the organization’</td></tr>
<tr><td></td><td>Management buy-in & understanding</td><td>‘During agile adoption, management occasionally micromanages until the Scrum teams have demonstrated a consistent delivery process. Once teams hit their strides with consistent velocities, Management tends to better understand and appreciate agile philosophies’</td></tr>
<tr><td></td><td>Agile in a non-agile environment</td><td>‘Communication requirements for agile team not met by management/non-agile teams’</td></tr>
<tr><td></td><td>Commitment/Engagement</td><td>‘Customers do not have agile experience’</td></tr>
<tr><td></td><td>Adoption</td><td>‘Nobody knows agile except for me. One of my challenges is bringing a large number of people up to speed not just with the “rules” but also the “why” so that they can make pragmatic decisions’</td></tr>
<tr><td>Culture</td><td>Organisational culture</td><td>‘People are uncomfortable with the team members being accountable for the outcomes and results’</td></tr>
<tr><td></td><td>Changing mindsets</td><td>‘Parts of the team still want to work according to the Waterfall process’</td></tr>
<tr><td>Sustainability</td><td>Process improvement</td><td>‘Inter-sprint priority changes’</td></tr>
<tr><td></td><td>Contracts</td><td>‘We struggle to produce documentation that is part of the contract’</td></tr>
</tbody>
</table>
These findings illustrate how challenges are not only multi-faceted, but also complex and interrelated.
3.4.3 Reliability and Limitations
Credibility, transferability and dependability of the findings are related to the data collection approach and context, and to the detailed descriptions provided of them. Limitations come from the convenience sampling approach used, the use of an online survey, which made it impossible to check the credentials of respondents or the accuracy of the answers, and the lack of contextual information gathered about respondents. Confirmability was established by the analysis being undertaken by two researchers; however, this could not be checked with participants as data was collected anonymously.
4 Discussion
The work presented in the previous section focussed on answering two research questions. This section returns to those research questions and discusses the implications of our findings. Section 4.1 summarises findings in answer to RQ1 and discusses how the landscape of challenges has changed over time by comparing our findings with previous research on agile practitioners’ challenges. Section 4.2 summarises findings in answer to RQ2 and discusses why challenges are difficult to characterise by considering thematic perspectives and the systemic nature of challenges in organisational settings. Section 4.3 compares the landscape of challenges identified in this paper with topics investigated in the research literature and discusses the implications for research that is relevant to practice. Section 4.4 discusses overall limitations and generalisability.
4.1 What Challenges Do Agile Practitioners Face?
In answer to RQ1, ‘What challenges do agile practitioners face?’, 27 subthemes and seven themes were identified. Practitioners face a wide range of challenges, which can be expressed at different levels of granularity, and many of which are multi-faceted and interlinked. The Challenge Wall data provided an opportunistic snapshot of the agile practitioner community’s current concerns. But change is inevitable, and we now discuss how practitioner concerns have evolved over time by comparing the Challenge Wall data, collected from 2013 to 2015, with challenges collected at XP2010 and reported by Freudenberg and Sharp [18] (Table 5).
Table 5: Comparison of themes from this study with practitioners’ top ten research questions from [18]
<table>
<thead>
<tr>
<th>Freudenberg and Sharp top ten research questions (numbers indicate ranking by size)</th>
<th>Themes/Subthemes (numbers indicate ranking by size)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Agile and large projects</td>
<td>6. Scaling</td>
</tr>
<tr>
<td>3. Do teams really need to always be colocated to collaborate effectively?</td>
<td>3. Culture/Distributed teams</td>
</tr>
<tr>
<td>4. Architecture and agile—how much design is enough for different classes of problem?</td>
<td>6. Scaling/Large projects</td>
</tr>
<tr>
<td>5. Hard facts on costs of distribution (in $,£,€ and so on)</td>
<td>3. Culture/Distributed teams</td>
</tr>
<tr>
<td>6. The correlation between release length and success rate</td>
<td>1. Claims and Limitations/Shortcomings; Sustainability/Process improvement</td>
</tr>
<tr>
<td>7. What metrics can we use with minimal side effects?</td>
<td>7. Value/Measurement</td>
</tr>
<tr>
<td>8. Distributed agile and trust—what happens around 8–12 weeks?</td>
<td>3. Culture/Distributed teams</td>
</tr>
<tr>
<td>9. Statistics and data about how much money/time is saved by agile</td>
<td>(Time mentioned in several challenges from different themes)</td>
</tr>
<tr>
<td>10. Sociological studies—what were the personalities in successful/failed agile teams?</td>
<td>4. Teams/ Finding good people</td>
</tr>
</tbody>
</table>
Some research questions from 2010 are still a challenge today although they have changed emphasis, towards organisational concerns and away from internal agile team matters. For example, scaling challenges now relate to organisations, rather than large projects:
‘How do you scale up to a large project over many months or even years?’;
‘Scaling due to complexity (rather than large projects)’; and
‘Scaling across a large enterprise/companies.’
The challenge of metrics is now less about side effects and more about what management wants and how to measure value:
‘Agile is about measuring value, but management want efficiency, defect metrics etc. How to demonstrate team is efficient and improving efficiency?’
‘Lack of focus on business value (and identifying what it means).’
Similar concerns were found in BigBank’s first sub-challenge about communicating effectively when Head Office management valued different kinds of communication than the London Office. Co-location of teams is still an issue (under the Culture theme) but trust has emerged as a key concern.
The research questions relating to specificities of agile practice, such as ‘Architecture and agile—how much design is enough for different classes of problem?’ and ‘The correlation between release length and success rate’, seem to have less importance now. The current challenges around agile practices are not ‘how to’ challenges but misconceptions and shortcomings. For example:
‘It’s hard to make it work with clunky legacy systems’
Or from the Survey findings:
‘Teams and managers breaking out of Scrum methodology & framework to reduce what they perceive as “framework” waste’
The lack of ‘how to’ challenges suggests a move away from understanding agile towards a wider concern of sustainability within more or less hostile environments. The high number of agile challenges in the Organisation theme and of organisation-related ones in other themes reinforces this point.
4.2 How Do Practitioner Challenges Manifest Themselves in Organisational Settings?
In answer to RQ2, ‘How do practitioner challenges manifest themselves in an organisational setting?’, the Case Study revealed a web of detailed, inter-linked sub-challenges. The Survey confirmed that these were experienced in different contexts by over 50% of respondents. Even when a specific challenge area was the focus, other interlinked challenges were uncovered through the study, and we found those challenges could be categorised under other challenge themes.
The challenge analysis shows that some challenges relate to more than one thematic category, and if viewed from a different perspective or investigated in more depth could be categorised differently. For example, BigBank’s main challenge was an example of the subtheme ‘agile projects in a non-agile environment’ in the Organisation theme. However, when this challenge was investigated in depth, four sub-challenges were identified. As discussed at the beginning of Section 3.2.3, these sub-challenges could be linked to four specific sub-themes in the Culture, Value and Organisation themes.
We suggest that the seven themes are more useful if they are seen as ‘perspectives’ or ‘lenses’. These are not distinct boxes into which challenges can be fitted, but standpoints from which the challenges can be viewed. The Organisation theme looks at challenges from the standpoint of organisational structures and processes, whereas the Team theme uses the perspective of groups of individuals. Culture uses the perspective of the ethos and rules that are in play within working environments. Sustainability uses the standpoint of longevity and time, Scaling uses size, and Value uses the perspective of benefit or worth. Claims and Limitations uses the perspective of how agile itself is perceived and how those perceptions themselves create challenges. Seen in this way, some agile challenges are multi-dimensional problems that are experienced simultaneously as business, organisational, social and adaptation problems.
The inter-related themes identified in this study reflect the complex and multi-faceted environments in which software is developed. There is a long tradition of applying systems theory to organisations [28, 29]. Also, several authors have used a Complex Adaptive Systems view to explain agile methods in organisational contexts [30-33]. A system is an interconnected set of elements forming a whole that has properties belonging to the whole. A complex adaptive system uses transformative feedback loops to enable continuous improvement; has emergent and potentially unpredictable behaviour; has distributed rather than centralised control and a shallow rather than a deep structure; and is enhanced by growth and evolution [30]. From this viewpoint an agile adoption should transform the whole organisation [34]. Some practitioners’ challenges, therefore, are multi-dimensional, and either the challenges themselves, or the changes needed to address them, are disruptive to the organisational system within which they sit.
On the one hand this complexity is not surprising, but on the other it provides a salutary lesson for researchers wishing to investigate practitioner challenges. There is a tension between taking account of this complexity and developing specific research questions. It is often the case that resource and funding constraints dictate a tight research focus, but this is at odds with the landscape this work has uncovered. Different kinds of empirical study are used by researchers, which can be broadly categorised as in vivo, in vitro and in silico [35]. The findings here particularly support the need for in vivo case studies, the use of systems approaches to research such as action research [36], soft systems methodology [29] and systems dynamics [37], and recognition that a simplistic view of practitioner challenges is not helpful for practitioners.
4.3 Which Challenges Are Being Addressed by Research?
Findings show that practitioners face a wide range of challenges that change over time and are complex and interlinked. We now look at which challenges are being addressed by research. We compare themes and subthemes from this study with areas identified in Dybå and Dingsøyr’s 2008 systematic literature review [10] (Table 6). We discuss findings from this study in the light of a more recent systematic literature review, from 2014, by Chuang et al [13], which highlights what research has been carried out on specific subthemes. Also relevant is the investigation by van Waardenburg and van Vliet [21] of agile within a plan-driven environment, omitted by Chuang et al [13].
Table 6: Comparison of themes and subthemes from this study with Dybå and Dingsøyr’s topics [10]*
<table>
<thead>
<tr>
<th>Dybå and Dingsøyr topics</th>
<th>Themes from this study</th>
<th>Subthemes from this study</th>
</tr>
</thead>
<tbody>
<tr>
<td>Introduction and adoption</td>
<td>Organisation</td>
<td>Adoption</td>
</tr>
<tr>
<td>Development process</td>
<td>(Not mentioned in our challenge list)</td>
<td></td>
</tr>
<tr>
<td>Knowledge and project management</td>
<td>Sustainability</td>
<td>Knowledge sharing</td>
</tr>
<tr>
<td>Human and social factors</td>
<td>Organisation</td>
<td>Organisational culture</td>
</tr>
<tr>
<td>Collaborative work</td>
<td>Teams</td>
<td>Team practices</td>
</tr>
<tr>
<td>Team characteristics</td>
<td>Teams</td>
<td>Finding good people</td>
</tr>
<tr>
<td>Perceptions of agile</td>
<td>Organisation</td>
<td>Commitment/engagement</td>
</tr>
<tr>
<td>Developer perceptions</td>
<td>Teams</td>
<td>Individual motivation</td>
</tr>
<tr>
<td>Student perceptions</td>
<td>(Not mentioned in our challenge list)</td>
<td></td>
</tr>
<tr>
<td>Comparative studies</td>
<td>Organisation</td>
<td>Management buy-in and understanding</td>
</tr>
<tr>
<td></td>
<td>Sustainability</td>
<td>Process improvement</td>
</tr>
<tr>
<td>Project management</td>
<td>Organisation</td>
<td>Management buy-in and understanding</td>
</tr>
<tr>
<td></td>
<td>Sustainability</td>
<td>Process improvement</td>
</tr>
<tr>
<td>Productivity</td>
<td>Sustainability</td>
<td>(Not mentioned in our challenge list)</td>
</tr>
<tr>
<td>Work practices and job satisfaction</td>
<td>Teams</td>
<td>Team practices</td>
</tr>
</tbody>
</table>
* Dybå and Dingsøyr identify four topic groups and 13 topics, which are mapped to four themes and nine subthemes from this study.
Organisation, Sustainability, Culture and Teams are themes that have been subject to research interest for some time. For example, the topic groupings identified in Dybå and Dingsøyr’s systematic review [10] are reflected in these four themes from our challenge set. Also, van Waardenburg and van Vliet [21] identify ‘lack of business involvement’ as one of their two categories of challenges (the other being ‘increased landscape complexity’), which maps to our subtheme Management buy-in and understanding. Gandomani et al [20] identify ‘organisation and management related’ as one of their challenge themes; this includes challenges that we have classified under Organisation (e.g. ‘transforming [...] from “command and control” to “leadership and collaboration”’, which we classified under Business and IT transformation) but also under Culture (‘Changing mindset of people and their organisational culture’ under Changing mindsets). Conboy et al [22] focus on people-related challenges, some of which relate directly to subthemes; for example, ‘Developer fear caused by transparency of skills deficiencies’ relates to Organisation; ‘lack of agile specific recruitment policies and suitably trained IT graduates’ relates to Finding good people (Teams); and ‘Lack of developer motivation to use agile methods’ relates to Individual motivation (Teams). However, some subthemes within three of these four main themes (Organisation, Sustainability, and Teams), such as Business and IT transformation, Fear, Contracts, Documentation and Leadership, do not seem very evident in the literature searches we have conducted for our industrial partners. This would, however, need to be confirmed by a more up-to-date systematic literature review. The need for business as well as IT transformation was of particular concern in the Challenge Wall data, with 11 challenge cards identifying this topic. Examples of challenges identified included:
'It's take up outside of the delivery function. That it has been coined by IT for IT without the business guys. Which organisational changes are triggered by IT without anybody noticing/caring/managing those changes?'; and
'That everyone seems to think that it starts and stops in software development. How other disciplines blend in is a big challenge'.
A similar example from the Survey is:
'Senior management does not communicate detail of the Agile transformation to middle management.'
BigBank's third challenge ('How to overcome Head Office management perception of re-prioritisation and de-scoping as lack of control') also refers to the need for business transformation.
Scaling is also a topic that has been written about and discussed by practitioners [38, 39]. Chuang et al [13] reference seven papers on scaling, including large or complex projects (searching on 'scale', 'large', 'complex') and none on governance (searching on 'governance', 'PMO'). Dingsøyr and Moe, reporting from an XP2013 workshop at which research challenges in large-scale agile development were discussed, noted that there were few research studies on the topic [19]. A recent systematic literature review on agile governance identified a small but growing research base [40].
The two themes Value, and Claims and Limitations identified in our challenge set are generally less commonly reported in the empirical research literature, although some of the associated subthemes are more researched. In the references in Chuang et al [13] we found no papers on the topic of business value (searching on 'value'); eight discussing measurement ('metrics', 'measurement'); and none on claims or limitations ('misconception', 'shortcoming', 'fail', 'hype', 'lack', 'claim', 'limitation').
Challenge Wall participants identified 46 challenges on the theme of Claims and Limitations. Comments indicate a certain amount of frustration, but range over a number of topics, including:
'Religious approach';
'Everyone wants to reinvent it';
'Throwing away some of the old useful ideas'; and
'The lack of a project management framework for coordinating multiple teams and or work.'
And from the Survey findings:
'Lack of understanding the agile philosophy by Top Management, while strong from development team'
While there is some literature about the concept of agility [6, 41], there is very little about misconceptions, hype and failure. Agile hype is discussed by Janes and Succi [42], who suggest agile has followed the Gartner Hype Cycle and is stuck in the 'Trough of Disillusionment' as a result of what they call the 'guru phenomenon'. In a grounded theory study of agile practitioners, Hoda et al [43] identify agile hype and scepticism as factors that negatively affected customer involvement in agile projects. There are some discussions in the consultant literature [44]; however, we could find no empirical research that specifically focussed on investigating this topic.
There is very little research into agile failure, another subtheme of Claims and Limitations. McAvoy and Butler [45] report the failure of a team to adopt an agile method, identifying ineffective decision-making and actions, which occurred as a result of the team's desire to become more cohesive, as one of the key drivers of the failure. This gap has also been noted by other researchers [13]. It is somewhat surprising, as anecdotally it is not uncommon to hear stories of failure and organisational abandonment of agile.
We also compared our findings with the research areas Dingsøyr et al established in their 2008 preliminary roadmap paper [9] as goals for research achievements by 2015. They indicated some priority areas for research: maturity, coverage, understanding and impact. They assessed that research was having little impact on everyday practice in industry and suggested that “increased application of research methods like action research [36] can be helpful ensuring the relevance, and help provide a larger body of knowledge that can lead to a broader impact on industry.”
Research in the area has grown significantly [13], action research is being used [46, 47] and research may be getting more relevant and is definitely increasing the body of knowledge. However, some perspectives from our challenge list are:
‘That there is no academic research supporting the claimed success’; and,
‘It is isolated from many fields, e.g., a good research could be about bringing information visualisation theory and methods into agile project management in a systematic way.’
This suggests that even if research has been done, the gap between research and what industry wants to know has not yet been bridged.
4.4 Limitations
This paper is an extension of an earlier publication [7]. Additional empirical data from the case study adds depth of analysis, and the survey extends and validates the previous findings. Overall limitations for the generalisability of the research are related to the number and type of practitioners accessed and the venues at which data was collected. In all three studies, the focus of data collection was not specifically at the level of the development team. Managers of various types are better represented than developers, analysts or testers in the Challenge Wall and Survey data, and the challenge investigated in the Case Study did not impact on developers. Much of the work was undertaken in the UK, with a focus on the DSDM community. It is therefore likely that the data collected reflects manager-level perspectives more closely than developer perspectives, and because of this some concerns may not be represented.
This paper presents a snapshot of agile challenges faced by industry at one point in time. Further research with a broader and larger set of participants and additional methods such as workshops, round-table discussions, and focus groups would strengthen the findings.
5 Conclusions
If research is to have real impact in the practitioner community, researchers need to understand and address the challenges that this community faces. The research questions addressed in this paper help to inform this endeavour by mapping the landscape of current practitioner challenges, identifying the persistent yet evolving nature of challenges and illustrating the complexity that emerges when challenges are studied in an organizational context. Specifically, this work shows that:
1. The landscape of practitioner challenges is complex and the challenges are interlinked. Attempts by researchers to address these challenges need to recognise this and to treat the challenges in context rather than in isolation.
2. Some challenge areas have persisted for many years and are simply hard to address successfully. For example, identifying and measuring agile value, and understanding cultural change, are highly contextual and complex. These challenge areas will continue to benefit from further research, because research needs to keep abreast of cultural change.
3. Some challenge areas appear to have persisted for many years, but further analysis shows that their focus has changed, specifically towards organisational concerns. As agile is adopted outside the IT function, some agile challenges become systemic. Further research and collaboration with relevant disciplines needs to increase.
4. Practitioners are less concerned about adopting agile and more concerned about sustaining agile. The sustainability of agile has not been widely researched.
5. Other challenge areas have also not been widely researched, specifically governance, business engagement and transformation, failure, and the impact of claims and limitations. Future research would be beneficial in some of these areas, but it is not the answer to all of them, as some would best be addressed by further or different education and training, e.g. those challenges classified as misconceptions and hype.
6. Some challenge areas appear in the current landscape of challenges but have also been the subject of research studies, such as business and IT transformation, changing mindsets, finding the right people and scaling. This implies that the research being done is not having the expected impact, and suggests a need for further dissemination, or re-packaging, of results to encourage practitioner implementation.
Overall, the picture is one of a complex, multi-faceted and constantly changing landscape of practitioner challenges. In the last five years (between 2010 and 2015) concerns of sustainability and organizational context have grown, and these would benefit from both further research and improvements in knowledge transfer. Furthermore, given the complex nature of the challenge landscape, engaged Case Study research will continue to be of paramount importance.
6 Acknowledgements
We would like to thank the conference attendees, survey respondents, our collaborators at BigBank, and our funders: the Dynamic Systems Development Method (DSDM) Consortium, the Open University and the University of Central Lancashire.
7 References
Structure and Evolution of Package Dependency Networks
Kikas, Riivo; Gousios, Georgios; Dumas, Marlon; Pfahl, Dietmar
DOI: 10.1109/MSR.2017.55
Publication date: 2017
Document version: Accepted author manuscript
Published in: Proceedings of the 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR 2017)
Structure and Evolution of Package Dependency Networks
Riivo Kikas
University of Tartu
Tartu, Estonia
riivokik@ut.ee
Georgios Gousios
Delft University of Technology
Delft, The Netherlands
g.gousios@tudelft.nl
Marlon Dumas
University of Tartu
Tartu, Estonia
marlon.dumas@ut.ee
Dietmar Pfahl
University of Tartu
Tartu, Estonia
dietmar.pfahl@ut.ee
Abstract—Software developers often include available open-source software packages into their projects to minimize redundant effort. However, adding a package to a project can also introduce risks, which can propagate through multiple levels of dependencies. Currently, not much is known about the structure of open-source package ecosystems of popular programming languages and the extent to which transitive bug propagation is possible. This paper analyzes the dependency network structure and evolution of the JavaScript, Ruby, and Rust ecosystems. The reported results reveal significant differences across language ecosystems. The results indicate that the number of transitive dependencies for JavaScript has grown 60% over the last year, suggesting that developers should look more carefully into their dependencies to understand what exactly is included. The study also reveals that vulnerability to a removal of the most popular package is increasing, yet most other packages have a decreasing impact on vulnerability. The findings of this study can inform the development of dependency management tools.
I. INTRODUCTION
Open-source software development has resulted in an abundance of freely available software packages (libraries) that can be used as building blocks for new projects. Usage of existing libraries can increase velocity and reduce the cost of a software project [1]. However, introducing third-party libraries makes a project dependent on them. Dependencies need to be kept up-to-date to prevent exposure to vulnerabilities and bugs [2]. At the same time, bugs can also originate through transitive dependencies [3]. Developers might not have an overview of all the transitive dependencies as they did not include them themselves. Updating dependencies also entails risks, as new versions may break existing functionality or API correctness [4].
In March 2016, a single JavaScript package, left-pad was removed from the central JavaScript package repository npm. The removal caused issues also for projects that depended on it indirectly through transitive dependencies [5]. The left-pad incident illustrates the hidden risks of relying on publicly available packages. A problem with a single package can propagate through multiple levels of dependencies.
Over the years, a number of studies have addressed the question of how to develop maintainable software and how to cope with software evolution challenges [6], [7]. On the other hand, dependency management practices have received little attention, despite being a crucial part of almost all software projects. A recent study of the JavaScript package ecosystem [8] revealed that dependency requirement specifications using semantic versioning with flexible version constraints (e.g. the latest version) are widely used. This practice often leads to a new version of a dependency being used implicitly every time a project is built. Another study of Maven packages [4] revealed that the semantic versioning scheme is not always used properly and breaking changes are also introduced in minor version releases. Implicit updates combined with non-conforming API changes can introduce unexpected behavior or software defects. Considering the left-pad incident and the lack of studies on dependency management, we seek to enhance the understanding of the state of dependency update practices and the structure of dependency networks.
More recently, data has become available from package repositories and GitHub that enables us to study the package ecosystems of different programming languages. Having access both to the packages published in a central repository and to the applications using them gives us an idea of how often dependencies are updated and of the state of the dependency ecosystem.
In this work, we take a novel network-based approach for studying dependency networks of JavaScript, Ruby, and Rust. We use data from package repositories and a subset of GitHub projects. We compose a network of projects based on dependency relations to understand how the dependency network evolves and how susceptible it is to a removal of a random project. We show that dependency networks of popular languages such as JavaScript and Ruby are growing and have at least one single package whose removal can affect more than 30% of projects in the ecosystem.
The goal of this work is to study the state of current dependency networks, to understand their characteristics, and to reason about their future evolution. We have formulated the following research questions to guide our research:
RQ1: What are the static characteristics of package dependency networks?
RQ2: How do package dependency networks evolve?
RQ3: How vulnerable are package dependency networks to a removal of a random project?
Answers to these questions can help to quantify the state of the ecosystems, give an overview of the trends in dependency management, and can inform the development of improved dependency management tools.
II. BACKGROUND AND RELATED WORK
In this section, we explain the terminology and give an overview of the related work.
A. Terminology
This work analyses dependencies among software projects. We distinguish between two types of software projects: packages and applications. We define packages as reusable code or sets of components that can be included in other applications by using dependency management tools. Packages are published in repositories and are available to everyone. Applications are projects that make use of packages but are not published as packages, and thus cannot be used in other projects as dependencies. Packages and applications can have multiple versions distinguished by version numbers.
A package can depend on another package. If package A depends on package B, we say that A has a dependency (A is a dependent of B) and B has a reverse dependency (B has a dependent). Applications can have dependencies, but since they are not published as reusable packages they cannot have reverse dependencies. A project has a direct dependency if a package on which the project depends, and which it needs to be built, is directly included in the project. A project can have transitive dependencies on packages that are not needed by the project itself but are needed for the direct dependencies included in the project to work. Transitive dependencies can be included through multiple levels of dependencies.
A dependency network is composed of packages, applications, and dependency relations between them. An ecosystem is the set of packages and applications involved in the dependency network.
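To make these definitions concrete, the following minimal Python sketch (our own illustration; the project names and the toy dependency relation are invented, not taken from the paper) builds a small dependency network and derives transitive and reverse dependencies from the direct ones:

# Toy dependency relation: project -> set of direct dependencies.
deps = {
    "App": {"A"},      # an application: has dependencies, no dependents
    "A": {"B"},
    "B": {"C"},
    "C": set(),
}

def transitive_dependencies(project):
    """All packages reachable from `project` through dependency edges."""
    seen, stack = set(), list(deps.get(project, ()))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(deps.get(d, ()))
    return seen

print(transitive_dependencies("App"))                # {'A', 'B', 'C'}
print({p for p, ds in deps.items() if "C" in ds})    # direct dependents of C: {'B'}

Here "App" depends on "B" and "C" only transitively, through its direct dependency "A".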
B. Related work
The related work deals with analyzing dependency networks, analyzing risks associated with dependency usage, and API stability in libraries.
Dependency networks. Network-based analysis of programming language dependency networks has emerged recently. A first large-scale analysis of the npm ecosystem was carried out by Wittern et al. [8]. Their analysis concludes that JavaScript is a thriving ecosystem because of frequent releases of new and existing packages. They use GitHub applications only to study version numbering practices and state that there is a prevalence of flexible (not exact) version number specifications. They conclude that usage of flexible version constraints should result in the immediate adoption of a new release.
Decan et al. [9] analyze topologies of npm, PyPI, and CRAN and find that there are differences across ecosystems, e.g., PyPI is less interconnected than npm. They state that analysis results are not generalizable from one ecosystem to another. Their follow-up work [10], focusing on dependency version specification usage, points out that current tools and versioning schemes can introduce resiliency issues into the ecosystem.
German et al. [11] study packages in the R ecosystem. They find that most packages do not have any dependencies, but popular ones are more likely to have them. They also find that growth of the ecosystem comes from user-submitted packages, and that it takes longer to build a community around user-submitted packages than around core contributed packages. Another analysis of the R ecosystem [12], which studies dependency resolution in R packages, finds that a lack of dependency constraints in package descriptions and backward-incompatible changes often break dependencies. As community-contributed packages are hosted on GitHub, there is no way to resolve dependencies among GitHub packages, and therefore a small number of GitHub packages cannot be automatically installed.
Bogart et al. [13] interview seven maintainers of R and npm packages to understand how dependencies are maintained. They find that developers are not aware of the stability of packages in the ecosystems and make changes on an ad-hoc basis. In follow-up work [14], they found that the npm, CRAN and Eclipse ecosystems differ substantially in their practices for resolving breaking API changes and in their expectations toward change.
Dependency management. A study of the dependency management process in Apache projects [15] found that while the number of projects in the ecosystem grows linearly, the dependencies among them grow exponentially. Bavota et al. [16] find that new releases often do not contain updates to their dependencies; dependencies are updated only if major new features or bug fixes are released for them. Kula et al. [17] measure the latency to adopt new versions among a sample of Java projects that use Maven. They conclude that over time the maintainers become more trusting and update faster, although no reason is known for this behavior. Cox et al. [2] measure dependency freshness in 75 different closed-source projects of 30 different vendors. Their findings indicate that projects with low dependency freshness are more than four times as likely to include a security vulnerability.
Besides programming language ecosystems, previous research studied the Debian package ecosystem, how to resolve strong dependencies in it, and how to improve the planning of dependency changes [18], [19], [20], [21].
Vulnerabilities. Hejderup [3] studies vulnerability spreading across npm packages. He uses information about known vulnerabilities, tracks how long it takes for projects to update from a vulnerable version, and shows that vulnerabilities can affect projects through dependencies. He also observes that some projects have discussions in their issue trackers about vulnerable dependencies that need updating. Through qualitative analysis, he finds that developers were not aware of the vulnerabilities, and that the risk of breaking functionality is what holds them back from blindly updating vulnerable dependencies.
Cadariu et al. [22] propose a tool to track known vulnerabilities in Java projects. They conduct a case study on private Dutch enterprise projects and find that 54 out of 75 projects use at least 1 (and up to 7) vulnerable dependency.
Synthesis of related work. The three research questions proposed in this paper have received attention in existing research. There are similarities with existing research, but none of it fully covers the scope and problem of this paper. Wittern et al. [8] and Decan et al. [9] look at the network topologies of npm, PyPI (Python) and CRAN (R). Compared to [8], our work considers the network analysis in more detail and includes applications in the network analysis step. Compared to [9], [10], we also focus on network evolution and outline a more accurate dependency network model. Hejderup [3] studies vulnerability spreading among npm projects. Our work analyses the whole ecosystem and includes evolution analysis to study whether such vulnerabilities will become more or less likely over time.
III. RESEARCH QUESTIONS
We have formulated three research questions to guide our research. The overall motivation is to analyze the structure and evolution of dependency networks to gain insight into current dependency usage and possible issues. Next, we explain the motivation behind each research question in more detail.
RQ1 (Structure). Currently, not much is known about the static properties and topologies of programming language package ecosystems. For example, we know to what extent dependencies are used in packages only [8], [9], but we do not know whether there are differences in dependency usage between published packages and applications. Modern package managers allow different conventions for specifying dependency version numbers, such as an exact version or a version range, but we do not know what the most popular way of specifying dependencies is. Answers to these questions enable us to understand the current state of the dependency ecosystems and provide the starting point for analyzing ecosystem evolution.
RQ2 (Evolution). Software projects can add new dependencies and update existing dependencies. Changes to dependencies in a new release of a single package will also be reflected in the overall dependency network. Studying the dependency network's evolution since its creation can explain its current state and also provide knowledge to reason and make predictions about its future evolution. The need for such analysis was outlined by respondents to a recent survey on software ecosystem challenges [23]; one respondent stated that 'if an ecosystem is not able to evolve quickly it is going to die' [23]. Similarly, our goal is to understand the current evolution state of the studied ecosystems and analyze whether they are growing or stabilizing.
RQ3 (Vulnerability). When selecting a package to use, several factors are important besides the functionality it provides. Developers ideally would like to be sure that the package quality is good, that it is maintained, and that it is trustworthy. As these properties are not explicitly visible, developers might end up using packages of varying quality. For example, if an attacker publishes packages with names very similar to the names of popular packages, developers making a typo could end up using them unwittingly [24]. The left-pad incident happened because the developer decided to remove the package. How vulnerable are ecosystems to such scenarios? We define vulnerability as the number of projects that are affected if we remove a package or a specific version of it. This scenario also helps to estimate what fraction of the dependency network is impacted if a package contains a bug. Such information could be incorporated in measuring a package's importance with regard to vulnerability in an ecosystem.
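As an illustration of this vulnerability measure, here is a minimal sketch (ours, not the authors' code) that counts affected projects as reverse reachability in a dependency graph using networkx; the toy packages are invented, echoing the left-pad incident:

import networkx as nx

# Edges point from a dependent to its dependency: X -> Y means "X depends on Y".
g = nx.DiGraph([("app1", "left-pad"),
                ("framework", "left-pad"),
                ("app2", "framework")])

def vulnerability(graph, package):
    """Number of projects affected if `package` is removed: every node
    with a dependency path leading to it."""
    return len(nx.ancestors(graph, package))

print(vulnerability(g, "left-pad"))   # 3: app1, framework and app2

Note that app2 is affected even though it never declared left-pad directly, which is exactly the transitive propagation the question targets.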
IV. METHOD
In this section, we describe the data collection method, the preprocessing steps, and our approach for modeling dependency networks using graphs.
A. Context
In this work, we study the package ecosystems of three programming languages: JavaScript, Ruby, and Rust. We chose these three languages as the majority of their packages and applications are hosted on GitHub. These languages have central repositories for hosting packages, namely npm, RubyGems, and Crates. Developers specify required packages in their project's dependency files (package.json, Gemfile, Cargo.toml) and packages are retrieved by the dependency manager (npm, Bundler, Cargo). The packages contain source code, and developers can use functionality from packages in their projects. In addition to packages, we study applications downloaded from GitHub. By adding applications, we can analyze package usage from the end-user viewpoint.
We chose to study JavaScript and Ruby, both dynamically typed languages that are popular choices for web application development. Rust, on the other hand, is a statically typed, multi-paradigm language primarily meant for systems programming. JavaScript and Ruby have been used since the 1990s, and their corresponding central package managers appeared in 2010 and 2004. Rust first appeared in 2010 and its central package manager in 2014. Our analysis of JavaScript revolves around the packages used in the node.js environment and managed through the npm tool, but also includes packages only needed for web development, such as front-end frameworks. JavaScript differs from the other languages in the study in that it supports multiple versions of a project in its dependency chains. For example, if package A depends on package B version 1.0 and package C version 2.0, and package B in turn depends on package C version 3.0, then npm downloads both versions of package C. Rust and Ruby do not allow such a scenario; a single version of package C is required. In practice, JavaScript developers have more freedom in including dependencies, but Rust and Ruby developers need to make sure their dependencies do not conflict.
B. Data collection
We used multiple sources to compose the dataset. For JavaScript and Ruby, we downloaded the full list of packages, release dates, dependencies, and other relevant meta-data from their central repositories, npm and RubyGems respectively. To get data from npm, we used its public API [25].
For RubyGems, we used a copy of their meta-data database available on-line [26].
Central repositories such as npm and RubyGems host only projects that are typically libraries, frameworks, command line applications or resource bundles for web development. We also include end-user applications from GitHub in our study to understand package usage in practice. We used the GHTorrent [27] database of March 2016 to select projects whose repository language, as identified by GitHub, was Rust, JavaScript or Ruby, that were not forks, and whose GitHub repository did not appear in the npm or RubyGems hosted project lists. After composing the initial list of projects, we made an HTTP request to every repository to check if it had a dependency file in the root folder of the latest revision. We only cloned repositories that had a dependency file present in the latest revision. For Rust, we cloned all projects listed in GHTorrent, but for JavaScript and Ruby, we only cloned those that had at least one fork or at least one star, to minimize the number of projects to collect. We acknowledge that we were not trying to collect all the projects from GitHub.
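The filtering step could look roughly like the following sketch (our reconstruction, not the authors' code); it assumes the repository's default branch is known and uses GitHub's raw-content endpoint, and the repository names in the usage comment are hypothetical:

import requests

DEP_FILES = {"JavaScript": "package.json", "Ruby": "Gemfile", "Rust": "Cargo.toml"}

def has_dependency_file(owner, repo, language, branch="master"):
    """True if the latest revision of the repository has the language's
    dependency file in its root folder (checked via one HTTP request)."""
    url = (f"https://raw.githubusercontent.com/"
           f"{owner}/{repo}/{branch}/{DEP_FILES[language]}")
    return requests.head(url, timeout=10).status_code == 200

# Hypothetical usage: only clone repositories that pass this check.
# if has_dependency_file("some-user", "some-app", "JavaScript"): clone(...)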
Rust has a central repository called Crates.io, but its meta-data is not available in a structured, machine-readable format. Therefore, for Rust we rely only on packages from GitHub, first selecting all Rust language projects from the GHTorrent database and then filtering out those that do not have a dependency file named Cargo.toml. The Rust data can be considered a sample of the whole Cargo package universe plus additional applications written in Rust.
Data collection took place during April and May 2016. We collected the package repository data after collecting applications from GitHub. We excluded all updates and changes after April 2016, to get a comparable time scale for all ecosystems.
C. Parsing GitHub projects
The projects obtained from GitHub have their dependency information recorded in dependency files. To extract dependencies, we consider all revisions of the dependency files to recover the dependency history. We used the git log command to extract all changes to the dependency file. For accurate modeling, we had to know when each version of a project was released. JavaScript's package.json and Rust's Cargo.toml provide explicit version information for the project. Ruby's dependency files (.gemspec and Gemfile) are written in Ruby code, and sometimes the version number is expressed as a variable or read in from a file. This makes reading the exact version numbers hard, as there is no general pattern; extracting them is not feasible, as it would require manual inspection or executing the code. In cases where we could not extract explicit version numbers, we used the time of the last modification of the dependency file. This only affects applications and does not impact the dependency network structure, as applications do not have dependents. The limitation of this approach is that there might be many more revisions than actual releases. If multiple revisions of a dependency file exist with the same version number, we use the latest revision for that version: developers might change the contents of the file during development with the new version number already entered, but after the release the contents will not change.
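A sketch of this extraction step, assuming a local clone and using standard git plumbing (git log to list commits touching the dependency file, git show to read it at each revision); this is our illustration of the procedure, not the authors' code:

import subprocess

def dependency_file_revisions(repo_path, dep_file="package.json"):
    """Yield (commit sha, unix timestamp, file content) for every commit
    that touched the dependency file, oldest first."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse", "--format=%H %ct",
         "--", dep_file],
        capture_output=True, text=True, check=True).stdout
    for line in log.splitlines():
        sha, ts = line.split()
        # A commit that deletes the file would make `git show` fail here;
        # a full implementation would skip such revisions.
        content = subprocess.run(
            ["git", "-C", repo_path, "show", f"{sha}:{dep_file}"],
            capture_output=True, text=True, check=True).stdout
        yield sha, int(ts), content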
D. Resolving dependencies
When parsing dependency files, we encountered situations where some of the dependencies were not available. A dependency might be unavailable when a revision of a dependency file committed to the repository contained typos or incorrect version constraints, so that the dependency does not exist. We only kept those dependencies that we could match in the central repositories for JavaScript and Ruby. For Rust, we kept all dependencies we could match among the collected projects, as we did not use official package repository data. If a dependency was specified as a reference to a git source code repository, we kept it only for Rust projects and only when the repository was in the list of collected projects.
Dependency version constraints can be specified in different ways, for example as an exact version, the latest version, or pattern-based matching using the semantic versioning notation. A version number is typically written in the format MAJOR.MINOR.PATCH. An increase in the MAJOR number denotes incompatible API changes, an increase in the MINOR number indicates an addition of backward-compatible changes, and an increase in the PATCH number indicates a bug fix. A version requirement specification has specific notations for describing valid versions. JavaScript and Rust support similar notation formats. To obtain any version or the latest version, the requirement is specified as the wildcard (*) or with an explicit condition (≥ 0). The tilde operator (~) matches the most recent MINOR version. For example, ~3.0.3 matches the highest version in the range [3.0.3, 3.1), but will not match 3.1. The caret (^) selects the most recent MAJOR version (the first number). For example, ^1.2.3 matches the highest version in the range [1.2.3, 2.0). Ruby does not support the tilde and the caret directly, but has something similar called the pessimistic operator, expressed by ~>. For example, ~> 3.0.3 is equivalent to ~3.0.3, i.e., it matches the highest version in the range [3.0.3, 3.1); ~> 1.2 matches the highest version in the range [1.2, 2.0).
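The following sketch (ours, and deliberately simplified: no pre-release tags, no comparison operators, and no special caret rules for 0.x versions) implements the constraint styles just described:

def parse(version):
    """'1.2.3' -> (1, 2, 3); tuples compare element-wise, which matches
    semantic-versioning order for plain numeric versions."""
    return tuple(int(x) for x in version.split("."))

def satisfies(version, req):
    v = parse(version)
    if req == "*":                       # any version
        return True
    if req.startswith("~"):              # covers both '~' and Ruby's '~>'
        base = parse(req.lstrip("~> "))
        # bump the second-to-last given digit:
        # ~3.0.3 -> [3.0.3, 3.1), ~> 1.2 -> [1.2, 2.0)
        upper = base[:-2] + (base[-2] + 1,) if len(base) > 1 else (base[0] + 1,)
        return base <= v < upper
    if req.startswith("^"):              # most recent MAJOR: ^1.2.3 -> [1.2.3, 2.0)
        base = parse(req[1:])
        return base <= v < (base[0] + 1,)
    return v == parse(req)               # exact version

assert satisfies("3.0.9", "~3.0.3") and not satisfies("3.1.0", "~3.0.3")
assert satisfies("1.9.9", "^1.2.3") and not satisfies("2.0.0", "^1.2.3")
assert satisfies("1.5.0", "~> 1.2") and not satisfies("2.0.0", "~> 1.2")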
For network construction, we must be able to represent the state of dependencies as they were at the time a package was released or an application was committed to the repository. With inexact version requirements, the actual version included in a project might differ every time the project is built, as a more up-to-date version of a dependency satisfying the requirements might have become available. We therefore resolved all dependency version requirements to the version that would have been used when the package was released or the GitHub commit was made: since we knew when each release was made, we could trace back which packages and versions were available at that time. For JavaScript projects, we used the package semver to find, for each dependency, the highest version candidate available. For Ruby projects, we used Gem library code to find the latest revision among all matching candidates. For Rust, we implemented our own dependency resolution.
Dependency version resolution did not take transitive dependencies and possible version conflicts into account. We are aware that, in practice, some other version might have been chosen. To resolve all dependencies fully, we would have needed to re-implement each language's dependency resolution algorithm, because dependency management tools do not support resolving dependencies as they would have been resolved at an arbitrary time in the past.
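Conceptually, the time-aware resolution described above amounts to filtering a package's release history by timestamp and picking the highest satisfying version. A minimal sketch, with an invented release history and a constraint predicate standing in for a full matcher:

from datetime import datetime

def parse(version):
    return tuple(int(x) for x in version.split("."))

# Invented release history for one package: version -> publication time.
releases = {"1.0.0": datetime(2015, 1, 10),
            "1.1.0": datetime(2015, 6, 2),
            "1.2.0": datetime(2016, 3, 20)}

def resolve_at(releases, when, satisfies):
    """Highest version already published at `when` that satisfies the
    requirement; `satisfies` is a predicate such as the matcher in the
    previous sketch. Transitive conflicts are ignored, as in the text."""
    candidates = [v for v, t in releases.items() if t <= when and satisfies(v)]
    return max(candidates, key=parse, default=None)

# '^1.0.0' resolved at the end of 2015 gives 1.1.0 (1.2.0 is not yet out).
print(resolve_at(releases, datetime(2015, 12, 31),
                 lambda v: (1, 0, 0) <= parse(v) < (2,)))   # 1.1.0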
E. Network construction
When modeling a system with a network, we need to define what nodes and edges represent. A straightforward approach to representing dependency relations is to model projects as nodes, with directed edges between them denoting dependencies between projects. The limitation of this solution is the lack of differentiation between project versions, and thus this modeling approach could give misleading information about the network. Figure 1 illustrates three different approaches for network modeling. Packages A and B depend on different versions of C, but only C version 0.4 depends on D. The aggregated network model would indicate that package B is dependent on package D, which is not true. The number of different packages dependent on D is two (A and C) in the actual network, but the aggregated version would give us three projects (C, A, and B). We also studied an approach where we annotate network edges with attributes: each edge carries a list of (source version, target version) pairs for which it is valid. When traversing the network, we have to make sure that the target version on the edge used to access a node has a corresponding source version for taking the next step. For evolution analysis, both the aggregated network and the aggregated network with attributes are unsuitable. If we want to answer questions such as what the number of transitive dependencies is, we have to consider all project versions. A new release of a project can update its dependencies, thus increasing the connectivity of the aggregated graph. For example, both aggregated versions of the graph (Figure 1) would indicate that project C has two dependencies, whereas at any point in time it only has one. Aggregation would therefore yield a more connected graph than the actual one, and dependency counts would not reflect the actual values.
We chose an approach where a node represents a specific project version. The edges denote dependency relations between specific versions (Figure 1, actual). With this modeling approach, we can have correct answers to queries such as how many different versions depend on a project and how many of these are unique projects.
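The following sketch illustrates the chosen model on the example of Figure 1, with (project, version) pairs as nodes; networkx is used purely for illustration.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge(("A", "1.0.0"), ("C", "0.4.0"))  # A depends on C 0.4
G.add_edge(("B", "2.1.0"), ("C", "0.3.0"))  # B depends on an older C
G.add_edge(("C", "0.4.0"), ("D", "1.0.0"))  # only C 0.4 depends on D

# Unlike in the aggregated model, B has no path to D here...
assert not nx.has_path(G, ("B", "2.1.0"), ("D", "1.0.0"))
# ...and D's dependents are exactly C 0.4 (directly) and A (transitively).
assert nx.ancestors(G, ("D", "1.0.0")) == {("C", "0.4.0"), ("A", "1.0.0")}
```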
In our analysis, we sometimes used the aggregated model with edge attributes for some calculations; whenever we did so, we mention it explicitly in the following. By analyzing the top 10 JavaScript projects by number of dependencies, we confirmed that the aggregated network without edge attributes overestimates dependency counts. Therefore, we decided to use edge attribute information when analyzing dependency chains.
Our choice of dependency network model makes it hard to compare our results with existing research, which uses the aggregated network without attributes [8], [9]. Only Hejderup [3] uses an approach similar to our actual network. The difference is that Hejderup also keeps meta-nodes in the network to represent projects; each meta-node has links to the corresponding project's version nodes.
We only use projects that have at least one dependency or one reverse dependency. If a project has no dependencies and is not a dependency of others, it does not appear in the network. As soon as a project adds a dependency, it appears in the network. Due to this filtering, single isolated nodes cannot exist in the network, while isolated clusters of connected nodes can.
We kept snapshots of the network for each month. A snapshot records how the ecosystem looked at the end of the corresponding month. Snapshots are cumulative, adding new projects and dependency links; neither projects nor links are ever removed. All analyses involving temporal evolution are likewise cumulative, i.e., if we calculate some property at a specific time, we calculate it over all projects published up to that point.
We manually removed three projects from our dataset that appeared to be outliers. Two JavaScript applications and one Ruby package had been engineered so that they would contain all possible packages in their dependency file.
V. Results
A. Description of dependency networks (RQ1)
In this subsection, we describe the data sets and basic properties of the dependency networks.
1) Static properties: Table I lists basic properties of the language ecosystems used in our study: the number of projects initially collected and the number of distinct releases in the network.
We initially collected 11,037 Rust, 339,453 JavaScript, and 184,919 Ruby projects. However, not all packages have dependencies or are used as a dependency, and we therefore exclude such projects from the network-based analysis. The exclusion was based on the latest snapshot and covered projects that never had any dependencies. The final dataset comprises 7,978, 246,670, and 147,449 projects for Rust, JavaScript, and Ruby, respectively.
Table II lists the number of dependencies and dependents (reverse dependencies) per release. Comparing languages, we see that Ruby projects have more direct dependencies on average (8.8) than JavaScript (5.5) and Rust (3.0). The differences in the number of direct dependents are smaller, i.e., 1.2, 1.3, and 1.6, respectively. However, we again see larger differences in transitive dependencies and transitive dependents (the average number of projects that depend on a project). JavaScript has the largest number of transitive dependencies and dependents, 54.6 and 15.5, respectively; Ruby has 34.1 and 6.4, and Rust 9.3 and 7.4. The number of transitive dependents for JavaScript is almost two times larger than for the other languages. Ruby has the highest average number of direct dependencies and Rust the highest number of direct dependents. Differences in the number of dependencies across ecosystems reveal that the internal structures of the dependency networks differ. JavaScript's large dependency count could possibly be attributed to tool support for including different versions of a single package among dependencies.
2) Direct and transitive dependents: The left-pad incident had a high impact not because the package was directly used in many projects, but because it was used indirectly, through transitive dependents. Figure 2 shows the relationship between the total number of dependents (direct and transitive) and direct dependents for all projects at the beginning of April 2016. For all ecosystems, there exist projects that have a small number of direct dependents (less than 100) and a large number of transitive dependents. This pattern is stronger in JavaScript (Figure 2b) and Ruby (Figure 2c) than for Rust. Ruby also exhibits a clear pattern of packages having an equal number of direct and total dependents, meaning that such a package is only involved in direct dependency relations, not transitive ones.
3) Weakly connected components: Even though we limited our analysis to projects that have at least one dependency relation, the Rust and JavaScript ecosystems are not fully connected. We calculated the number of weakly connected components in the dependency graphs for all languages. A weakly connected component in a directed graph is a subgraph in which each node is connected to every other node of the subgraph via an undirected path. We observed the emergence of a giant weakly connected component in each of the three analyzed ecosystems: for Rust, JavaScript, and Ruby, 96.14%, 98.2%, and 100% of projects, respectively, belong to the largest weakly connected component in the latest snapshot. Many real-world networks, such as social networks, exhibit the giant component property [28]. The remaining projects are part of components with a small number of projects. The existence of a giant component illustrates that existing packages, even when developed by different developers, can be used together in applications. This ability to be used together is what makes the ecosystem valuable.
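A sketch of this measurement on a version-level graph G as constructed above:

```python
import networkx as nx

def giant_component_fraction(G):
    """Fraction of nodes inside the largest weakly connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.weakly_connected_components(G), key=len)
    return len(largest) / G.number_of_nodes()
```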
4) Dependency updates and constraint notation practices: We define an explicit dependency version change as a version constraint for a dependency manually changed by a developer. The number of explicit changes is similar across ecosystems (Table III). The number of implicit changes denotes the number of times a dependency was resolved to a different version after a project release or dependency file commit, without the dependency requirement specification being modified. An implicit update happens when dependencies are specified with flexible constraints and newer versions matching the constraints are released. The number of implicit updates varies more across projects, with means of 2.17 for Ruby, 1.7 for Rust, and 1.44 for JavaScript. The mean number of implicit updates for published packages is smaller than for applications, 1.91 and 1.1 for Ruby and JavaScript, respectively. We also see that the maximum values for both explicit and implicit updates are larger for applications, which can be explained by higher development velocity, as these projects do not have dependents. For both types of projects, packages and applications, Ruby has higher update counts, which can be explained by its longer history. Another insight is that there are more implicit updates than explicit ones, indicating that dependencies are updated more often than developers would update them manually. In the following, we analyze more closely the popular ways of specifying dependency version requirements that lead to implicit updates.
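The classification can be sketched as follows, assuming the history of a single dependency is given as (constraint, resolved version) pairs in release order:

```python
def count_updates(history):
    """Count (explicit, implicit) changes over consecutive snapshots."""
    explicit = implicit = 0
    for (prev_req, prev_ver), (req, ver) in zip(history, history[1:]):
        if req != prev_req:
            explicit += 1    # the developer edited the constraint
        elif ver != prev_ver:
            implicit += 1    # same constraint resolved to a newer version
    return explicit, implicit
```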
Table IV lists the relative popularity of each requirement specification scheme in each ecosystem; note that we also distinguish here between published packages and applications. The different ways to specify versions are: any or latest version (any), exact version (exact), explicitly specified version ranges such as [2.0, 4) and one-sided ranges (range), the most recent PATCH version within a MINOR version (tilde), the most recent version within a MAJOR version (caret), and anything else, such as a manually specified git version (other).
The dominating approaches for Rust version specifications are exact and any versions, used in 32% and 47.8% of the cases, respectively. Besides these, all other possible specification schemes are also used by developers. Rust developers prefer to pin exact versions or take the latest version, even as the ecosystem is growing.
Among the most popular approaches for JavaScript are the caret, exact, and tilde notations; exact versions are used in only 22% of the cases across JavaScript projects. The difference between JavaScript GitHub projects and published packages is negligible, whereas for Ruby there are differences in the fractions of exact versions and range-based specifications. We looked further into range usage in packages and applications, and used Pearson's chi-squared test to confirm that Ruby applications and published packages have different preferences in specifying version requirements ($\chi^2 = 884540$, $df = 5$, p-value $< 2.2 \cdot 10^{-16}$). Ruby also has the lowest fraction of exact version requirements, which in turn can explain our observation that Ruby has the highest average number of implicit version updates (Table III). Finally, we used Pearson's chi-squared test on the full contingency table (Table IV with absolute values) to confirm that dependency management preferences differ across languages ($\chi^2 = 8025600$, $df = 20$, p-value $< 2.2 \cdot 10^{-16}$).
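Such a test can be sketched with SciPy as below; the counts are illustrative placeholders, not the study's actual contingency table.

```python
from scipy.stats import chi2_contingency

observed = [
    # any, caret, exact, other, range, tilde (absolute counts, made up)
    [4700, 49800, 22100, 500, 1900, 21000],   # JS applications
    [3700, 53600, 21700, 700, 2900, 17400],   # JS packages
]
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, dof, p)   # reject independence when p is small
```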
### Table IV
Relative popularity of version requirement specification schemes, by ecosystem and project type (fraction of dependency requirements).

| Ecosystem / Type | any (*) | caret (^) | exact | other | range | tilde (~) |
|---|---|---|---|---|---|---|
| JS Application | 0.047 | 0.498 | 0.221 | 0.005 | 0.019 | |
| JS Package | 0.037 | 0.536 | 0.217 | 0.007 | 0.029 | |
| Ruby Application | 0.583 | 0.157 | 0.135 | 0.000 | 0.083 | |
| Rust Package | 0.360 | 0.178 | 0.070 | 0.000 | 0.249 | |
### B. Dependency network evolution (RQ2)
In this subsection, we look in more detail at the evolution of the dependency networks.
#### 1) General growth
To understand how the ecosystems are growing, we first analyzed the number of projects and the dependency relations between them. Figure 3 shows the number of projects and unique relations in the dependency network, together with the number of releases and the number of dependency links between them. In almost all cases, the number of relations is growing faster than the number of nodes in the network; this is especially visible for JavaScript (JS N in Figure 3), where the difference between the number of projects and the number of dependencies is tenfold. The figure also indicates that the growth of Rust is still continuing. JavaScript has become larger than Ruby, both in terms of versions and dependencies between versions. The growth of Ruby is leveling off and becoming steady, whereas JavaScript is growing at an accelerating rate.
Figure 3 highlights the size differences between the actual network and the aggregated network with annotated edges. There is more than a tenfold difference between the numbers of nodes and edges in the two networks, and the difference is growing. The network structures therefore differ, which confirms our earlier discussion on the choice of network modeling approach.
As an ecosystem is composed of multiple projects, we next analyzed project-level changes in dependencies: what is the number of dependencies and dependents per project, and the full size of the transitive dependency chain? Figure 4a shows the number of dependencies and dependents for each project release. We see faster growth in the number of dependents for Ruby and JavaScript, while the number of dependencies has been growing at a slower rate. When comparing JavaScript and Ruby, the difference in the number of dependents is larger than the difference in the number of dependencies. One possible explanation is that the overall number of packages published in RubyGems is smaller than in npm, so there are fewer alternative packages, leading to a higher number of dependents.
Figure 4b shows the total number of dependencies for each project release. We observe fast growth for JavaScript projects and slower, steadier growth for Ruby and Rust projects. The average number of total dependencies for JavaScript was 34.3 in April 2015 but grew to 54.6 by April 2016, more than 60% yearly growth. Growth at such speed is unlikely to continue and will most likely slow down in the future.
When comparing JavaScript's and Ruby's numbers of direct dependencies (Figure 4a) and total transitive dependencies, we see that JavaScript projects have more transitive dependencies but fewer direct dependencies. This behavior indicates differences between the two ecosystems: Ruby has packages that are used mostly by applications and do not have dependencies themselves, whereas published JavaScript packages have dependencies of their own, making the ecosystem more connected and complex. One possible explanation for JavaScript's larger number of transitive dependencies is that npm allows multiple versions of the same project to be included through transitive dependencies.
Judging by these observations, it is hard to predict the number of transitive dependencies for Rust, as Ruby and JavaScript have shown different behavior. We argue that this may be because Rust is a very young ecosystem, still at the initial stages of its evolution.
2) Conflict evolution: The ecosystems keep growing, and the number of dependencies between projects grows with them. We next analyze the number of projects that have a single dependency included through multiple packages, which could lead to conflicts if the package version requirement specifications do not match.
We define a dependency overlap as a situation where a project appears as a dependency of a single project through multiple different paths. In practice, overlap leads to a conflict only if the version specifications do not match and it is not possible to find a best matching version. Dependency overlap illustrates how much dependencies are co-used in projects. It also illustrates the need for consistent version number specification by package maintainers: increasing dependency overlap should signal developers to review their dependency version requirements and use criteria as loose as possible, to allow dependency managers to find a suitable version.
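Overlap detection can be sketched on the version-level graph as follows (nodes are (project, version) pairs, as before):

```python
import networkx as nx

def has_dependency_overlap(G, release):
    """True if some project occurs twice in `release`'s transitive
    closure (two versions of it), or is reached by more than one
    edge inside the closure (i.e., via multiple paths)."""
    closure = nx.descendants(G, release)
    sub = G.subgraph(closure | {release})
    names = [project for (project, version) in closure]
    if len(names) != len(set(names)):
        return True
    return any(sub.in_degree(n) > 1 for n in closure)
```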
Figure 4c shows the fraction of projects that have dependency overlap in their dependency chains. The overall trends are similar to the overall growth of the ecosystems. More than two-thirds of Ruby projects and half of JavaScript projects have a single dependency appearing through multiple dependency chains. This result indicates package reuse, but also means that dependency version conflicts may become more likely. Increasing overlap can lead to issues that prevent packages from being used together due to unsatisfiable dependencies. Similar behavior has been observed for Debian software packages [29].
C. Fragility and vulnerability (RQ3)
Next, we analyze the dependency networks' tolerance to the removal of a single project or a single release. We define the vulnerability of a package as the fraction of network nodes impacted by the removal of a single package or a single package version. This approach enables us to analyze the impact of incidents such as the left-pad removal. While complete removal of a project removes all its versions from the dependency network, we can also study the removal of a single version; for example, a bug or security vulnerability might not affect all versions of a project, only specific ones.
We first calculate vulnerability on the network where each node denotes a specific project version. For each package version, we calculate the total number of dependents. This yields a list of total dependent counts over all packages, from which we take the maximum and the 90th percentile. We chose these statistics because the distribution of the number of dependents is skewed and the median is typically either 0 or 1, depending on the snapshot date.
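A sketch of this calculation (our helper, not the study's code), again on the version-level graph:

```python
import networkx as nx
import numpy as np

def vulnerability_scores(G):
    """Max and 90th percentile of transitive dependent counts,
    normalized by network size. Edges point dependent -> dependency,
    so a node's dependents are its ancestors."""
    n = G.number_of_nodes()
    counts = [len(nx.ancestors(G, node)) for node in G.nodes]
    return max(counts) / n, np.percentile(counts, 90) / n
```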
Figure 5a shows the maximum and 90th percentile vulnerability scores, normalized with respect to the full network size at each snapshot. The maximum fluctuates but has a positive trend, which means that there is a version in the network whose importance is growing. The 90th percentile shows a decreasing trend, which indicates that most other packages in the ecosystem are not central and are not included in the majority of dependency paths.
We also look at vulnerability on the aggregated graph. Figure 5b shows the same vulnerability calculation on the aggregated network, meaning we remove a project together with all its versions. It is evident that the maximum score is growing, i.e., the potential impact of a single project is increasing. This is all the more notable in the context of growing ecosystems, as the absolute numbers of affected projects are increasing as well. The 90th percentile vulnerability is again decreasing.
To find differences between packages and applications, we analyzed the mean vulnerability rate for the different types of JavaScript and Ruby projects. Figure 5c shows the average number of projects affected by a single package removal, illustrating the dependence on a single package. The average starts to decrease soon after the creation of each package ecosystem; in the later phase, a positive trend for JavaScript becomes visible. The average number of impacted applications remains larger than that of packages.
Table V lists the top five releases by unique dependent projects and unique dependent releases. For JavaScript, the list is composed of unique utility packages, such as isarray or inherits. For Ruby and Rust, multiple versions of a single package have made it into the top lists. The top five packages for Ruby are related to web servers (rack) or templates (erubis, tilt). The top Rust packages are an interface to system-level types and libraries (libc), a serialization library (rustc-serialize), and a logging library (log).
VI. DISCUSSION
In the following, we discuss our results and their practical implications, compare them with related work, and outline limitations of the research. The results differ to some extent across the studied languages, but some generalizing conclusions can be drawn.
A. Results
Network modeling. Previous research on package dependency networks has not reached an agreement on how to model dependencies using graphs. We propose an approach for modeling and constructing the network from dependency data. We believe that the chosen approach captures the actual network most accurately, enabling us to analyze dependencies at the version level. Although analysis of the aggregated network can yield similar conclusions [10], real dependencies are resolved using version information, and in future evolution stages the aggregated view might no longer be sufficient. We believe our contribution in network modeling is a step toward more unified software dependency network modeling.
Structure. Analysis of the dependency network structure reveals differences between ecosystems. Although this has been observed before [9] for dependencies, we have also shown differences in dependency version constraint specifications across ecosystems. These findings complement previous research [14], which found that different ecosystems approach API changes differently, which can impact dependency management. Our findings indicate that there are more implicit version updates than explicit ones, suggesting a need for tools that automatically monitor dependencies included through implicit updates and reveal possible breaking API changes.
Evolution. Our evolution analysis revealed that the number of transitive dependencies of JavaScript projects has grown by over 60% over the past year. A large number of dependencies can lead to issues such as extended build times due to fetching dependencies and increased software package size. Exponential growth has been observed inside the Apache ecosystem as well [15]. Recently, a newer dependency management tool compatible with npm was introduced [30]; one of its key new features is improved concurrent dependency downloading, and it tries to address the dependency abundance problem by providing faster downloads. Alternatively, a future solution could study how to reduce dependencies through better static code analysis. Our findings illustrate that by observing network evolution, such problems can be anticipated. Analyzing trends in the number of transitive dependencies over time could be useful for other package-based language ecosystems.
Vulnerability. Our vulnerability analysis, inspired by the left-pad incident [5], reveals that each studied ecosystem has packages whose removal could impact up to 30% of the other packages and applications. We showed that ecosystems have a few central packages that they depend on, which could enable bug spreading if those packages are not kept up to date. A high vulnerability score should also alert developers and maintainers to make sure all security bugs are fixed quickly, since a package with a high vulnerability score can be of interest to attackers as an opportunity to exploit the projects depending on it.
B. Design implications
Using our findings, one could design better package ecosystems and dependency management tooling. First, we propose making dependency relations explicitly visible, to understand the importance of packages in the ecosystem. An up-to-date view of which packages are the most popular and important in the ecosystem can help ensure they receive maintenance and support effort from the community.
We would also investigate alternatives to semantic versioning that allow stricter dependency specifications and version numbering for packages, to help minimize dependency conflicts. Overall, ecosystems and tooling should improve awareness of which dependencies are used, make dependency listings explicit, and help minimize irrelevant dependencies.
C. Limitations
A limitation of our dependency network construction approach is that it does not compose the exact representation that a build tool would have produced. When resolving wildcard version specifications to a matching version, we resolve each dependency of a given project separately. In practice, a build tool would resolve the whole transitive closure of dependencies, and if a package is included through multiple paths, it would calculate a matching version that satisfies all requirements. Recreating the exact historical dependencies of a project is complicated because dependency management tools do not support backdated retrieval.
VII. CONCLUSION AND FUTURE WORK
Our analysis of the dependency networks of JavaScript, Ruby, and Rust shows that all analyzed ecosystems are alive and growing, with JavaScript growing fastest. JavaScript also shows the largest number of transitive dependencies per project among the studied languages. All ecosystems have some popular packages used by the majority of projects. Yet, over time, the ecosystems have become less dependent on any single popular package, and the removal of a random project will not cause an ecosystem collapse.
The main contributions of this paper are: (i) a proposal of a network modeling approach specifically for dependency networks, (ii) insights into the structure and evolution of the JavaScript, Ruby, and Rust ecosystems, and (iii) a vulnerability analysis revealing that ecosystems are not as vulnerable to the removal of a single package as they used to be.
This work opens up several lines of future work. First, the dependency management process should also be studied qualitatively, to understand the issues developers are facing. Second, based on the vulnerability measures and network aspects, a measure quantifying dependency health should be developed, combining network information with data about testing efforts, code analysis, the number of maintainers, etc., into an aggregated dependency health measure. The broad goal of this future research is to support developers with tools for dependency management and maintenance, and to provide analytics for package maintainers about their packages and overall ecosystem trends. Our next goal is to turn the code used in this paper into a set of reusable tools for analyzing any package ecosystem based on GitHub and repository data.
Supplementary Information
Datasets used in this research are available at https://github.com/riivo/package-dependency-networks.
REFERENCES
|
{"Source-Url": "https://pure.tudelft.nl/portal/files/41902509/ecosystems_evolution.pdf", "len_cl100k_base": 10086, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 40219, "total-output-tokens": 12915, "length": "2e13", "weborganizer": {"__label__adult": 0.0003082752227783203, "__label__art_design": 0.00029015541076660156, "__label__crime_law": 0.0002579689025878906, "__label__education_jobs": 0.000690460205078125, "__label__entertainment": 5.793571472167969e-05, "__label__fashion_beauty": 0.0001170635223388672, "__label__finance_business": 0.00019609928131103516, "__label__food_dining": 0.00024271011352539065, "__label__games": 0.0005240440368652344, "__label__hardware": 0.0004014968872070313, "__label__health": 0.00031256675720214844, "__label__history": 0.0001959800720214844, "__label__home_hobbies": 5.7697296142578125e-05, "__label__industrial": 0.00019073486328125, "__label__literature": 0.00025081634521484375, "__label__politics": 0.00018608570098876953, "__label__religion": 0.0002856254577636719, "__label__science_tech": 0.00928497314453125, "__label__social_life": 9.566545486450197e-05, "__label__software": 0.0086669921875, "__label__software_dev": 0.9765625, "__label__sports_fitness": 0.0001939535140991211, "__label__transportation": 0.0002701282501220703, "__label__travel": 0.00014495849609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 58938, 0.03097]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 58938, 0.26011]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 58938, 0.9164]], "google_gemma-3-12b-it_contains_pii": [[0, 1227, false], [1227, 6997, null], [6997, 13072, null], [13072, 19035, null], [19035, 25374, null], [25374, 30792, null], [30792, 34884, null], [34884, 38706, null], [38706, 44957, null], [44957, 49336, null], [49336, 52675, null], [52675, 58938, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1227, true], [1227, 6997, null], [6997, 13072, null], [13072, 19035, null], [19035, 25374, null], [25374, 30792, null], [30792, 34884, null], [34884, 38706, null], [38706, 44957, null], [44957, 49336, null], [49336, 52675, null], [52675, 58938, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 58938, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 58938, null]], "pdf_page_numbers": [[0, 1227, 1], [1227, 6997, 2], [6997, 13072, 3], [13072, 19035, 4], [19035, 25374, 5], [25374, 30792, 6], [30792, 34884, 7], [34884, 38706, 8], [38706, 44957, 9], [44957, 49336, 10], [49336, 52675, 11], [52675, 58938, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 58938, 0.03141]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
0ec3f17d8f9ae4c801e3489655221627902eef1d
|
Risk Assessment in Distributed Authorization
Peter Chapin
Department of Computer Science
University of Vermont
pchapin@cs.uvm.edu
Christian Skalka
Department of Computer Science
University of Vermont
skalka@cs.uvm.edu
X. Sean Wang
Department of Computer Science
University of Vermont
xywang@cs.uvm.edu
ABSTRACT
Distributed authorization takes into account several elements, including certificates that may be provided by non-local actors. While most trust management systems treat all assertions as equally valid up to certificate authentication, realistic considerations may associate risk with some of these elements; some actors may be less trusted than others, some elements may be more computationally expensive to obtain, and so forth. Furthermore, practical online authorization may require certain levels of risk to be tolerated. In this paper, we introduce a trust management logic that incorporates formal risk assessment. This formalization allows risk levels to be associated with authorization elements, and promotes development of a distributed authorization algorithm allowing tolerable levels of risk to be precisely specified and rigorously enforced.
Categories and Subject Descriptors
C.2.0 [Computer-Communication Networks]: General—Security and protection
General Terms
Security, Languages, Theory
Keywords
Distributed Authorization, Trust Management Logic
1. INTRODUCTION
Trust management systems provide a formal means to specify and enforce distributed authorization policies. From its origins in BAN [7] and ABLP logic [1], research progress in this field now comprises systems such as SDSI/SPKI [16, 9] and RT [13]. The expressiveness and rigor of these systems has become increasingly important to security in modern distributed computing infrastructures, as web-based interactions continue to evolve in popularity and complexity.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
FMSE ’05, November 11, 2005, Fairfax, Virginia, USA.
Copyright 2005 ACM 1-59593-231-3/05/0011 ...$5.00.
Authorization in trust management usually takes into account several facts and assertions, including certificates provided by non-local, untrusted actors. Although cryptographic techniques, for example, provide certain measures of confidence in this setting, not all components of authorization can realistically be used with the same level of confidence; the Pretty Good Privacy (PGP) framework acknowledges this by including a notion of trustworthiness of certificates. Furthermore, efficient online authorization decisions often require a weakening of ideal security, since the latter may be prohibitively expensive. This weakening may involve the acceptance of assertions that would otherwise be verified, in case lowered confidence levels are more tolerable than the danger of intractability. Thus, many practical distributed authorization decisions include elements of risk associated with authorization components, where risk could be associated with trust, computational cost, or any other practical consideration making some facts more or less risky than others.
A rigorous assessment of authorization should accurately assess risk, but risk in trust management is usually an informal consideration. In this paper, we introduce a trust management logic, called RT\textsuperscript{R}, that formally incorporates risk assessment. The system is a variant of RT [13], and includes an abstract definition of risk, a means to associate risk with individual assertions, and a semantics that assesses the risk of an authorization by combining the risks of the assertions used in the decision. This formalization promotes the development of a distributed authorization algorithm that allows tolerable levels of risk to be precisely specified and rigorously enforced.
1.1 Paper Outline
The remainder of the paper is organized as follows. In Sect. 2, an overview of the RT system is given as background. In Sect. 3, we define the syntax and set-theoretic semantics of RT\textsuperscript{R}, an authorization logic with risk assessment. In Sect. 4, we give a graph-theoretic interpretation of RT\textsuperscript{R} that is equivalent to the set-theoretic semantics, and show that so-called credential graphs can be automatically reconstructed by a distributed chain discovery algorithm, as an implementation of distributed authorization. In Sect. 5, we discuss some interesting applications motivating the development of RT\textsuperscript{R}, and we conclude with a summary of the paper and remarks on related work in Sect. 6.
2. OVERVIEW OF RT
Rather than defining a new trust management logic for a formalization of risk, we take advantage of the existing RT system [13]. This system combines the strengths of role-based access control with an expressive trust management logic, and enjoys a variety of existing implementation techniques [15]. We believe these features make RT one of the most advanced trust management systems, and an appealing setting for the development of formal risk assessment. The RT role-based trust management system is actually a collection of trust management logics, all of which are variations on a basic logic called RT0 [13]. Variations include separation of duties and delegation. In this same spirit, we propose a variation on RT0 to incorporate a formalization of risk assessment, so we briefly review RT0 here to provide necessary background.
In RT0, individual actors, or principals, are called Entities and are identified by public keys. We let \( A, B, C, D, E \) range over entities. Each entity \( A \) can create an arbitrary number of Roles in a namespace local to the entity, denoted \( A.r \). RoleExpressions are either entities or roles, or are constructed from other role expressions by linking and intersection, as described below. To define a role, an entity issues credentials that specify the role's membership. Some of these credentials may be part of private policy; others may be signed by the issuer and made publicly available. The overall membership of a role is taken as the union of the memberships specified by all the defining credentials.
RT0 provides four credential forms:
1. \( A.r \rightarrow E \)
- This form asserts that entity \( E \) is a member of role \( A.r \).
2. \( A.r \rightarrow B.s \)
- This form asserts that all members of role \( B.s \) are members of role \( A.r \). Credentials of this form can be used to delegate control over the membership of a role to another entity.
3. \( A.r \rightarrow B.s.t \)
- This form asserts that for each member \( E \) of \( B.s \), all members of role \( E.t \) are members of role \( A.r \). Credentials of this form can be used to delegate control over the membership of a role to all entities that have the attribute represented by \( B.s \). The expression \( B.s.t \) is called a linked role.
4. \( A.r \rightarrow f_1 \cap \cdots \cap f_n \)
- This form asserts that each entity that is a member of all role expression forms \( f_1, \ldots, f_n \) is also a member of role \( A.r \). The expression \( f_1 \cap \cdots \cap f_n \) is called an intersection role.
Authorization is then cast as a role membership decision: an access target is represented as some role expression \( f \), and authorization for that target for some entity \( A \) is equivalent to determining whether \( A \) is a member of \( f \). In such a decision, we call \( f \) the governing role. Authorization always assumes some given finite set of credentials, denoted \( C \). We use \( Entities(C) \) to represent the entities used in a particular set of credentials \( C \), and similarly \( RoleNames(C), Roles(C) \), etc.
2.1 Example
Suppose a hotel \( H \) offers a room discount to certain preferred customers, who are members of \( H.preferred \). The policy of \( H \) is to grant a discount to all of its preferred customers in \( H.preferred \) as well as to members of certain organizations. \( H \) defines a role \( H.orgs \) that contains the public keys of these organizations. Into that role \( H \) places, for example, the key of the AAA, the American Auto Association. These credentials are summarized as follows:
\[ H.discount \rightarrow H.preferred \]
\[ H.discount \rightarrow H.orgs.members \quad H.orgs \rightarrow AAA \]
Now imagine that at a later time a special marketing plan is created to encourage travellers to stay at \( H \). A decision is made that all members of the AAA are automatically preferred customers and thus the credential \( H.preferred \rightarrow AAA.members \) is added to the policy.
Finally suppose that Mary is a member of the AAA. She has a credential, \( AAA.members \rightarrow M \), attesting to that fact. By presenting this credential to \( H \)’s web service Mary can prove in two distinct ways that she is authorized to receive the discount. On one hand she is a member of an organization in \( H.orgs \). On the other hand she is, indirectly, a preferred customer of \( H \). Certain practical considerations may motivate \( H \)’s decision about which “proof” to use. As we will see in Sect. 4, specified risk thresholds in RT\(^R\) can steer authorization in the right direction.
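To illustrate how these credential forms interact, the following sketch (ours; real RT0 engines are considerably more sophisticated) computes the role memberships of this example by naive fixpoint iteration:

```python
# Credential encodings:
#   ("member", role, entity)        -- A.r -> E
#   ("include", role, role2)        -- A.r -> B.s
#   ("linked", role, role2, name)   -- A.r -> B.s.t
creds = [
    ("include", "H.discount", "H.preferred"),
    ("linked",  "H.discount", "H.orgs", "members"),
    ("member",  "H.orgs", "AAA"),
    ("include", "H.preferred", "AAA.members"),
    ("member",  "AAA.members", "Mary"),
]

members = {}                                  # role -> set of entities
def mem(role):
    return members.setdefault(role, set())

changed = True
while changed:                                # iterate to a fixpoint
    changed = False
    for c in creds:
        if c[0] == "member":
            new = {c[2]}
        elif c[0] == "include":
            new = set(mem(c[2]))
        else:                                 # linked role B.s.t
            new = set()
            for e in mem(c[2]):
                new |= mem(f"{e}.{c[3]}")
        if not new <= mem(c[1]):
            mem(c[1]).update(new)
            changed = True

print(mem("H.discount"))                      # {'Mary'}, via both proofs
```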
3. THE SYSTEM RT\(^R\)
The system RT\(^R\) is RT0 extended with a formal definition of risk assessments. In this section, we define the syntax and semantics of RT\(^R\) and give some examples of risk-assessed authorization decisions. As for RT in [15], we define a set-theoretic semantics for RT\(^R\), since this allows an easy correspondence with the graph-theoretic characterization of RT\(^R\) used for distributed chain discovery in the next section. While a constraint datalog semantics for RT\(^R\), similar to the datalog semantics of RT in [12], is an interesting possibility, it is beyond the scope of this paper.
3.1 Syntax and Semantics
The system RT\(^R\) is defined as a framework, parameterized by a risk ordering, which is required to be a complete lattice \((\mathcal{K}, \preceq)\). We let \( \kappa \) and \( K \) range over elements and subsets of \( \mathcal{K} \), respectively, and let \( \top \) and \( \bot \) denote its top and bottom elements. Any instantiation of RT\(^R\) is expected to supply a risk ordering with \( \preceq \) decidable, and must also supply an associative, commutative, monotonic risk aggregation operation \( \oplus \). The relation \( \preceq \) allows risks to be compared, and "greater" and "lesser" risks assessed, while \( \oplus \) allows risks to be combined when authorization decisions involve multiple risks. Examples of particular risk orderings are given in Sect. 3.2 and Sect. 5, below.
The basis of risk assessment is the association of risk with individual credentials, since credentials are the fundamental assertions used in authorization decisions. Thus, credentials in RT\(^R\) are of the following form:
\[ A.r \xrightarrow{\;\kappa\;} f \]
where \( \kappa \) is the risk associated with the credential. We leave unspecified the precise mechanism of risk association, though
\[
\begin{align*}
\text{bounds}[rmem](A.r) &= \bigcup_{A.r \xrightarrow{\kappa} e \,\in\, \mathcal{C}} \big(\text{expr}[rmem](e) \oplus \kappa\big) \\
\text{expr}[rmem](B) &= \{(B, \bot)\} \\
\text{expr}[rmem](A.r) &= rmem(A.r) \\
\text{expr}[rmem](A.r_1.r_2) &= \bigcup_{(B, \kappa) \in rmem(A.r_1)} \big(rmem(B.r_2) \oplus \kappa\big) \\
\text{expr}[rmem](f_1 \cap \cdots \cap f_n) &= \bigoplus_{1 \leq i \leq n} \text{expr}[rmem](f_i)
\end{align*}
\]
in many cases it is likely that the authorizing agent will automatically assign risk to credentials. In essence, the aggregation of risks associated with credentials used in some authorization decision constitutes the risk of that decision.
Formally, the semantics of RT\(^R\) associates risks \( \kappa \) with the memberships of entities \( B \) in roles \( A.r \). Thus, the meaning of a role \( A.r \) is a finite set of pairs of the form \((B, \kappa)\), called a RiskAssessment; we let \( R \) range over such sets. For any \( \mathcal{A} \subseteq \text{Entities} \), \(\text{RiskAssessment}(\mathcal{A})\) denotes the set of risk assessments \( R \) such that \((A, \kappa) \in R\) implies \( A \in \mathcal{A} \). Note that any \( R \) may associate more than one risk with any entity, i.e., there may exist \((A, \kappa_1), (A, \kappa_2) \in R\) such that \(\kappa_1 \neq \kappa_2\). This reflects the possibility of more than one path to role membership, each associated with incomparable risk. Taking the glb of incomparable risks in risk assessments would be unsound, since the glb would assess a lesser risk of membership than is in fact obtainable through any path.
However, if a risk assessment associates two distinct but comparable risks with a given role membership, the lesser of the two can be taken as representative; in general, risk assessments can be taken as a set of lower bound constraints on risk in authorization. Thus, we define equivalence on risk assessments as follows:
\[ R \cup \{(A, \kappa_1), (A, \kappa_2)\} = R \cup \{(A, \kappa_1)\} \quad \text{where } \kappa_1 \preceq \kappa_2 \]
We call canonical those risk assessments \( R \) for which there exist no distinct \((A, \kappa_1), (A, \kappa_2) \in R\) with \(\kappa_1 \preceq \kappa_2\), and observe that any equivalence class of risk assessments has a unique canonical form. Furthermore, the canonical representation of any assessment \( R \), denoted \(\hat{R}\), is computable, since assessments are finite and \(\preceq\) is decidable. We extend the ordering \(\preceq\) to risk assessments as follows:
\[ R_1 \preceq R_2 \iff \forall (A, \kappa_1) \in R_1 .\; \exists (A, \kappa_2) \in R_2 .\; \kappa_1 \preceq \kappa_2 \]
The relation is clearly decidable. We also observe that it is a partial order:
**Corollary 3.1.** The relation \(\preceq\) on risk assessments is a partial order.
Hereafter we restrict our consideration to canonical risk assessments, without loss of generality. Given any \( \mathcal{A} \), the relation \(\preceq\) induces a complete lattice on \(\text{RiskAssessment}(\mathcal{A})\):
**Lemma 1.** For all finite \( \mathcal{A} \subseteq \text{Entities} \), the poset \( (\text{RiskAssessment}(\mathcal{A}), \preceq) \) is a complete lattice.
**Proof.** Given \( \mathcal{R} \subseteq \text{RiskAssessment}(\mathcal{A}) \), for each \( A \in \mathcal{A} \) let \( K_A = \{\kappa \mid \exists R \in \mathcal{R}.\, (A, \kappa) \in R\} \), and let \( \kappa_A \) be the lub of \( K_A \), which must exist since we require risk orderings to be complete lattices. Then \( \{(A, \kappa_A) \mid A \in \mathcal{A},\, K_A \neq \emptyset\} \) is an element of \( \text{RiskAssessment}(\mathcal{A}) \) and is a lub of \( \mathcal{R} \). The existence of a glb follows dually. \(\square\)
A notion of aggregation of risk assessments is useful to define:
\[ R \oplus \kappa \triangleq \{(A, \kappa' \oplus \kappa) \mid (A, \kappa') \in R\} \]
\[ R_1 \oplus R_2 \triangleq \{(A, \kappa_1 \oplus \kappa_2) \mid (A, \kappa_1) \in R_1,\; (A, \kappa_2) \in R_2\} \]
We assert monotonicity of this operation:
**Corollary 3.2.** The operation \(\oplus\) on risk assessments is monotonic.
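These operations can be sketched directly over sets of (entity, risk) pairs, with the order leq and aggregation plus supplied by the risk-ordering instantiation (all function names here are ours):

```python
def canonical(R, leq):
    """Keep only pairs not dominated by a strictly lesser risk
    for the same entity (the canonical form described above)."""
    return {(a, k) for (a, k) in R
            if not any(a2 == a and k2 != k and leq(k2, k)
                       for (a2, k2) in R)}

def shift(R, k, plus):
    """R (+) k: aggregate an extra risk k onto every entry."""
    return {(a, plus(k2, k)) for (a, k2) in R}

def combine(R1, R2, plus):
    """R1 (+) R2: pairwise aggregation over entities in both."""
    return {(a1, plus(k1, k2))
            for (a1, k1) in R1 for (a2, k2) in R2 if a1 == a2}
```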
As we will see below, solutions to sets of credentials are functions of type:
\[ \text{Role} \rightarrow \text{RiskAssessment} \]
Letting \(f\) and \(g\) be functions of this type, we define:
\[ f \preceq g \iff f(A.r) \preceq g(A.r) \text{ for all roles } A.r \]
Now, we can define the semantics of \(\text{RT}^R\), by extending the semantics of \(\text{RT}_0\) in [15] to assess risk:
**Definition 1 (Semantics of RT^R).** Given a set \(C\) of RT^R credentials, the semantics \( \mathcal{S}_C \) of \(C\) is a function mapping role expressions to risk assessments. In particular, \( \mathcal{S}_C \) is obtained as the least fixpoint of a sequence of functions \( rmem_i : \text{Roles}(C) \rightarrow \text{RiskAssessment}(\text{Entities}(C)) \). The sequence is defined inductively by taking \( rmem_0(A.r) = \emptyset \) for every role \( A.r \), and letting:
\[ rmem_{i+1}(A.r) = \text{bounds}[rmem_i](A.r) \]
for every \( A.r \). The function relating consecutive values in \( \{rmem_i\}_{i \in \mathbb{N}} \) is monotonic, since \( \cup \) and \( \oplus \) are monotonic, the latter by definition and Corollary 3.2. Further, the pointwise ordering of the functions \( rmem_i \) under \( \preceq \) forms a complete lattice, by Lemma 1. Therefore, a least fixpoint of the sequence \( \{rmem_i\}_{i \in \mathbb{N}} \) exists. Let \( rmem \) be this fixpoint, and define:
\[
\begin{align*}
S_C(A.r) &= rmem(A.r) & A.r \in \text{Roles}(C) \\
S_C(A.r) &= \emptyset & A.r \notin \text{Roles}(C)
\end{align*}
\]
It is easily shown that \( S_C(A.r) \) so defined is a least solution to \( C \) as specified in Definition 1.
### 3.2 Examples
We now give some examples of risk assessments for authorizations in two different risk models, illustrating applications of the system. Other more complex examples are discussed in Sect. 5.
#### 3.2.1 Bound-of-Risks
In [8], an information flow security model is presented where all static data is assigned to a security class. Security classifications of variables are then assigned based on the combination of security classes of data flowing into those variables, as determined by an abstract program interpretation. Security classes are identified by elements in a complete lattice, where “class-combination” is defined as the lub of combined classes.
We propose that an adaptation of this model is useful in the context of authorization risk assessment. We do not propose an abstract interpretation of authorization incorporating some form of "may-analysis", but rather a purely dynamic authorization and risk assessment model; in this sense we differ from the model proposed in [8]. Nevertheless, we may adopt the use of least upper bounds as a "class-combination" mechanism (in our terminology, "risk aggregation") that assesses the risk of any authorization decision as the least upper bound of the risks associated with all credentials used in the decision.
Consider a risk ordering where three classifications \( K = \{ \text{low}, \text{medium}, \text{high} \} \) are defined, and the following relations are imposed:
\[
\text{low} \preceq \text{medium} \preceq \text{high}
\]
and \( \oplus \) is taken to be the lub operator. Imagine also that an online vendor called \( \text{Store} \) maintains a purchasing policy whereby representatives of the \( \text{Acme} \) corporation have \( \text{buyer} \) power only if they are both employees and official purchasers. Since this policy is maintained locally, it is associated with a \( \text{low} \) risk of usage, hence \( \text{Store} \) could specify:
\[
\text{Store.buyer} \xrightarrow{\text{low}} \text{Acme.purchaser} \cap \text{Acme.employee}
\]
Imagine further that \( \text{Ed} \) attempts to make a purchase from \( \text{Store} \), providing certificates claiming \( \text{employee} \) and \( \text{purchaser} \) status. However, if we assume that these certificates can possibly be faked, or that role membership within the \( \text{Acme} \) corporation has a volatile status, higher risk can be assigned to these certificates:
\[
\text{Acme.employee} \xrightarrow{\text{medium}} \text{Ed} \qquad \text{Acme.purchaser} \xrightarrow{\text{high}} \text{Ed}
\]
We also assume that a less risky path of establishing \( \text{Ed} \)'s membership in the \( \text{Acme.\,purchaser} \) role is through a \( \text{manager} \) certificate obtained directly from \( \text{Personnel} \), and via \( \text{Acme} \)'s own policy specifying \( \text{purchaser} \) power for all \( \text{managers} \):
\[
\text{Acme.purchaser} \xrightarrow{\text{low}} \text{Personnel.manager} \qquad \text{Personnel.manager} \xrightarrow{\text{low}} \text{Ed}
\]
Although using \( \text{Ed} \)'s certificate asserting his membership in the \( \text{Acme.\,purchaser} \) role will incur a \( \text{high} \) risk, because of the less risky path to this relation, the risk assessment of this set of credentials will find that establishing \( \text{Ed} \)'s membership in the \( \text{Store.\,buyer} \) role requires a lower bound of \( \text{medium} \) risk. The least solution for all given roles is as follows:
\[
\begin{align*}
\text{Store.\,buyer} & : \{ (\text{Ed}, \text{medium}) \} \\
\text{Acme.\,employee} & : \{ (\text{Ed}, \text{medium}) \} \\
\text{Acme.\,purchaser} & : \{ (\text{Ed}, \text{low}) \} \\
\text{Personnel.\,manager} & : \{ (\text{Ed}, \text{low}) \}
\end{align*}
\]
Of course, in certain cases it may be preferable to use the certificate \( \text{Ed} \) provides, instead of going through \( \text{Personnel} \)— if wait times for distributed communication with that node are prohibitively long, for example. However, in this case it should be specified that a \( \text{high} \) level of risk will be tolerated in the credential chain. In Sect. 4 and Sect. 5, we define a technique for credential chain discovery that implements this idea.
Returning to the example, for the purposes of illustration we imagine that the risk ordering is extended with an element \( \text{moderate} \), that is incomparable with \( \text{medium} \), inducing the lattice:

We also imagine that \( \text{Store} \) has cached an old certificate, establishing \( \text{Ed} \)'s membership in the \( \text{Acme.\,employee} \) role with \( \text{moderate} \) risk:
\[
\text{Acme.\,employee} \rightarrow \text{Ed}
\]
In this case, since \( \text{moderate} \) and \( \text{medium} \) are incomparable, the risk assessment will reflect that \( \text{Ed} \)'s membership in the \( \text{Store.\,buyer} \) and \( \text{Acme.\,employee} \) roles can be established via two paths with incomparable risk:
\[
\begin{align*}
\text{Store.\,buyer} & : \{ (\text{Ed}, \text{medium}), (\text{Ed}, \text{moderate}) \} \\
\text{Acme.\,employee} & : \{ (\text{Ed}, \text{medium}), (\text{Ed}, \text{moderate}) \}
\end{align*}
\]
Note that precision and safety in the assessment of minimal risk would be lost by taking the glb of the incomparable risk assessments; retaining both entries preserves them.
#### 3.2.2 Sum-of-Risks
An alternative to the bound-of-risks model is a sum-of-risks model, where credentials are assigned numeric risk values and the total risk of any authorization decision is the sum of the risks associated with all credentials used in the decision. Thus, we take the risk ordering in this model to be the lattice of natural numbers up to \( \omega \) under the usual ordering \( \leq \), and we take \( \oplus \) to be addition. This model is useful when risk is considered additive, or when the number of credentials used in an authorization decision is itself an element of risk: the more credentials, the riskier.
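The two models differ only in the carrier and the choice of \( \oplus \), as the following sketch suggests (using a totally ordered fragment of the bound-of-risks lattice):

```python
LEVELS = {"low": 0, "medium": 1, "high": 2}   # totally ordered fragment

def bound_leq(k1, k2):  return LEVELS[k1] <= LEVELS[k2]
def bound_plus(k1, k2): return k1 if LEVELS[k1] >= LEVELS[k2] else k2  # lub

def sum_leq(k1, k2):  return k1 <= k2          # naturals, usual order
def sum_plus(k1, k2): return k1 + k2           # aggregation is addition

# The same two-step chain aggregates differently in the two models:
print(bound_plus("medium", "low"))   # 'medium' (bound-of-risks)
print(sum_plus(3, 2))                # 5        (sum-of-risks)
```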
4. RT$^R$ CREDENTIAL CHAIN DISCOVERY
In this section we discuss an algorithm for authorization with risk in a distributed environment. Following RT credential chain discovery [15], our technique is to characterize credential sets graph-theoretically, except that our credential graphs are risk-weighted multigraphs, to accommodate risk assessments. Credential graphs are shown to be a full abstraction of solutions as in Definition 1, and the RT$^R$ discovery algorithm is shown to correctly reconstruct credential graphs.
In addition to theoretical correctness, our chain discovery algorithm has two important practical features:
1. The algorithm need not verify a role membership in a risk-optimal fashion, but rather is parameterized by a risk threshold, that is, a maximum tolerable risk for role membership verification.
2. The discovery procedure is directed, in the sense that it is aborted along search paths whose risk overruns the maximum threshold.
The first feature allows end-users to modulate tolerable levels of risk in authorization. The second feature reaps any efficiency benefits intended by associating risks with credentials, as high risk may be associated with high expense, e.g. if risks are wait times.
4.1 Credential Graphs
We begin by defining an interpretation of credential sets $C$ as a credential graph. More precisely, sets of credentials are interpreted as a weighted multigraph, where nodes are role expressions, edges are credentials, and weights are risks.
Authorization is implemented by determining reachability, via risk weighted paths, where the aggregation of edge risk along the path is the risk of authorization. Reachability is predicated on simple paths, since traversing cycles can only increase risk, and any path with a cycle would otherwise generate an infinite number of risk weighted paths. Allowing the latter would preclude a constructive definition of credential graphs, since chains are distinguished by risk and cycle traversals increase risk monotonically.
**Definition 2** (Risk weighted credential chains). Let \( \mathcal{G} = (\mathcal{N}, \mathcal{E}) \) be a weighted multigraph with nodes \( f \in \mathcal{N} \) and edges \( f_1 \xrightarrow{\kappa} f_2 \in \mathcal{E} \) weighted by elements \( \kappa \) of a given risk ordering. The pair
$$((f_1, \ldots, f_n),\; \kappa_1 \oplus \cdots \oplus \kappa_{n-1})$$
is a risk weighted path in \( \mathcal{G} \) iff for all \( i \in [1..n-1] \) there exists \( f_i \xrightarrow{\kappa_i} f_{i+1} \in \mathcal{E} \). A weighted path \( ((f_1, \ldots, f_n), \kappa) \) is simple iff no node is repeated in \( (f_1, \ldots, f_n) \). We write \( f \overset{\kappa}{\leadsto} f' \), pronounced “there exists a credential chain from \( f \) to \( f' \) with risk \( \kappa \)”, iff \( ((f, \ldots, f'), \kappa) \) is a simple risk weighted path. We write \( f \overset{\kappa}{\leadsto} f' \in \mathcal{G} \) iff \( f \overset{\kappa}{\leadsto} f' \) holds given \( \mathcal{G} \).
The definition of credential graphs is founded on the definition of risk weighted chains, since edges derived from linked and intersection credentials are supported by them.
**Definition 3** (Credential graph). Given \( \mathcal{C} \), its credential graph is a weighted multigraph \( \mathcal{G}_\mathcal{C} = (\mathcal{N}_\mathcal{C}, \mathcal{E}_\mathcal{C}) \), where:
$$\mathcal{N}_\mathcal{C} = \bigcup_{A.r \xrightarrow{\kappa} e \,\in\, \mathcal{C}} \{A.r, e\}$$
and \( \mathcal{E}_\mathcal{C} \) is the least set of risk-weighted edges satisfying the following closure properties:
1. If \( A.r \xrightarrow{\kappa} e \in \mathcal{C} \), then \( e \xrightarrow{\kappa} A.r \in \mathcal{E}_\mathcal{C} \).
2. If \( B.r_2, A.r_1.r_2 \in \mathcal{N}_\mathcal{C} \) and \( B \overset{\kappa}{\leadsto} A.r_1 \), then \( B.r_2 \xrightarrow{\kappa} A.r_1.r_2 \in \mathcal{E}_\mathcal{C} \).
3. If \( D, f_1 \cap \cdots \cap f_n \in \mathcal{N}_\mathcal{C} \) and for each \( i \in [1..n] \) there exists \( D \overset{\kappa_i}{\leadsto} f_i \), then \( D \xrightarrow{\kappa} f_1 \cap \cdots \cap f_n \in \mathcal{E}_\mathcal{C} \), where \( \kappa = \kappa_1 \oplus \cdots \oplus \kappa_n \).
The definition of credential graphs can be made constructive by iterating the closure rules over an initial edge set \( \mathcal{E}_0 \):
$$\mathcal{E}_0 = \{e \xrightarrow{\kappa} A.r \mid A.r \xrightarrow{\kappa} e \in \mathcal{C}\}$$
In rules (2) and (3), the paths on which membership in \( \mathcal{E}_\mathcal{C} \) is predicated are called support paths, and the added edges are called derived. On each iteration, a new weighted edge is added according to closure rule (2) or (3). Since \( \mathcal{C} \) is finite and support paths must be simple, the process reaches a fixpoint in a finite number of iterations; this fixpoint is \( \mathcal{E}_\mathcal{C} \).
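The construction can be sketched as follows, restricted to closure rule (2) for brevity; encoding roles as dotted strings is our simplification, and the closure is obtained by re-applying the rule until no new edges appear.

```python
# Edges are (src, dst, risk) triples; `plus` is the aggregation operator.
def risk_paths(edges, src, dst, plus, visited=None, risk=None):
    """Yield the aggregated risk of every simple chain src ~~> dst."""
    visited = visited or {src}
    if src == dst and risk is not None:
        yield risk
        return
    for (s, d, k) in edges:
        if s == src and d not in visited:
            agg = k if risk is None else plus(risk, k)
            yield from risk_paths(edges, d, dst, plus, visited | {d}, agg)

def apply_linked_rule(nodes, edges, plus):
    """Rule (2): for nodes B.r2 and A.r1.r2, a support chain
    B ~~k~~> A.r1 yields a derived edge B.r2 --k--> A.r1.r2."""
    new = set(edges)
    for linked in (n for n in nodes if n.count(".") == 2):   # "A.r1.r2"
        a_r1, r2 = linked.rsplit(".", 1)
        for b_r2 in (n for n in nodes
                     if n.count(".") == 1 and n.endswith("." + r2)):
            b = b_r2.rsplit(".", 1)[0]
            for k in risk_paths(edges, b, a_r1, plus):
                new.add((b_r2, linked, k))
    return new
```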
We observe that the characterization of credential sets $\mathcal{C}$ is sound and complete with respect to the set theoretic semantics given in the previous section. These results will form a bridge with the semantics of RT$^R$ for establishing correctness of credential chain discovery. The statement of soundness reflects the fact that while risk assessments of credential sets express minimum risk bounds of role membership, the credential graph does not preclude reachability via paths of higher risk.
Theorem 4.1 (Soundness). For all \( B, A.r \), if \( B \overset{\kappa}{\leadsto} A.r \in \mathcal{G}_\mathcal{C} \), then \( (B, \kappa') \in \mathcal{S}_C(A.r) \) with \( \kappa' \preceq \kappa \).
The statement of completeness reflects that any assessed risk is the weight of some related path in the graph:
Theorem 4.2 (Completeness). For all $A.r$, if $(B, \kappa) \in \mathcal{S}_\mathcal{C}(A.r)$, then $B \stackrel{\kappa}{\leadsto} A.r \in \mathcal{G}_\mathcal{C}$.
4.2 Backward Chain Discovery Algorithm
As discussed in [15], any role $A.r$ is defined by its credentials. In centralized chain discovery, all credentials are maintained locally by assumption. In distributed chain discovery, some credentials may be stored remotely. Backwards chain discovery assumes that the credentials defining a role $A.r$ are obtained through the entity $A$, so that chains need to be reconstructed “backwards”, beginning with the governing role of an authorization decision. We now define a backwards credential chain discovery algorithm checkmem for RT$^R$, possessing the features described at the beginning of Sect. 4. We abstract the details of credential retrieval and risk assignment, other than their “backwards” nature, assuming that remote risk-weighted credentials can always be retrieved on demand (and cached, presumably). While forwards and mixed discovery techniques for RT are also discussed in [15], analogous techniques for RT$^R$ are beyond the scope of this paper. For brevity and clarity in the presentation, we describe the algorithm checkmem textually.
4.2.1 Definition of checkmem
The algorithm checkmem$(A, f, \kappa_{\max})$ reconstructs a proof graph, to check membership of $A$ in role $f$ within a given threshold $\kappa_{\max}$. The algorithm maintains the following mutable data structures: a queue of nodes to be searched, an association of solutions (risk assessments) with graph nodes, an association of solution monitors with graph nodes, discussed below, and an association of sets of search risks with graph nodes. Search risks are the accumulated risks along any discovery path to the given node; it is important to note that search risk associations are different from risk assessments. When a node is first encountered during search, it is added to the queue for future search. No node is added to the queue more than once.
Initially, the solution is a default mapping to the empty risk assessment, and every node is associated with an empty set of solution monitors and search risks. To begin the search, the node $f$ is added to the queue, and associated with the search risk $\perp$.
Nodes are taken from the queue individually for searching, but not indiscriminately; rather, only nodes that have a search risk below the threshold $\kappa_{\max}$ are searched. In this way, the algorithm short-circuits discovery along paths that are too risky. Over-threshold nodes are not removed from the queue, since future discovery might find a less risky path to that node. Hence, nodes wait in the queue until they are the next below-threshold node to be searched. The algorithm runs until there are no below-threshold nodes left in the queue, or until a solution for $A$ in $f$ below threshold $\kappa_{\max}$ is found.
Solution monitors propagate solution elements $(A, \kappa)$ forward along discovered edges, aggregating edge risks as they go; their control flow structure mimics the discovered graph structure. Whenever a monitor notifies a node $f$ to add a solution element $(A, \kappa)$, and there does not already exist $\kappa' \preceq \kappa$ such that $(A, \kappa')$ is in $f$’s solution (in which case we say the element is canonically new), the element is added to $f$’s solution, and all of $f$’s solution monitors are applied to it. There are three classes of solution monitors:
1. A role monitor for a given role $A.r$ and edge risk $\kappa$ is a function abstracted on solution elements $(B, \kappa')$, that notifies the node $A.r$ to add the solution element $(B, \kappa' \oplus \kappa)$ to its solution.
2. A linking monitor for a given linked role $A.r_1.r_2$ is a function abstracted on solution elements $(B, \kappa)$, that creates a role monitor for $A.r_1.r_2$ and $\kappa$, applies it to each known element of $B.r_2$’s solution, and adds it to $B.r_2$’s solution monitors to propagate solutions yet to be discovered. Finally, given all search risks $\kappa'$ of $A.r_1.r_2$, $\kappa' \oplus \kappa$ is added to $B.r_2$’s search risks, and $B.r_2$ is added to the queue if it hasn’t already been.
3. An intersection monitor for a given intersection role $f_1 \cap \cdots \cap f_n$ is a function abstracted on solution elements $(B, \kappa)$, that creates a role monitor for $f_1 \cap \cdots \cap f_n$ and $\perp$, and applies it to each element $(B, \kappa')$ in the canonical form of the assessment $R_1 \oplus \cdots \oplus R_n$, where each $R_i$ is the assessment of $f_i$ in the current solution.
Whenever nodes are taken from the queue according to the above described discipline, they are processed depending on their form:
1. To process an entity $A$, the node $A$ is notified to add $(A, \perp)$ as a solution to itself.
2. To process a role $A.r$, the credentials defining $A.r$ are retrieved. For each such credential $A.r \xleftarrow{\kappa} f$, a role monitor for $A.r$ and $\kappa$ is created, is applied to all of $f$’s known solutions, and is added to $f$’s solution monitors for propagating solutions still to be discovered. Finally, given all search risks $\kappa'$ of $A.r$, $\kappa' \oplus \kappa$ is added to $f$’s search risks, and $f$ is added to the queue if it hasn’t already been.
3. To process a linked role $A.r_1.r_2$, a linking monitor for $A.r_1.r_2$ is created, is applied to all of $A.r_1$’s known solutions, and is added to $A.r_1$’s solution monitors. Every search risk $\kappa$ of $A.r_1.r_2$ is added to $A.r_1$’s search risks, and $A.r_1$ is added to the queue if it hasn’t already been.
4. To process an intersection role $f_1 \cap \cdots \cap f_n$, an intersection monitor for $f_1 \cap \cdots \cap f_n$ is created, and added to each $f_i$. Every search risk $\kappa$ of $f_1 \cap \cdots \cap f_n$ is added to each $f_i$’s search risks, and each $f_i$ is added to the queue if it hasn’t already been.
When node processing for an invocation checkmem$(A, f, \kappa_{\max})$ halts, the algorithm returns true if there exists $(A, \kappa)$ in the solution of $f$ such that $\kappa \preceq \kappa_{\max}$.
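To make the control flow concrete, the following is a heavily simplified sketch of the search loop in C. It assumes an integer risk lattice with join (max) as aggregation, handles only simple membership credentials (closure rule 1), and replaces solution monitors with a direct rescan of a global credential list; linked and intersection roles, remote retrieval, and caching are elided, and all names are illustrative.

```c
#include <stdbool.h>

#define MAX_NODES 64
#define MAX_SOLS  64
#define MAX_CREDS 128
#define BOT 0                         /* least element of the risk lattice */

typedef struct {
    int  sol_entity[MAX_SOLS];        /* solution elements (entity, risk) */
    int  sol_risk[MAX_SOLS];
    int  nsol;
    int  search_risk;                 /* least search risk seen so far */
    bool queued;
} Node;

static Node nodes[MAX_NODES];
static int  queue[MAX_NODES], qlen;

/* credentials role <--k-- body, indexed globally for the sketch */
static int cred_role[MAX_CREDS], cred_body[MAX_CREDS], cred_risk[MAX_CREDS];
static int ncred;

static int risk_join(int a, int b) { return a > b ? a : b; }

static void notify(int n, int entity, int risk);

/* propagate a new solution element of `body` across all credential
 * edges leaving it, aggregating the edge risk as monitors would */
static void propagate(int body, int entity, int risk) {
    for (int c = 0; c < ncred; c++)
        if (cred_body[c] == body)
            notify(cred_role[c], entity, risk_join(risk, cred_risk[c]));
}

/* add (entity, risk) to node n's solution if it is canonically new */
static void notify(int n, int entity, int risk) {
    for (int i = 0; i < nodes[n].nsol; i++)
        if (nodes[n].sol_entity[i] == entity && nodes[n].sol_risk[i] <= risk)
            return;                               /* not canonically new */
    nodes[n].sol_entity[nodes[n].nsol] = entity;
    nodes[n].sol_risk[nodes[n].nsol++] = risk;
    propagate(n, entity, risk);
}

bool checkmem(int A, int f, int kmax) {
    qlen = 0;
    queue[qlen++] = f;
    nodes[f].queued = true;
    nodes[f].search_risk = BOT;
    for (;;) {
        int pick = -1;                 /* next below-threshold node; */
        for (int i = 0; i < qlen; i++) /* over-threshold nodes wait  */
            if (nodes[queue[i]].search_risk <= kmax) { pick = i; break; }
        if (pick < 0)
            break;                     /* no below-threshold node remains */
        int n = queue[pick];
        queue[pick] = queue[--qlen];
        if (n == A)                    /* entity node: solves itself */
            notify(n, A, BOT);
        for (int c = 0; c < ncred; c++)  /* role node: follow credentials */
            if (cred_role[c] == n) {
                int b  = cred_body[c];
                int sr = risk_join(nodes[n].search_risk, cred_risk[c]);
                if (!nodes[b].queued) {
                    nodes[b].queued      = true;
                    nodes[b].search_risk = sr;    /* only the least search */
                    queue[qlen++] = b;            /* risk kept: simplified */
                } else if (sr < nodes[b].search_risk)
                    nodes[b].search_risk = sr;
            }
        for (int i = 0; i < nodes[f].nsol; i++)
            if (nodes[f].sol_entity[i] == A && nodes[f].sol_risk[i] <= kmax)
                return true;           /* under-threshold solution found */
    }
    return false;
}
```

Note that the canonically-new test in notify is what prevents cycle traversal, mirroring the termination argument of Theorem 4.5 below, and that over-threshold nodes remain queued rather than being discarded.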
4.2.2 Properties
Assuming that defining credentials can always be obtained for any role, we assert that checkmem satisfies the following properties, demonstrating that it correctly reconstructs credential graphs. Since credential graphs are a full abstraction of the RT$^R$ semantics as discussed in Sect. 4.1, these results demonstrate that checkmem is a correct implementation of RT$^R$.
Theorem 4.3 (Soundness). checkmem$(A, f, \kappa_{\max})$ implies $A \stackrel{\kappa}{\leadsto} f$ for some $\kappa \preceq \kappa_{\max}$.
Theorem 4.4 (Completeness). $A \stackrel{\kappa}{\leadsto} f$ implies that checkmem$(A, f, \kappa)$ holds.
We also observe that the algorithm terminates, regardless of the given risk threshold. This is because nodes are never visited more than once, and solution monitors will not traverse any graph cycle, and hence are guaranteed to terminate. Solution monitors only propagate canonically new members, but traversal of a cycle necessarily causes a monotonic increase in a solution’s risk assessment, hence canonical containment in an existing solution.
Theorem 4.5 (Termination). For all $A$, $f$, and $\kappa$, checkmem$(A, f, \kappa)$ terminates.
4.2.3 Discussion
There are two particular instances where the definition of checkmem could be enhanced, for more eager short-circuiting of chain discovery in case risk thresholds are exceeded along discovery paths. First, observe that credentials are retrieved before being checked to see if their risks will force the discovery threshold to be exceeded. However, risks such as expected wait time suggest that it is more practical for credentials to be retrieved after ensuring they won’t overrun the threshold. A number of minor variations on checkmem can be imagined that will address this.
A more interesting enhancement is relevant to the propagation of search risks along discovery paths leading from intersection nodes. Observe that from any intersection role $f_1 \cap \cdots \cap f_n$, the search risks of $f_1 \cap \cdots \cap f_n$ are propagated to each $f_i$. However, this could be an under-approximation of search risks for any given $f_i$. For example, suppose that $A$ is being checked for authorization and $(A, \kappa)$ is known to be the only possible assessment of $A$ in $f_i$’s solution. When checking $f_n$, the search risks of $f_n$ inherited from $f_1 \cap \cdots \cap f_n$ could be aggregated with $\kappa$, since $\kappa$ is certain to be a component risk of any authorization supported by discovery from $f_n$. A generalization of this idea is beyond the scope of this paper, but is an interesting topic for future work.
5. APPLICATIONS
In this section we discuss interesting applications of RT\(^R\). Details of these applications are avenues for future work; here we describe how RT\(^R\) could be used to support trust management systems that incorporate notions of risk.
5.1 Trust but Verify
The Trust but Verify (TbV) framework [17] provides a setting for distributed trust management that takes into account a notion of trust for online authorization decisions, backed up by offline verification. Many realistic authorization decisions require “softening” of security in the online phase; this amounts to trusting the validity of certain assertions in this phase, that would otherwise be too expensive to verify. However, online trust should be specified so that sound offline verification is well-defined, providing formal certainty that offline verification supports online trust.
Any authorization decision in the TbV framework is abstractly specified as derivability of a target privilege $\text{priv}$ given a security context $s$, written $s \vdash \text{priv}$. Any instance of the TbV framework comprises a trust transformation that formalizes the definition of trust as a function mapping initial security contexts $s$ to contexts $[s]$, which contain assertions that are trusted solely for efficient online verification. Furthermore, the trust transformation should be reversible, via an audit technique that is required to reconstruct a security context at least as strong as the pre-image $s$ of any trust-transformed security context $[s]$. The audit technique is the implementation of offline verification. In [17], the TbV framework is developed using ABLP logic [1]. However, the RT framework is a more modern trust management system, with a variety of implementation techniques and variations [15]. The RT$^R$ variation offers a unique dimension of support for TbV, since trust can be encoded using definitions of risk in RT$^R$.
The TbV framework is characterized by three conditions, which we recount here. We show how RT$^R$ can be used to instantiate the framework in a system that satisfies these conditions. The first condition requires that authorization decisions are decidable:
Condition 1. Let \( s \) be an authorization context; then validity of \( s \vdash \text{priv} \) is decidable.
In RT\(^R\), authorization decisions are implemented as role membership decisions with an assessed risk, and security contexts are sets of credentials \( C \). That is, if the role \( A.r \) represents a target privilege and \( B \) is a privilege requester, then authorization amounts to discovery of \( B \sqsubseteq A.r \in C \), where \( \kappa \) must be within a specified threshold. The second condition specifies that auditing reverses trust transformation, though since trust transformations can be many-to-one, the context returned by auditing need not be the exact preimage of trust transformation:
Condition 2. Let $[s]$ be a trusted context; then success of $\text{audit}([s])$ entails $[\text{audit}([s])] = [s]$.
The last condition sufficiently strengthens the requirements of auditing to formally establish that any auditing is a sound verification of trust injected by the trust transformation:
Condition 3. Let $[s]$ be a trusted context; then if $\text{audit}([s])$ succeeds, for all $\text{priv}$ it is the case that $s \vdash \text{priv}$ implies $\text{audit}([s]) \vdash \text{priv}$.
The condition requires that auditing of a trust-transformed context must reconstruct a context that is at least as strong as the initial context, pre-trust-transformation. In RT$^R$, since authorization contexts are credentials $\mathcal{C}$, and the authorization decision includes a risk threshold, trust transformation may be implemented as the extension of a credential base $\mathcal{C}$ with additional “riskier” credentials, along with an increase in the tolerable risk threshold in chain discovery. Returning to the example in Sect. 3.2, the initial authorization decision could be to determine $Ed \stackrel{\kappa}{\leadsto} \text{Store.buyer}$ with $\kappa \preceq \text{medium}$. An online trust transformation could add Ed’s credential $\text{Acme.purchaser} \xleftarrow{\text{high}} Ed$ to the credential base, and tolerate $\kappa \preceq \text{high}$. Auditing in this case would entail removing Ed’s certificate from the credential base, and restoring the risk threshold $\kappa \preceq \text{medium}$. In fact, restoring the threshold alone would be sufficient, since lowering it back to medium eliminates the possibility of using Ed’s certificate in the proof of his membership in $\text{Store.buyer}$; in general, trust transformation and auditing could be implemented in RT$^R$ purely by modulation of risk thresholds in chain discovery.
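As a sketch of this last point, trust transformation and auditing by threshold modulation alone reduce to two calls into chain discovery at different thresholds. The code below reuses the illustrative risk lattice from Sect. 4; authorize is a hypothetical wrapper around checkmem, and only the entity and role names come from the example:

```c
#include <stdbool.h>

typedef struct credbase credbase;   /* the credential base C, left opaque */

/* hypothetical wrapper around checkmem-style chain discovery */
extern bool authorize(credbase *c, const char *entity,
                      const char *role, risk kmax);

bool online_decision(credbase *c) {
    /* online phase: tolerate high risk, trusting riskier credentials
     * such as Acme.purchaser <--high-- Ed */
    return authorize(c, "Ed", "Store.buyer", HIGH);
}

bool offline_audit(credbase *c) {
    /* audit: restore the medium threshold, so that high-risk
     * credentials can no longer support the proof of membership */
    return authorize(c, "Ed", "Store.buyer", MEDIUM);
}
```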
5.2 Cost/Benefit Analysis
Risk in RT\textsuperscript{R} is defined in an abstract manner. Although the examples in this paper have used atomic risk values, it is possible to define a risk ordering on compound risk values. For example, suppose both levels of “trustability” and expected wait times for retrieval of specific credentials are considered components of risk. The set $\mathcal{K}$ could then contain elements of the form $(\kappa, t)$, where $\kappa \in \{\text{low, medium, high}\}$ as in Sect. 3.2 and $t$ is a wait time represented as an integer, and:
$$(\kappa_1, t_1) \preceq (\kappa_2, t_2) \iff \kappa_1 \leq \kappa_2 \land t_1 \leq t_2$$
reflecting that lower wait times, as well as higher confidence in validity, define lower risk. Maximum risk in chain discovery would then specify both a tolerable level of trust, and a tolerable wait time for any particular credential.
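A minimal sketch of this compound ordering follows, reusing the illustrative risk enumeration from Sect. 4; taking join per component as the aggregation operator is itself an assumption (summing wait times along a chain would be an equally valid monotonic choice):

```c
/* compound risk: a (trust level, wait time) pair, ordered componentwise */
typedef struct {
    risk k;       /* trust level, from the three-point lattice */
    int  t;       /* expected wait time */
} crisk;

/* (k1,t1) <= (k2,t2) iff both components agree */
static int crisk_leq(crisk a, crisk b) {
    return a.k <= b.k && a.t <= b.t;
}

/* aggregation, taken here as join per component */
static crisk crisk_plus(crisk a, crisk b) {
    crisk r;
    r.k = a.k > b.k ? a.k : b.k;
    r.t = a.t > b.t ? a.t : b.t;
    return r;
}
```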
This suggests an interactive procedure for chain discovery, where the costs of raising the level of one component of risk could be balanced against benefits in another risk dimension. In the above scenario, if chain discovery in some instance fails given a threshold $(\kappa, t)$, chain discovery could be re-run with a higher threshold, but notice there is a choice of which element(s) of risk to raise. The cost of raising $\kappa$ can be balanced against the benefits in the time dimension, by re-running chain discovery with the threshold $(\kappa', t)$, where $\kappa \leq \kappa'$; the converse trade-off, raising $t$ while holding $\kappa$ fixed, is equally possible. This cost/benefit analysis would be further enhanced by optimizing chain discovery. The backward chain discovery algorithm presented in this paper ensures that risks are kept below a certain threshold, but does not attempt to optimize risk.
By extending chain discovery with optimization techniques, in the presence of compound risk, benefit dimensions could be optimized within a fixed cost dimension. For example, optimal wait times could be sought given a high level of trust risk. Development of optimizing algorithms is a topic for future work.
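A sketch of the interactive re-run procedure described above, assuming a hypothetical compound-risk variant checkmem_c of the discovery algorithm and the crisk type sketched earlier; the doubling of the wait-time budget is an arbitrary illustrative policy:

```c
#include <stdbool.h>

extern bool checkmem_c(int A, int f, crisk kmax);  /* hypothetical variant */

/* try discovery at the requested threshold, then trade one risk
 * dimension against the other before giving up */
bool negotiate(int A, int f, crisk kmax) {
    if (checkmem_c(A, f, kmax))
        return true;
    crisk raise_trust = { HIGH, kmax.t };      /* raise k, hold time fixed */
    if (checkmem_c(A, f, raise_trust))
        return true;
    crisk raise_time = { kmax.k, kmax.t * 2 }; /* or tolerate longer waits */
    return checkmem_c(A, f, raise_time);
}
```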
6. CONCLUSION
We now conclude with comments on related work and a short summary of the paper.
6.1 Related Work
Many trust management systems have been developed by previous authors. In such a system, resource owners write policy statements in a suitable policy language that describes the attributes of authorized users. When a request is made, the requesting entity provides signed credentials that prove the requester complies with the policy. Proofs are constructed automatically, and implement a formal semantics. Previous systems include BAN [7] and ABLP logic [1], PolicyMaker [6], KeyNote [5], SDSI/SPKI [16, 9], and RT [14, 15, 13], to name a few. However, our focus is not on trust management, but trust management extended with risk assessment.
Proof carrying authorization (PCA) [4, 3] is a framework for specifying and enforcing webpage access policies. It is based on ABLP logic, but includes primitives for detecting timestamp expiration. While this capability reflects some sense of risk assessment, it is not as general as the notion of risk expressed in our system.
In [2], semantics for a number of RT variants are obtained via embedding in constraint datalog. An implementation of “confidence levels”, similar to our notion of risk assessment, is suggested via the use of constraints, though not developed in detail. While it is possible that many interesting risk assessment schemes can be defined using RT\textsubscript{1} or RT\textsubscript{2}, we believe that defining a new RT variant to explicitly capture the notion of risk assessments is appealing in various respects. In particular, we are able to define risk in a general manner, and isolate issues related to online authorization with components of risk.
Dealing with trustworthiness in distributed systems has been an active research area (see, e.g., [10]). In [11], an algebra is provided for reasoning about trust in certificate chains. Our notion of risk is related to the notion of trust, and some relevant operators of [11] may be directly incorporated into our framework. Comparative expressiveness of risk and trust operators is an interesting research topic, but is beyond the scope of this paper.
6.2 Summary
In this paper we have defined RT\textsuperscript{R}, a role-based trust management framework with formal risk assessment. This system is a variation on RT [13], and includes the capability to associate credentials with risk, and to assess risk levels of authorization as the aggregated risks of authorization components. Risks are defined in an abstract manner, under the requirement that the set of risks be a complete lattice, with a monotonic aggregation operator. A formal semantics has been given, that associates role membership with risk levels. An algorithm has also been defined for implementation of this semantics, providing an automatic risk assessed authorization procedure. The algorithm is specialized for functionality in a distributed environment, and can be parameterized by risk thresholds, specifying a maximum tolerable risk for authorization. The algorithm is directed, to avoid proof paths whose aggregate risks exceed the given threshold, hence to risk as little as possible during the course of authorization.
7. REFERENCES
Quarterly Research and Development Technical Report
Spatial Data Management System
Computer Corporation of America
The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, express or implied, of the Advanced Research Projects Agency, or the United States Government.
Report Authors:
Christopher F. Herot, David Kramlich, Richard Carling, Mark Friedell, Jerry Farrell
Sponsor:
Defense Advanced Research Projects Agency
Office of Cybernetics Technology
ARPA Order Number: 3487
Contract Number: MDA903-78-C-0122
Contract Period: 15 February 1978 to 30 November 1979
Period Covered by Report: 1 September 1978 to 30 November 1978
# Table of Contents
1. INTRODUCTION
2. HARDWARE CONFIGURATION
   2.1 Lexidata Display
3. MOTION
   3.1 Storage Levels
       3.1.1 Tiles on the disk
       3.1.2 Tiles in core
       3.1.3 Data in the display
   3.2 Scrolling
       3.2.1 Motion Programs
       3.2.2 First implementation
       3.2.3 Second implementation
4. IMAGE PLANE EDITING
   4.1 Using the Editor
   4.2 Commands
   4.3 Mode Control
   4.4 Implementation
5. ICON CREATION
   5.1 INGRES
   5.2 SQUEL
   5.3 Icon Creation
Appendix A. Modules
   Icon Manager
   Icon Creation
   GDS Editor
   Menu Manager
Appendix B. ICDL
References
1. INTRODUCTION
This fourth quarter of work on the design and implementation of a prototype Spatial Data Management System (SDMS) resulted in the first operational version of the system. The bulk of this effort was devoted to constructing the mechanism through which a user can create, modify, and view an image plane.
An image plane is a flat surface upon which a user can store information. As that surface may be considerably larger than the display screen, the SDMS provides a window which can be moved over that surface in order to view parts of it. As the window is moved, the window's position is indicated on an auxiliary screen which serves as a navigational aid containing a map of the entire image plane.
This implementation of SDMS at CCA is the first such system to integrate the I-Space creation and viewing operations so that a user can create and modify a database without the need of a skilled computer professional.
Chapter 2 describes the hardware configuration used in the prototype, with special emphasis being placed on the display devices.
Chapter 3 describes the motion control mechanisms which allow the user to maneuver over an Information Space.
Chapter 4 describes the programs which allow a user to create and modify an Information Space.
Chapter 5 summarizes the work to date on the programs for creating Information Spaces from symbolic data stored in the INGRES relational data base.
Appendix A contains the module descriptions from the Detailed Design Document [HEROT et al.] which pertain to the work accomplished this quarter.
Finally, Appendix B describes those statements of Icon Class Description Language (ICDL) which have been implemented this quarter.
Plate 1 SDMS User Station
The monitor at left displays the navigational aid and menu.
The monitor at right displays the Information Space.
The tablet and joystick are in the foreground.
2. HARDWARE CONFIGURATION
The prototype CCA Spatial Data Management System is designed to provide an individual user with graphical representations of both private and shared databases. It employs a dedicated PDP-11/70 processor with a megabyte of memory and a variety of peripherals (see Figure 2.1). The most important of these is the Lexidata display, which provides the user with a view of the Information Space and various navigational aids.
2.1 Lexidata Display
The Lexidata 6400 is typical of the current generation of raster scan displays, although it has several valuable features which make it especially suitable for use with SDMS. It is a frame buffer, that is, it employs a memory which stores the individual picture elements (pixels) of an image in discrete memory cells. There are 311,680 such cells, providing for 487 lines of 640 pixels each. The contents of each cell are accessed as the electron beam scans the corresponding location on the display tube, a process which is repeated 30 times each second. Each memory cell consists of nine bits. Typically, eight of
these are used to specify the color of the corresponding pixel on the screen. The ninth bit is used to provide overlay functions such as cursors. The architecture of the currently installed Lexidata is shown in Figure 2.2.
Figure 2.2: Lexidata Architecture
While many raster scan displays rely on hard-wired logic to perform the refresh operation, the Lexidata uses a programmable video microprocessor. This feature provides the flexibility required to achieve the smooth scrolling and zooming effects required by SDMS.
The Lexidata also contains a Nova minicomputer which performs control operations and generates graphics (such as vectors and conics) in the frame buffer. Data can also be stored in the frame buffer via a high speed parallel connection to the PDP-11 interface.
In January, the currently installed Lexidata will be replaced by a newer version which has several important new features:
1. three independent color display channels
2. zooming in integer steps from 1 to 16
3. horizontal scrolling in single memory-pixel steps
4. vertical scrolling with wrap-around
5. fast updates of vertical columns of pixels
6. ability to synchronize to an external video signal
The first of these features will allow driving three CRTs from the one display system, in effect providing three color displays in one chassis. One of the displays will have nine bits per point as in the current system. The other two will provide four bits per point on each display. Upon installation of this system, the Ramtek GX-100B will be retired. The Ramtek is old and is becoming increasingly expensive to maintain and is already prohibitively expensive to expand or replicate.
The second new feature, integer zooming, is essential for letting the user make transitions between the different
image planes which make up an Information Space in SDMS.
The third feature, horizontal scrolling, is already implemented at scale 2 in the current Lexidata, but is not operational at higher scales and results in motion of the margins of the displayed image. While these are currently hidden by masking the corresponding area of the monitor, the new feature will solve the problem in the video signal itself, allowing direct recording of the output.
A fourth improvement allows the video processor's addressing of the Lexidata memory to wrap around at the end, permitting vertical scrolling over image planes higher than 480 lines.
A fifth important feature is a special mode to allow pixels in any arbitrary rectangle to be updated at 1 usec each, as long as the rectangle is aligned on a word boundary. This feature will reduce the time required to send the data from the PDP-11 to the display and allow smoother motion about the I-Space.
A sixth feature is the provision for synchronizing the video signal to an external source and encoding the output of the display for recording on video tape.
3. MOTION
A central component of SDMS is the facility which allows the user to move about the image plane upon which he stores his information. The user views these image planes through a window which he can maneuver. Currently, motion is implemented parallel to the image plane, and is controlled by pressing a joy stick in the direction in which the user wishes to travel.
The effect of motion is achieved by changing the portion of the image plane which is displayed on the user's CRT. As the frame buffer is not sufficiently large to contain an entire image plane, this must be accomplished by moving data from the disk, where the entire image plane is stored as an array of pixels, to the display. This process is referred to as staging and is described in detail in this section.
3.1 Storage Levels
The image planes of SDMS are stored at multiple levels in the system. The data appearing on the screen, together with a small amount of the surrounding area, is stored in the Lexidata frame buffer. A somewhat larger area is
stored in the memory of the PDP-11/70. The image plane is stored in its entirety on an RP04 88Mb moving head disk. Eventually, compressed representations of image planes may be stored at other locations on the Arpanet, such as the datacomputer. The nesting of these representations is illustrated in Figure 3.1. Motion across an image plane requires staging data from the disk to the PDP-11's memory to the Lexidata in order to maintain the required nesting.
3.1.1 Tiles on the disk
The image plane is broken into tiles for storage on the moving head disk. The size and distribution of tiles on the disk have been designed to minimize the amount of data that must be read into primary memory to satisfy motion in any given direction, and ensure that the time required to move the head to the appropriate tiles and read them in will be equal for any direction of motion over the image plane. The optimal tile size, determined with the aid of a simulation, was 128 pixels wide by 64 lines high for a screen with standard (4:3) aspect ratio. If motion were limited by disk time alone, such a storage scheme would allow one 640 x 480 screenful of data to be moved in the worst direction (diagonally) in about 3 seconds; at the more common scale 2 (320 x 240), the time would be about 0.8 seconds.
3.1.2 Tiles in core
The system attempts to keep all tiles containing the information currently on the screen, plus a suitable margin, in core at all times. The addresses of these tiles are stored in an array called the tile map. This array allows any reference to a point on the image plane to be mapped into the appropriate address in the core buffer.
Along with the address of the tile, each element of the tile map contains the disk address where the tile is stored and a number of flag bits which indicate whether the data in the core resident tile is valid and whether it has been modified.
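A sketch of this lookup in C follows. The tile dimensions are those given in Sect. 3.1.1; the map size, field names, and layout are assumptions for illustration:

```c
#define TILE_W  128                  /* tile size from the simulation */
#define TILE_H   64
#define MAP_DIM  16                  /* tiles cached per axis: assumed */

struct tile {
    unsigned char *core;             /* tile data in the core buffer */
    long           disk_addr;        /* where the tile lives on disk */
    unsigned       valid    : 1;     /* data read in and current? */
    unsigned       modified : 1;     /* must be written back on roll? */
};

static struct tile tile_map[MAP_DIM][MAP_DIM];
static int map_org_x, map_org_y;     /* image plane origin of the map */

/* map an image plane coordinate to the core address of its pixel;
 * assumes the point lies within the region covered by the tile map */
unsigned char *pixel_addr(int x, int y) {
    struct tile *t = &tile_map[(y - map_org_y) / TILE_H]
                              [(x - map_org_x) / TILE_W];
    if (!t->valid)
        return 0;                    /* tile still being read from disk */
    return t->core + ((y - map_org_y) % TILE_H) * TILE_W
                   + ((x - map_org_x) % TILE_W);
}
```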
The tile map allows the motion control system to ensure that a suitable margin of data is maintained in core at all times. When scrolling has caused the displayed area to deviate from the center of the data represented by the tile map, the table is rolled — that is, the data in it is shuffled so that the data represented in the center of the tile map is once again that which is in the center of the screen. This process is illustrated for the case of motion to the right in Figure 3.2. The tile addresses which have wrapped around from one margin to the other, shown as the shaded column in the figure, are flagged as being invalid. The motion control system then computes
the disk addresses necessary to fill these tiles with new data corresponding to the new margin and issues the appropriate disk read requests. The disk reading process flags each tile as containing valid data when it has been read in.
The tile map is also used by programs which modify the image on the screen and/or on the disk. Both the Graphical Data Space editor, described in Section 4, and the icon generation programs, described in Section 5, use the tile map to determine which core locations must be modified to perform a primitive image creation function. After modifying the appropriate image data, such a program sets the corresponding modification bits in the tile map entry, causing the system to write the modified tiles back to the disk when they are rolled out of the map.
A program may make additions to the image plane without setting the modification bit. Such a technique will be used for the output of the BLINK and FRAME statements, which will cause icons to be modified while they are on the screen but will not change them permanently on the disk. In this case, the modifications must be repeated each time a tile is read in from the disk.
3.1.3 Data in the display
In order to be seen on the CRT, image plane data must be resident in the Lexidata display. The system endeavors to maintain a margin of data around the visible image. As the user causes the display to scroll into this margin, the system re-uses the memory left behind, writing new image data into it and thus preparing a new margin.
The re-use of the Lexidata memory requires that the display refresh logic address the memory in such a way that the memory appears to "wrap-around" from line to line and from top to bottom. The data in the display is stored as a set of contiguous scan lines. This fact, together with the limited size of the display's memory, requires that only those portions of tiles which are visible, together with a small margin, be sent to the Lexidata from the PDP-11. These portions are composed of stripes of data which the motion software writes into the margin just ahead of the current direction of travel (see Figure 3.3).
Figure 3.3: Data Staging for Scrolling (disk to PDP-11 memory to Lexidata; the visible area and direction of motion are indicated)
3.2 Scrolling
The operation most fundamental to SDMS is scrolling - the means by which the user manipulates his view of the image plane. Scrolling is performed by changing the point in the buffer at which the display refreshes the screen while simultaneously writing new data into that refresh buffer, producing the effect of continuous motion over a surface which may be many times larger than the refresh buffer itself. This process is best understood by visualizing the refresh buffer as a linear progression of memory cells, as illustrated in Figure 3.4. The state of the display when the hardware is initialized is shown in Figure 3.4(a). Starting at the point labeled "start of display refresh", the video output processor reads contiguous locations in memory until it has filled the screen. This action is repeated 30 times each second.
If the start of display pointer is incremented by one line width, so that the display starts at the second line in memory, as shown in Figure 3.4(b), that second line in memory will now be the first line to be displayed on the screen, causing a scroll in the vertical direction. Note that the display refresh will reach the end of memory
Figure 3.4(a): Display in initial state
before it reaches the bottom of the screen. How it chooses the next line determines whether scrolling can be repeated indefinitely or not. To achieve continuous scrolling, the refresh processor's memory addressing scheme must wrap around, such that once it reaches the end of memory, the refresh continues uninterrupted from the beginning of memory. If the address of the last word of physical memory is also the last address in the address space, that is:
\[ \text{last-memory-location} = 2^{\text{number of bits in an address}} - 1 \]
then the desired wrap-around follows directly from the memory design. Otherwise, the refresh processor must be capable of detecting the end of memory and restarting at the beginning.
Continuous scrolling also requires that new data be written into the refresh buffer. In order for this to be accomplished without any visible artifacts, some number of lines in the memory are reserved for use as a margin and not displayed. New data is written into this margin which, given the above specified wrap-around, is the area of memory immediately preceding the start of the display pointer. This new data is then scrolled onto the screen, freeing up a new portion of memory which, in turn, becomes the margin.
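The pointer arithmetic involved is small enough to sketch directly; the constants below match the current Lexidata (640 pixels per line, 487 lines of memory), and the wrap-around is the modulo step:

```c
#define LINE_W    640                     /* pixels per scan line */
#define MEM_SIZE  (LINE_W * 487)          /* refresh buffer, 487 lines */

static long start_of_display;             /* where refresh begins */

/* vertical scroll by one line: the second line in memory becomes
 * the first line on the screen */
void scroll_vertical_one_line(void) {
    start_of_display = (start_of_display + LINE_W) % MEM_SIZE;
}

/* horizontal scroll by dx pixels: every line shifts left by dx; a
 * full screen width of horizontal scroll equals one line of vertical */
void scroll_horizontal(int dx) {
    start_of_display = (start_of_display + dx) % MEM_SIZE;
}
```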
Figure 3.4(b): Display scrolled vertically by 1 line
Figure 3.4(c): Display scrolled by .5 screen width
Horizontal scrolling is performed in a similar manner. Consider the effect of moving the start-of-display pointer by some fraction of a line, as shown in Figure 3.4(c). This has the effect of shifting every line of the display over to the left by that amount. Just as vertical scrolling results in data which was formerly at the bottom of the screen appearing at the top, horizontal scrolling results in data which was at the beginning of one line showing up at the end of the preceding line. If there is sufficient undisplayed space between lines (as shown by the gaps between the heavy lines in the figures) new data can be written into these gaps before they are scrolled onto the screen. These gaps are precisely the same as the margin depicted to the right of the screen in figure 3.3. Note that a display which has been scrolled horizontally by one screen width is in precisely the same state as one which has been scrolled vertically by one scan line. This phenomenon permits a significant degree of horizontal scrolling on displays without memory wrap-around, as only one scan line must be given up for every desired screen width of scroll. (The current Lexidata with 480 visible lines and 487 lines in memory thus permits scrolling over a space seven times as wide as it is high.)
Zooming the display can be illustrated by the same model. As shown in Figure 3.4(d) the display in the zoomed state
Figure 3.4(d): Display zoomed to scale 2
uses lines which are shorter and fewer in number, corresponding to the chosen zoom factor. At scale 2, there are half as many lines, each half as long as at scale 1. Such a display could be scrolled over an area which was two screen widths on a side without using any memory wrap-around. (In the current Lexidata, by re-using the bottom seven lines as described above, an area which was 2 screens high by 14 screens long could be traversed.)
3.2.1 Motion Programs
Scrolling is supervised by a program known as the navigator, which is responsible for five activities:
1. Sending the command to the Lexidata to update the point in its buffer at which it starts its display, thereby causing the image to scroll. This operation must be done at constant intervals if the scrolling is to appear even. It must also be done at least 30 times each second if the scrolling is to appear smooth.
2. Sending a command to the navigational aid telling it to update the position of the cursor.
3. Sending data to the Lexidata to maintain the margin of image data which the scroll command can move into. Enough data must be sent to ensure that the Lexidata always has new data to scroll onto the screen.
4. Causing new tiles to be read in from the disk so as to maintain enough data in the PDP-11's memory to be sent to the Lexidata when needed. Once again, this must be done fast enough so that there is always data to feed the Lexidata.
5. Maintaining status information (location, scale, coordinate mappings, etc.) for use by other processes, e.g. in interpreting queries relating to objects on the screen.
Individually, none of these activities requires an inordinate amount of time. Taken together, they present formidable problems if they are to be orchestrated in such a manner as to collectively meet all of the criteria set forth above. Accordingly, the solution required two iterations, which are outlined below.
3.2.2 First implementation
The initial implementation of SDMS motion employed four processes communicating through pipes and shared memory. The four processes were (pictured in Figure 3.5):
Figure 3.5: Motion Processes (the navigator, DISKIN, FEEDER, and NAVAID processes; scroll and feed pipes; the LPS clock; and the Lexidata and Ramtek displays)

1. The navigator, which read the joy sticks, updated current status, and dispatched commands through pipes to the other processes.
2. The disk reader, which read tiles in from the disk.
3. The feeder, which sent scroll requests and data to the Lexidata upon receiving commands from the navigator.
4. The navigational aid, which maintained the position marker on the world view map.
The tile map, the tiles themselves, and various parameters of the system were stored in a section of core accessible to all SDMS processes, eliminating the necessity of sending large quantities of data through pipes. The commands, however, were sent through pipes to achieve the necessary synchronization.
This approach served two purposes:
1. It allowed the four activities described above to take place asynchronously.
2. It kept the size of each program down to a level which would fit into the available address space of a PDP-11.
Unfortunately, such an elegant solution was also too inefficient. Motion through the image plane was slow and erratic. The problem was traced to the excessive overhead of using so many processes.
The navigator had two pipes for communicating with the feeder in order to allow scroll and feed requests to be processed in the proper order. While scroll requests were short and frequent (every 33 msec), feed requests tended to come in bunches. The intention was that the navigator would place these requests in the appropriate pipes as it generated them and then the feeder would process them, giving scroll requests the priority they required.
In practice, once the feeder had started to process the feed requests, there was no way of ensuring that the navigator would be scheduled to run in time to read the joystick before another frame time (33 msec) had elapsed. Inserting extra steps in the navigator-feeder pipe protocol to ensure that the navigator did run resulted in an inordinate time being spent in the Unix scheduler and its associated context switching. (Experiments indicated that each process switch via pipe I/O cost at least 5 ms.) In short, the four process approach had to be abandoned.
3.2.3 Second implementation
Reducing the number of processes in the SDMS motion facility required a modification to Unix to allow a single process to control more than one i/o operation. The original design of Unix did not permit a process which requested an i/o operation to run again until that operation had been completed. This restriction freed the system's designers from providing for the eventuality that such a process would decide to terminate itself while an i/o request was still pending, a situation which could result in some data being read into core after that core had been re-assigned to some new process.
Instead, the designers of Unix encouraged the programmer to deal with such situations by use of multiple processes communicating through pipes. Unfortunately, the experience related above established that this solution was not satisfactory.
Accordingly, BBN was contracted to install a modification to the block i/o routine (physio) to allow the process calling it to initiate an i/o request, continue execution, and check at a later time for completion of the request. By restricting such use to data transfers involving the shared large core buffer area (LCBA) the danger outlined above was eliminated. This change is not complex and can easily be installed in other Unix systems.
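For comparison, the same initiate-continue-check pattern can be written today with POSIX asynchronous i/o, an interface that did not exist at the time; the sketch assumes a host that provides it:

```c
#include <aio.h>
#include <errno.h>
#include <string.h>

/* initiate a read of one tile and return immediately, as the
 * modified physio allowed */
int stage_tile(int disk_fd, void *buf, size_t len, off_t where,
               struct aiocb *cb) {
    memset(cb, 0, sizeof *cb);
    cb->aio_fildes = disk_fd;
    cb->aio_buf    = buf;
    cb->aio_nbytes = len;
    cb->aio_offset = where;
    return aio_read(cb);             /* caller keeps running */
}

/* check at a later time for completion of the request */
int tile_ready(struct aiocb *cb) {
    if (aio_error(cb) == EINPROGRESS)
        return 0;                    /* still pending; poll again later */
    return aio_return(cb) >= 0;      /* completed; nonnegative = success */
}
```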
The navigator and feeder processes were then combined into one task, as illustrated in Figure 3.6. This required making use of split instruction and data spaces, a feature not especially well supported in Unix.
A new motion system was constructed combining the navigator and the feeder into one process. This change resulted in significantly improved speed and smoothness of motion.
4. IMAGE PLANE EDITING

The prototype SDMS includes a set of tools for creating and modifying image planes and their associated navigational aids. These tools are integrated into the motion facilities so that the user can move to the location to be edited in the same manner as he would move to it to peruse the data located there. Once there, pressing the stylus to the data tablet activates the image creation and editing facilities. The various operations are controlled by selecting from a menu displayed on the key map monitor. These operations allow the input of graphic primitives, such as lines and circles, and permit the user to modify the operation of these input operations, such as by specifying colors and widths of lines.
While the image plane editing programs bear a strong resemblance to similar programs for graphic illustration developed in laboratories at Xerox PARC, MIT, and New York Institute of Technology, the programs in SDMS are the first to offer such tools together with the ability to move continuously about a drawing surface which is arbitrarily larger than the display screen. In addition, the multiple screen capability of SDMS, by allowing the menu to be placed on a separate screen from the image being created, permits the display of all possible menu options,
eliminating the need to choose between providing a complicated layering of commands or reducing the area of the display available for display of the painting surface.
4.1 Using the Editor
The editing program makes use of two screens. The main screen presents a view of the image plane. Pressing the pen to the tablet causes a color palette to appear at the bottom of the screen and activates the editor.
An auxiliary screen presents a menu of operations which may be performed. Each selection is indicated by a colored rectangle containing the name of the operation which it invokes. The user indicates the desire to make a selection from the menu by moving the stylus to the left hand side of the tablet. At this point, the cursor appears on the auxiliary screen and can be positioned over the desired menu button. Pressing the stylus to the tablet enters the selected mode, which is indicated by the button being framed in color. If the selected operation requires the input of a coordinate from the image plane, the cursor moves back to the main screen and tracks the stylus there.
4.2 Commands
This section describes each of the facilities available to the user of the image plane editor.
COLOR specifies the color to be used in subsequent operations. After touching the color button, the user points to any place on the screen to indicate the desired color. The cursor under the palette moves to indicate the new color which was selected. This mode is used primarily to select a color which has already been used in the image. The user may choose a color from the palette at any time without using the color button merely by touching the pen to the desired spot on the palette.
INK allows the user to input free-hand lines and curves, much as if he were drawing on the image plane with a regular pen in the color most recently selected.
RECT accepts two points and draws a rectangle between them in the specified color. While the pen is near the tablet, a white rectangular cursor is displayed to show the location of the rectangle which will be drawn when the pen is depressed, so as to allow the user to position it exactly.
CIRC similarly allows the input of circles.
TEXT prompts the user to type in the desired text string and specify a size. The cursor then assumes a rectangular shape of the same size as the text string. When the pen is touched to the tablet, the text is entered at the specified position in the specified color.
FLOOD fills a shape with color.
STRETCH allows input of "rubber band lines" which stretch from the first digitized point to the pen as it is moved about, allowing the easy input of straight lines.
PICK allows a rectangular area to be copied from one place to another, with optional scaling.
WIDE is like INK but with a wider pen point.
AIR allows the input of texture.
4.3 Mode Control
Two additional commands alter the effect of the preceding commands.
GRID sets up a rectangular grid to which coordinate input can be forced to aid in drawing regular geometric shapes.
MIX allows the specification of new colors by mixing the three primary colors.
4.4 Implementation
As mentioned previously, the CCA SDMS is unique in allowing a user to draw on a surface much larger than the display and to scroll over that surface. This feature requires that graphical operations be performed on the image plane as it is stored in the PDP-11 core buffer. Such an approach allows operations that span a width greater than that of the display. It also allows implementing features not included with the display, such as shape filling and high quality text. In addition, the same low level routines can be used by the picture construction program described in Section 5.
The GDS editor uses the tile map described in Section 3.1.1 to map the coordinates of points to be modified into core addresses. When a location is modified, its containing tile is flagged as having been updated so that it can be written back to the disk. The result of the operation is also displayed on the screen as feedback to the user, so that the display and the core buffer are always in agreement.
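Combining the tile map lookup sketched in Sect. 3.1.2 with the modification flag gives the editor's write-through discipline, roughly as follows; send_to_display is a hypothetical stand-in for the Lexidata update path:

```c
extern void send_to_display(int x, int y, unsigned char color);

/* write one pixel through the tile map, flag its tile for write-back,
 * and echo the change to the screen */
void gds_put_pixel(int x, int y, unsigned char color) {
    unsigned char *p = pixel_addr(x, y);   /* from the earlier sketch */
    if (!p)
        return;                            /* tile not resident yet */
    *p = color;
    tile_map[(y - map_org_y) / TILE_H]
            [(x - map_org_x) / TILE_W].modified = 1;
    send_to_display(x, y, color);          /* keep screen and core buffer
                                              in agreement */
}
```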
5. ICON CREATION
Early in the fifth quarter of the SDMS project a new feature will be demonstrated that has never before been possible with a Spatial Data Management System: the ability to use SDMS as a view of a symbolic database management system. This will be accomplished through a set of tools which can populate an I-Space with icons generated from data in a DBMS. These tools are a combination of three facilities:
1. A symbolic database management system. In this prototype, the DBMS is INGRES, a relational database developed at the University of California at Berkeley.
2. A language for specifying the appearance of an icon as a function of data in the DBMS. This Icon Class Description Language (ICDL) is interpreted once for each tuple to be displayed.
3. An interface between the DBMS and ICDL. This interface is in the form of a query language called SQUEL, which is an extension of the INGRES query language, QUEL.
5.1 INGRES
INGRES is a relational database management system developed expressly to run under Unix. In place of the files, records, and fields of conventional DBMS's, INGRES provides relations, tuples, and attributes. A tuple typically contains information about one particular entity. For example, in a database of ships, there may be a relation giving each ship's location. Within that relation, there would be a tuple for each ship, containing the name of the ship and its location.
The implementation of INGRES includes a query language, QUEL, which allows a user to enter and retrieve information. Retrieval is typically accomplished by specifying a relation and some qualification for selecting the tuples to be retrieved. For example, to retrieve all ships flying the U.S. flag, the user might type:
```
RETRIEVE SHIPS WHERE (SHIPS.COUNTRY="US")
```
More complex retrievals can be composed by combining qualifications on one or more relations and attributes within relations.
Another useful feature of INGRES is EQUEL, a facility which allows a programmer writing in C, the implementation language of Unix, to access the database. Partially parsed QUEL statements can be passed through EQUEL to INGRES, where they are processed, passing the results back to the user program. EQUEL allows such user programs to make use of all of the power of INGRES without needing intimate knowledge of the internals of the DBMS, an aid to the implementation and portability of such programs.
5.2 SQUEL
The queries of the symbolic database management system are virtually the only typing required by SDMS. Accordingly, the SQUEL monitor program is also the command interpreter for SDMS. Expressions typed by the user are parsed with YACC [JOHNSON], a parser generator included as part of Unix. If the expression is a valid QUEL command, it is passed on to INGRES. If it is one of the SDMS extensions to QUEL, the SQUEL monitor will set various operations in motion, possibly including the entering of one or more QUEL requests.
In the current discussion, the most significant SQUEL statement is the ASSOCIATE statement. This statement functions very much like the RETRIEVE statement of QUEL. Rather than print the resulting tuples, however, the ASSOCIATE statement passes them off to the association processor which, in turn, passes them one at a time to the icon creation module which actually interprets the ICDL and creates the icons. The association processor also maintains a record of all icons and their corresponding tuples for use in updating the graphical data space if the INGRES database is changed. These processes are shown schematically in Figure 5.1.
The full format of the `associate` statement is as follows:
```
ASSOCIATE [relation] USING [icdl_description]
WHERE [qualification]
```
The relation and qualification are exactly as in QUEL. The `USING` clause allows the specification of one of a number of ICDL descriptions. The SQUEL interpreter builds a file containing the format of the relation, the selected tuples, and the name of the ICDL. It then informs the association processor via a pipe that an association is waiting to be processed. The association processor then processes the tuples one at a time, building a symbol table for each one containing the values of the attributes of the tuple. This symbol table is then passed to the icon-creation module.
Figure 5.1: Icon Creation Processes
5.3 Icon Creation
The icon creation module actually turns the symbolic data from INGRES into a graphical representation. For each tuple supplied by the association processor, it interprets an Icon Class Description which describes how the icon should be drawn as a function of the symbolic data.
Icon Class Descriptions are typically written by the database administrator for use by a number of users. Such descriptions allow the shape, size, color, and text of an icon to be specified. The Icon Class Description used to generate the icons shown in Plate 3 is given below:
```
icon class shipbyclass(r) of relation ship
begin
    maximum size is (100,100);
    position is (r.class*900+150,210);
    use picture 1
    begin
        picture 1
        begin
            image plane 0
            begin
                template icon 0;
                attribute region r.type from (10,70) to (90,80);
                attribute region r.name from (10,85) to (90,95);
            end;
        end;
    end;
end;
```
The relation ship contains tuples having the attributes class, type, and name.
The first line of the program specifies that this is an icon class description, named `shipbyclass`, which expects a tuple variable `r` and is to be used with the relation `ship`.
Following the `begin`, the `maximum size` statement tells the icon manager how much space to allocate for this icon.
The `position` statement tells the icon manager where to try placing the icon. In this case, the integer value of the `class` attribute is used in an expression to divide the Information Space into regions along the x axis, so that the region in which an icon is placed is a function of its class. For example, ships of classes 0, 1, and 2 receive target x positions of 150, 1050, and 1950, respectively.
The `use picture` statement selects the picture block from the following lines of code which has as its argument the same expression. In this particular example, there is only one picture block.
The `image plane` statement specifies that the following block contains information for image plane 0, the highest level image plane and the only one supported at this time.
The `template icon` statement specifies a template which is to be used as the background of all of the icons to be generated. It is drawn with the aid of the Graphical Data Space Editor described in the preceding section.
The two attribute region statements cause the character strings extracted from the tuple attributes type and name to be placed at the indicated locations within the icon.
As each icon is created, it is passed to the Icon Manager, a module responsible for keeping track of all icons and ensuring that none of them overlap. If the target position specified by an ICD is already occupied, the icon manager will attempt to place the icon at the nearest empty location.
Plate 3 The Information Space
The icons were generated from the ICDL of chapter 5.
Plate 4 Icon Being Painted
White rectangle indicates position and size of text which is about to be placed.
Plate 5 Completed Icon and Scaled Down Copies
Note that the icon has been partially scrolled in order to gain access to a blank area on which to place the scaled down copies.
NAME
add_icon - adds an icon to the icon database.
SYNOPSIS
/* adds an icon */
long add_icon(move_flg, coord, size, parent, source, type)
/* adds an attribute region */
long add_attr_icon(move_flg, coord, size, parent, source, id)
/* adds a port to a coordinate in the GDS */
long add_gds_port(move_flg, coord, size, parent, source, t_coord, scale)
/* adds a port to an icon */
long add_icon_port(move_flg, coord, size, parent, source, t_icon, scale)
/* adds a port to a UNIX process */
long add_proc_port(move_flg, coord, size, parent, source, program, arg1, arg2)
int move_flg; /* can icon be moved for a fit? */
struct gds_coord coord; /* GDS coord of icon */
struct gds_size size; /* extents of icon */
long parent; /* parent icon */
int source; /* class assoc or GDS editor */
int type; /* icon type */
int id; /* id for attribute region */
/* following fields are for ports */
struct gds_coord t_coord; /* GDS coord for target */
int scale; /* scale for target */
long t_icon; /* icon id for target */
char *program; /* program for UNIX process port */
char *arg1; /* arg1 for program */
char *arg2; /* arg2 for program */
where the structures are:
struct gds_coord
{ int I_space; /* I-Space id */
float x,y; /* universal coord in GDS */
int z; /* plane number */
};
struct gds_size
{ float x_ext,y_ext; /* extents in GDS */
int z_ext; /* (number of planes) */
};
DESCRIPTION
These routines add icons to the database of the icon manager. If the icon fits at the requested location, the icon is added. If it does not fit, and if move_flg is true, the area nearby is searched for a place to put it. If a place is found, it is added, otherwise, the call fails.
add_icon - adds most types of icons.
add_attr_icon - adds icons of type attribute region.
add_gds_port - adds a port that has a GDS coordinate as its target.
add_icon_port - adds a port that has an icon as its target.
add_proc_port - adds a port that has a UNIX process as its target.
DIAGNOSTICS
Each of these routines returns:
icon id - of the new icon if successful.
-1 if unsuccessful for access or limitation reasons.
-2 if icon will not fit.
FILES
"icons": the file holding all icon data.
"ports": the file holding port data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
"deleted": file of deleted icon id numbers.
NAME
delete_icon - deletes an icon from the icon manager's database.
SYNOPSIS
delete_icon(id)
long id; /* icon to delete */
DESCRIPTION
Deletes the given icon if it exists. Any children are also deleted.
DIAGNOSTICS
Returns:
0 if successful
-1 if icon does not exist.
FILES
"icons": the file holding all icon data.
"ports": the file holding port data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
"deleted": file of deleted icon id numbers.
NAME
move_icon - moves an icon within the GDS in the icon manager's database.
SYNOPSIS
move_icon(icon_id,move_flg,coord)
long icon_id; /* id of icon to move */
int move_flg; /* true if "nearby" coord is ok */
struct gds_coord coord; /* target coordinate for icon */
where the structures are:
struct gds_coord
{
int I_space; /* I-Space id */
float x,y; /* universal coord in GDS */
int z; /* (plane number) */
};
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
This routine finds a new position for the given icon. It should be used before the icon is actually moved in the GDS. This call reserves the space for the new location. If it is successful, picture construction routines can be called to actually move the icon.
If move_flg is false and the icon does not fit at the specified location, the call is unsuccessful. If move_flg is true and it does not fit, the surrounding region is searched for a place. If found, the call is successful.
DIAGNOSTICS
Returns:
1 if successful but not in the requested location (another call to the icon manager must be made to get the new location)
0 if successful at the requested location
-1 if icon does not exist already
-2 if space could not be found
FILES
"icons": the file holding all icon data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
NAME
size_icon - change the size of an icon.
SYNOPSIS
size_icon(icon_id, move_flag, size)
long icon_id ; /* id of icon to change size */
int move_flag ; /* true if "nearby" coord is ok */
struct gds_size size ; /* new size of icon */
where the structures are:
struct gds_size
{
float x_ext, y_ext ; /* extents in GDS */
int z_ext ; /* (number of planes) */
} ;
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
Updates the size of an icon. If the new size is smaller in all dimensions, the icon does not move. If it is larger and it fits in its current place, it does not move. If it does not fit and the move_flag is true, the region nearby is searched for placing the icon. If no space is found, this call fails. This routine should be called before the picture in the GDS is changed.
DIAGNOSTICS
Returns:
1 if successful but not in the requested location (another call to the icon manager must be made to get the new location)
0 if successful and icon is in same location
-1 if icon id is invalid
-2 if it could not fit anywhere
FILES
"icons": the file holding all icon data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
NAME
upd_icon - updates the fields of an icon except for size and position
SYNOPSIS
/* updates an icon */
long upd_icon(icon_id,move_flg,parent,source,type)
/* updates an attribute region */
long upd_attr_icon(icon_id,move_flg,parent,source,id)
/* updates a port to a coordinate in the GDS */
long upd_gds_port(icon_id,move_flg,parent,source,t_coord,scale)
/* updates a port to an icon */
long upd_icon_port(icon_id,move_flg,parent,source,t_icon,scale)
/* updates a port to a UNIX process */
long upd_proc_port(icon_id,move_flg,parent,source,program,arg1,arg2)
long icon_id; /* id of icon to update */
int move_flg; /* can icon be moved for a fit? */
long parent; /* parent icon */
int source; /* class assoc or GDS editor */
int type; /* icon type */
int id; /* id for attribute region */
struct gds_coord t_coord; /* GDS coord for target */
int scale; /* scale for target */
long t_icon; /* icon id for target */
char *program; /* program for UNIX process port */
char *arg1; /* arg1 for program */
char *arg2; /* arg2 for program */
where the structures are:
struct gds_coord
{ int I_space; /* I-Space id */
float x,y; /* universal coord in GDS */
int z; /* (plane number) */
};
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
Updates an icon. The fields for position and size must be changed with move_icon and size_icon, respectively. Other fields may be changed with this call.
DIAGNOSTICS
Returns:
0 if successful
-1 if icon id is invalid
FILES
"icons": the file holding all icon data.
"ports": the file holding port data.
NAME
get_icon - retrieves the data for an icon.
SYNOPSIS
get_icon(icon_id)
long icon_id;
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
Retrieves the data for an icon given the icon id number.
DIAGNOSTICS
Returns:
0 if successful, with data in shared buffer
-1 if icon does not exist
FILES
"icons": the file holding all icon data.
"ports": the file holding port data. (files)
NAME
who_point - find the icons which touch a specified point in the GDS.
SYNOPSIS
who_point(coord)
struct gds_coord coord; /* GDS coord for query */
struct gds_coord
{
int I_space; /* I-Space id */
float x, y; /* universal coord in GDS */
int z; /* (plane number) */
};
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
Finds all icons which touch the given point in the GDS. If more than one icon touches the point, they must be nested icons. This routine returns a list of icons in order from highest parent to lowest child. The I-Space icon is never included.
ALGORITHM
The region surrounding the point in question must be loaded into core. The algorithm then searches for one icon which touches that point, traces to the top-most parent, and returns it together with all its descendants which touch the point.
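A minimal sketch of the parent-tracing step, assuming a hypothetical accessor icon_parent() over the data returned by get_icon() (returning the parent id, or 0 for a top-level icon):
```c
long icon_parent(long id);   /* hypothetical accessor */

/* Walk from any icon touching the point up to its top-most parent. */
long topmost_parent(long id)
{
    long p;
    while ((p = icon_parent(id)) > 0)
        id = p;
    return id;
}
```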
DIAGNOSTICS
Returns:
0 always, with the icon id numbers in a shared buffer.
FILES
"icons": the file holding all icon data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
NAME
who_region - returns all the icons within a region of the GDS.
SYNOPSIS
who_region(coord, size)
struct gds_coord coord ; /* GDS coord for query */
struct gds_size size ; /* size of region */
struct gds_coord
{
int I_space ; /* I-Space id */
float x, y ; /* universal coord in GDS */
int z ; /* (plane number) */
} ;
struct gds_size
{
float x_ext, y_ext ; /* extents in GDS */
int z_ext ; /* (number of planes) */
} ;
PROCESS MEMBERSHIP
icon_manager
DESCRIPTION
Finds all icons which touch the given region in the GDS. The icons are returned in a list which is not sorted in any way.
ALGORITHM
The specified region is loaded into core and searched. All icons touching the region are flagged. When the search is done, the id numbers of the flagged icons are put into a shared buffer.
DIAGNOSTICS
Returns:
0 always, with the icon id numbers in a shared buffer
FILES
"icons": the file holding all icon data.
I-Space directory: files which hold data for quick access to icons for a given region of an I-Space.
Icon Creation
The icon creation module is the only module which executes ICDL. Its purpose is to execute ICDL and to guide the creation of icons through the use of the picture construction routines.
Data structures
**per icon data:**
- local id number (used for referencing sub-icons),
- local id number of parent icon, maximum x and y extent, target GDS coord of origin of icon,
**per i-plane data:**
- source icon id, scale, orientation, (x,y) of upper-left corner of free text region, x and y extent of free text region
**per color statement:**
- color, (x,y) of position for filling
**per text statement:**
- text string, (x,y) of upper-left corner of text region, x and y extent of text region
**per update region statement:**
- text string, update region id, (x,y) of upper-left corner of region, x and y extent of region
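As a concrete illustration, the per-icon record might be rendered in C roughly as follows; the field names and types are assumptions based on the list above:
```c
/* struct gds_coord as defined in the icon manager interface */
struct gds_coord { int I_space; float x, y; int z; };

/* hypothetical C rendering of the per-icon data listed above */
struct icon_data {
    int   local_id;          /* used for referencing sub-icons */
    int   parent_id;         /* local id number of parent icon */
    float max_x, max_y;      /* maximum x and y extent */
    struct gds_coord origin; /* target GDS coord of origin of icon */
};
```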
5.4.2 Icon Creation Functions
The icon creation process falls into four parts:
1. Compile the icon class description, if necessary.
2. Execute the compiled file to obtain the parameters for icon construction.
3. Reserve space for the icon in the GDS.
4. Post the picture in the GDS using picture construction routines.
Each part is a separate routine in the following description.
NAME
create_icon - given a tuple, an icon class description, and an I-Space identifier, creates an icon for the tuple and places it in the GDS.
SYNOPSIS
create_icon(tuple, iid, assoc_id)
struct tuple *tuple ; /* the entity tuple */
char *iid ; /* icon class to use */
int assoc_id ; /* id for class association to use */
tuple is the linked tuple for the icon to be created.
iid is a Unix file name where the icon class description resides; if the description has not yet been compiled, it is compiled first by the ICDL compiler. i_space is an I-Space identifier.
PROCESS MEMBERSHIP
icon_creation
DESCRIPTION
This routine is called with three arguments, a tuple, an icon class description and an I-Space identifier. It creates an icon for the tuple from the given icon class description and places it in the given I-Space.
ALGORITHM
1. Read the parameter values into params, the icon parameter data structure.
2. Ask icon manager to reserve space for the free text and update regions.
3. For each plane in the I-Space that has an icon do the following using the picture construction primitives:
1. Create a scratch copy of the icon for this i-plane
2. Perform all coloring.
3. Perform any rotation and scaling.
4. Write any text onto the scratch picture.
5. Place the picture in the appropriate place in the GDS.
4. Log the successful icon creation with the icon id.
DIAGNOSTICS
Returns:
icon id if it successfully created and placed the icon
-1 if the icon class was bad (i.e., could not compile), with an error message in a shared buffer
-2 if the icon could not be placed (i.e., no room in I-Space)
FILES
icd - the file containing the ICD used by this module
/tmp/icon.params - temporary file which holds the results of executing the ICDL.
log file - logs the actions of this module.
GDS Editor Functions
NAME
pick
SYNOPSIS
pick(x1,y1,x2,y2)
int x1,y1,x2,y2; /* universal coordinates */
PROCESS MEMBERSHIP
GDS EDITOR
DESCRIPTION
The rectangle defined by the two points is copied to a temporary buffer.
ALGORITHM
being designed
DIAGNOSTICS
returns -1 if either point is outside the core buffer limits.
NAME
put
SYNOPSIS
put(mode,x1,y1,xExtent,yExtent) - place the contents of the local buffer at the specified location
int mode,x1,y1,xExtent,yExtent; /* x1,y1 are universal coordinates */
DESCRIPTION
put can be invoked in either of two modes. When mode is zero, xExtent and yExtent act as maximum limits on the size of the placed object. When mode is 1, scaling is performed to fit the buffer to the target area.
ALGORITHM
check that target area is in the core buffer
check that local buffer has something in it
get buffer size
if(mode=0)
if(buf size greater than target size)
truncate local buffer size
set return flag
copy temporary buffer to target area (rtn feedout;feedin)
if(mode=1)
use rtn feedin;scale;feedout
calling scale with the ratio (target size) / (local buffer si
DIAGNOSTICS
Returns:
0 if successful.
-1 if target area outside core buffer
-2 if empty local buffer
-3 if truncation occurs while placing
NAME
fill
SYNOPSIS
fill(x,y,color) - fill region pointed to by x,y with color
DESCRIPTION
Fills all points in the closed polygon in which x,y is located.
ALGORITHM
The value of the pixel at x,y is copied to a local variable. On a raster-by-raster basis, first up and then down the image, each line is scanned until a pixel intensity different from the stored one is encountered, and the scanned pixel area is replaced with the new pixel intensity. At each transition point in pixel value, all neighbors of the pixel are scanned to obtain a list of local areas which are candidates for flooding; these are stacked to be processed. After the transition areas have been checked, the stack is popped and flooding continues at the new position.
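For reference, the basic scheme underlying such a fill can be sketched in C as follows. The raster-at-a-time scanning described above is an optimization of this simple stack-driven version; the buffer dimensions and layout are assumptions:
```c
#define W 128
#define H 128
static int pix[H][W];                 /* hypothetical pixel buffer */

/* Fill the connected region containing (x,y) with `color`. */
void flood_fill(int x, int y, int color)
{
    static int stack[4 * W * H + 1][2];   /* candidate stack */
    int sp = 0;
    int old = pix[y][x];              /* intensity to be replaced */
    if (old == color)
        return;                       /* nothing to do */
    stack[sp][0] = x; stack[sp][1] = y; sp++;
    while (sp > 0) {
        int cx, cy;
        sp--;
        cx = stack[sp][0]; cy = stack[sp][1];
        if (cx < 0 || cx >= W || cy < 0 || cy >= H)
            continue;                 /* outside the core buffer */
        if (pix[cy][cx] != old)
            continue;                 /* transition point: stop here */
        pix[cy][cx] = color;
        /* stack the four neighbors as flooding candidates */
        stack[sp][0] = cx + 1; stack[sp][1] = cy;     sp++;
        stack[sp][0] = cx - 1; stack[sp][1] = cy;     sp++;
        stack[sp][0] = cx;     stack[sp][1] = cy + 1; sp++;
        stack[sp][0] = cx;     stack[sp][1] = cy - 1; sp++;
    }
}
```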
DIAGNOSTICS
Returns:
0 if successful.
-1 if x,y is outside of the core buffer
-2 if area leaked to the display screen margin
-3 if halted abnormally by an interrupt
NAME
grid
SYNOPSIS
grid(grid_width,snapping_neighborhood) - display a grid
int grid_width, snapping_neighborhood;
PROCESS MEMBERSHIP
GDS EDITOR
DESCRIPTION
The grid function places a grid of the specified width on the display to aid the user in drawing straight lines. If the snapping neighborhood is non-zero, vertices of lines snap to the grid if they satisfy the neighborhood condition.
ALGORITHM
A grid is placed on the screen. If the snapping neighborhood is non-zero, each point drawn is checked for snapping. The algorithm used is:
/* calculate dx and dy, the distances to the closest grid vertex */
dx = x % grid_len;
dy = y % grid_ht;
if (dx > grid_len/2) dx = grid_len - dx;
if (dy > grid_ht/2) dy = grid_ht - dy;
/* check if dx,dy satisfy the neighborhood condition */
if (dx < x_neighborhood && dy < y_neighborhood)
/* snap to the closest grid vertex */
{ newx = (x / grid_len) * grid_len;
  if ((x % grid_len) > grid_len/2) newx = newx + grid_len;
  newy = (y / grid_ht) * grid_ht;
  if ((y % grid_ht) > grid_ht/2) newy = newy + grid_ht;
}
/* else just return the original x,y */
else
{ newx = x;
  newy = y;
}
return(newx, newy);
Vertices of drawn lines are stored when in snapping mode to allow quick deletion of the original line and rapid redrawing.
DIAGNOSTICS
Returns:
0 if successful
-1 if width is greater than the screen height or length
-2 if snapping neighborhood is greater than the grid width
NAME
reserve_menu - reserve menu area
SYNOPSIS
```c
reserve_menu(virtual_tablet_fp, x, y, x_extent, y_extent, *rtn)
```
- reserves a menu area on the menu screen
DESCRIPTION
Reserve menu area and prepare for future calls defining active areas of the menu. This area is used as a window on initial scanning of the menu. Menu hits signal the process and pass to it a menu_key defining what selection was made. Virtual_tablet_fp is the virtual tablet file pointer obtained by opening a virtual tablet. This pointer is necessary when a process is using more than one virtual tablet. *rtn is a pointer to the routine to be invoked on a menu strike. Reserve_menu() returns a menu_descriptor used in further references.
DIAGNOSTICS
Returns:
menu_descriptor > 0 if successful
-1 if the space was not available
NAME
close_menu
SYNOPSIS
close_menu(menu_descriptor) - deallocates the menu area allocated by the reserve_menu command
PROCESS MEMBERSHIP
DESCRIPTION
Deallocates menu area and tables set by the
reserve_menu command.
DIAGNOSTICS
Returns:
0 if successful
-1 if the menu_descriptor is invalid
NAME
set_menu
SYNOPSIS
set_menu(menu_descriptor, x, y, x_extent, y_extent, color, menu_key)
- sets the internal menu table
PROCESS MEMBERSHIP
DESCRIPTION
Sets the internal menu table to signal the process when a menu hit occurs. The process is passed the menu_key, which specifies which menu entry was picked. If color is less than 0, the color is unchanged.
ALGORITHM
DIAGNOSTICS
Returns:
0 if successful
-1 if an invalid menu descriptor.
-2 if defined space extends outside virtual menu
NAME
icon_menu
SYNOPSIS
icon_menu(menu_descriptor,x,y,x_extent,y_extent,icon_file_name,menu_key)
- sets the internal menu table using the icon file specified.
DESCRIPTION
Sets the internal menu table to signal the process when a menu hit occurs. The process is passed the menu_key which specifies which menu entry was picked. The icon file is read in and used as a menu picture.
DIAGNOSTICS
Returns:
0 if successful
-1 if an invalid menu descriptor.
-2 if defined space extends outside virtual menu.
-3 if the icon file can't be found or is read protected.
NAME
color_menu
SYNOPSIS
color_menu(menu_descriptor,x,y,x_extent,y_extent,color) - color the specified rectangular area of the menu
PROCESS MEMBERSHIP
DESCRIPTION
Colors the menu area specified.
DIAGNOSTICS
Returns:
0 if successful.
-1 if an invalid menu descriptor.
-2 if defined space extends outside virtual menu.
NAME
disable_menu
SYNOPSIS
disable_menu(menu_descriptor) - temporarily disable menu area
DESCRIPTION
Temporarily disables the menu area. This routine and the one following are designed for temporarily deactivating the active menu area, preventing accidents from inappropriate menu commands without visually removing the menu. Useful when initially setting up the menu.
DIAGNOSTICS
Returns:
0 if successful.
-1 if an invalid menu descriptor.
NAME
enable_menu
SYNOPSIS
enable_menu(menu_descriptor) - enable menu area
DESCRIPTION
Enables the menu area referenced by the menu descriptor.
DIAGNOSTICS
Returns:
0 if successful
-1 if an invalid menu descriptor.
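To illustrate how these routines combine, the sketch below sets up a two-entry menu, using disable_menu/enable_menu to guard the setup phase as suggested above. The coordinates, colors, and menu keys are illustrative:
```c
#include <stdio.h>

/* menu routines as documented above */
long reserve_menu();
int  set_menu(), disable_menu(), enable_menu();

void on_hit(int menu_key) { /* dispatch on menu_key */ }

int setup_menu(FILE *tablet_fp)
{
    /* reserve a 200x100 menu area at (0,0) */
    long md = reserve_menu(tablet_fp, 0, 0, 200, 100, on_hit);
    if (md < 0)
        return -1;
    disable_menu(md);  /* guard against accidental hits while building */
    set_menu(md, 0,  0, 200, 50, 1, 10);  /* entry 1: color 1, key 10 */
    set_menu(md, 0, 50, 200, 50, 2, 11);  /* entry 2: color 2, key 11 */
    enable_menu(md);   /* menu is now live */
    return 0;
}
```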
Appendix B
NAME
template icon
SYNOPSIS
TEMPLATE ICON number
DESCRIPTION
Selects the template icon from the template image plane.
NAME
attribute region statement - defines an attribute region.
SYNOPSIS
ATTRIBUTE REGION attribute_name FROM (x1,y1) TO (x2,y2)
DESCRIPTION
Defines an attribute region and specifies its position within the icon. The region will display the value of the specified attribute.
DEFINITIONS
<attribute> ::= ATTRIBUTE REGION <attribute_name>
FROM (<arith_expr>,<arith_expr>)
TO (<arith_expr>,<arith_expr>) ;
<attribute_name> ::= <identifier> ;
EXAMPLE
/* attribute region for a person's phone */
attribute region r.phone from (0,0) to (50,20)
SEE ALSO
general(icdl), plane(icdl), example(icdl)
DIAGNOSTICS
BUGS
NAME
image plane statement - statement to describe one plane of an icon.
SYNOPSIS
IMAGE PLANE plane# plane_stmts END
DESCRIPTION
The image plane statement fully describes one plane of an icon for an icon class description. The plane# states which plane in the I-Space is intended. The plane_stmts further modify this picture. They include text, coloring, drawing pictures, etc.
The icon used for a plane should be one plane deep. If the icon covers more than one plane, only the top-most plane is used.
DEFINITIONS
<plane> ::= IMAGE PLANE <plane#> <icon_id> <plane_stmts> END ;
<plane#> ::= <arith_expr> ;
<plane_stmts> ::= <plane_stmt> ;
<plane_stmts> ::= <plane_stmt> <plane_stmts> ;
EXAMPLE
see example(icdl)
SEE ALSO
general(icdl), icon(icdl), example(icdl)
NAME
position - defines target position for icon.
SYNOPSIS
POSITION (x, y)
DESCRIPTION
Defines the target position for each icon. The target position is the position in the I-Space where SDMS will attempt to place the created icon. This position is in the user's coordinates for the I-Space. If it cannot place it at the target position, it attempts to find a position close by. If the icon cannot fit anywhere, an error occurs.
DEFINITIONS
<position> ::= POSITION (<arith_expr>,<arith_expr>);
EXAMPLE
/* set icon origin to be I-Space origin */
/* (taken from example(icdl)) */
position (0,0)
SEE ALSO
general(icdl), icon class(icdl), example(icdl)
DIAGNOSTICS
BUGS
References
[HEROT et al.]
[JOHNSON]
Abstract
In this paper, we propose a resource-aware solution to achieving reliable and scalable stream diffusion in a probabilistic model, i.e., where communication links and processes are subject to message losses and crashes, respectively. Our solution is resource-aware in the sense that it limits the memory consumption, by strictly scoping the knowledge each process has about the system, and the bandwidth available to each process, by assigning a fixed quota of messages to each process. We describe our approach as gambling in the sense that it consists in accepting to give up on a few processes sometimes, in the hope to better serve all processes most of the time. That is, our solution deliberately takes the risk not to reach some processes in some executions, in order to reach every process in most executions. The underlying stream diffusion algorithm is based on a tree-construction technique that dynamically distributes the load of forwarding stream packets among processes, based on their respective available bandwidths. Simulations show that this approach pays off when compared to traditional gossiping, when the latter faces identical bandwidth constraints.
1 Introduction
Reliable stream diffusion under constrained environment conditions is a fundamental problem in large-scale multimedia content delivery. In this context, the efficiency of a given content delivery solution directly depends on the performance of its underlying multicast protocol. Environment conditions are typically constrained by the reliability and the capacity, usually limited, of its components. Nodes and communication links can fail, unexpectedly ceasing their operation and dropping messages, respectively. Moreover, real-world deployment does not offer nodes and links infinite memory and infinite bandwidth. Therefore, realistic solutions should use local storage and inter-node communication sparingly, and account for node crashes and message losses.
In this paper, we investigate the problem of reliable stream diffusion in unreliable and constrained environments from a novel angle. Our approach is probabilistic: with high probability, all consumers will be reached and deliver all information addressed to them; however, there is no guarantee that this will happen. Unlike previous probabilistic algorithms found in the literature, we resort to a “gambling approach,” which deliberately penalizes a few consumers in rare cases, in order to benefit most consumers in common cases. We show experimentally that the approach pays off in that it outperforms traditional gossip-based algorithms when subject to similar environment constraints.
The key idea of our solution is to stream multimedia content according to a global propagation graph. This graph approximates a global tree aiming at the maximum reachability and efficient use of the available bandwidth. The approach is completely decentralized: nodes build propagation trees, which we call Maximum Probability Trees (MPTs), autonomously. Several MPTs are dynamically composed to achieve a global graph reaching most (hopefully all) consumer nodes. This solution is scalable and based on a composition of local optima, i.e., each MPT ensures the maximum probability of reaching all processes in its subgraph when subject to bandwidth constraints. MPTs are composed in a manner that respects bandwidth constraints, and the MPT construction is fully parameterized. Nodes are free to define the scope of their local knowledge, from direct neighborhood to the entire network. The scope of each process can be defined according to its local constraints (e.g., processing power, memory capacity).
Besides discussing a new reliable stream diffusion algorithm, we also show that it can be implemented in a very modular way, lending itself to real deployment. Our solution consists in decomposing the problem of reliable stream diffusion into sub-problems. This separation of concerns gives rise to an architecture composed of five layers.
The remainder of this paper is organized as follows. In Section 2 we introduce the system model and define the problem that motivates this work. Section 3 describes our reliable streaming solution based on a tree-construction technique. Section 4 describes a performance evaluation of our approach, including an analysis of the costs and benefits of gambling. We discuss related work in Section 5. Finally, in Section 6 we summarize our findings and conclude with some final remarks.
2 Scalable Resource-Aware Streaming
Stream diffusion is a typical 3-step scenario: (1) the producer breaks the outgoing stream into elemental messages (stream packets) and multicasts them to interested consumers, (2) intermediate nodes route these messages to the consumers, and (3) each consumer recomposes the received messages into a coherent incoming stream. This is depicted in Figure 1. In a resource-constrained environment, the main challenge then consists in routing stream messages in a way that efficiently uses available resources.
2.1 Basic system model
We consider an asynchronous distributed system composed of processes (nodes) that communicate by message passing. Our model is probabilistic in the sense that processes can crash and links can lose messages with a certain probability. More formally, we model the system’s topology as a connected graph $G = (\Pi, \Lambda)$, where $\Pi = \{p_1, p_2, ..., p_n\}$ is a set of processes of size $n$, and $\Lambda = \{l_1, l_2, ...\} \subseteq \Pi \times \Pi$ is a set of bidirectional communication links. Process crash probabilities and message loss probabilities are modeled as failure configuration $C = (P_1, P_2, ..., P_n, L_1, L_2, ..., L_{|\Lambda|})$, where $P_i$ is the probability that process $p_i$ crashes during one computation step and $L_j$ is the probability that link $l_j$ loses a message during one communication step.
2.2 Problem statement
Intuitively, the main question addressed in this paper is the following: how can we make stream messages reach all consumers with a high probability, in spite of unreliable processes and links, and of the limited resources (e.g., bandwidth) available to each process?
Formally, the limited resources constraint is modeled as $Q = (q_1, q_2, ..., q_n)$, the set of quotas associated with processes in the system. Each individual quota of messages $q_i$ represents the number of messages process $p_i$ is able to send in order to forward a single stream packet. A quota may represent a set of physical constraints related to the limited hardware resources or a dedicated percentage of these resources fixed by the peer itself. This percentage captures the fact that the user behind a peer can voluntarily limit the resources dedicated to the P2P streaming service. In other words, a quota is a translation of both the percentage of hardware resources a peer is willing to dedicate to forward a stream packet and the upload limit of the ISP of the peer, which might be further limited by the percentage of that bandwidth the peer is willing to dedicate to the streaming service. By extending the basic system model presented earlier, we then can say that the tuple $S = (\Pi, \Lambda, C, Q)$ completely defines the system considered in this paper.
In order to take into account processing and memory constraints, we further assume that each process has only a partial view of the system, meaning that its routing decisions can only be based on incomplete knowledge. Formally, the limited knowledge of process $p_i$ is modeled with distance $d_i$, which defines the maximum number of links in the shortest path separating $p_i$ from any other node in its known subgraph. Distance $d_i$ implicitly defines the partial knowledge of $p_i$ as scope $s_i = (\Pi_i, \Lambda_i, C_i, Q_i)$, with $\Pi_i \subseteq \Pi$, $\Lambda_i \subseteq \Lambda$, $C_i \subseteq C$, and $Q_i \subseteq Q$. In the remainder of this paper, any graph comprised of processes and links should be understood as also including the corresponding configuration and quota information.
Based on the above definitions, we can now restate the problem we address in this paper more succinctly: given its limited scope $s_i$, how should process $p_i$ use its quota $q_i$ in order to contribute to reach all consumers with a high probability?
3 A Gambling Approach
In the absence of any constraints on resources, making stream messages reach all processes with a high probability is quite easy, typically via some generous gossiping (or even flooding) algorithm. In a large-scale resource-constrained system, however, such a solution is not realistic.
3.1 Diffusion trees as starting point
The starting point of our approach can be found in [1], where we proposed an algorithm to efficiently diffuse messages in a probabilistically unreliable environment. Intuitively, the solution consists in building a spanning tree that contains the most reliable paths connecting all processes, using a modified version of Prim's algorithm [32]. The algorithm is also somewhat resource-aware in that it tries to minimize the number of messages necessary to reach all processes with a given probability.
This algorithm, however, does not limit the bandwidth: when asking the algorithm to diffuse a message with a high probability in a very unreliable environment, the number of messages tends to explode. Furthermore, this solution does not limit memory consumption either: in order to achieve optimality, it requires a complete knowledge of the system topology and of the failure probabilities associated to links and processes. Informally, the approach presented hereafter consists in building a diffusion graph that exhibits properties similar to that of [1], while taking into account strict constraints on resources (bandwidth, memory, etc). As presented in Section 2, these constraints are modeled via $q_i$ and $s_i$, respectively the limited quota and the limited scope available at each process $p_i$.
As soon as we face resource constraints, we have to make difficult decisions. In the context of this paper, this observation translates into deciding how high the risk we are willing to take is, in order to increase our chances to reach all consumers. More specifically, the question we ask ourselves is the following: does it pay off to take the risk to sacrifice a few consumer processes in some executions, in order to reach every process in most executions? As we shall see in Section 4, when comparing the performance of our solution to that of a typical gossiping approach, the answer is clearly yes.
Intuitively, our approach consists in having processes make bold decisions, in spite of their limited view of the system (scope), in the hope to better use the available resources (quota). That is, along the paths from the producer to the consumers, a process $p_i$ may decide to build a local propagation tree based on its limited scope $s_i$ in order to maximize the probability to reach everybody in $s_i$.\footnote{The actual criterion that determines whether $p_i$ will make such a decision or not is explained later.} In building its local propagation tree, $p_i$ also decides how processes in $s_i$ should use their quotas. Since these decisions can be made concurrently, process $p_i$ has no guarantee that processes in $s_i$ will actually follow its decisions. As we shall see in Section 4, this approach can lead to some (fairly rare) executions in which some processes are never reached. Experiments show however that the benefits of taking such a risk pay off in most executions.
3.2 Solution overview
Our solution is based on the five-layer architecture pictured in Figure 2. The top layer represents a standard stream fragmentation layer. It executes the Scalable Streaming Algorithm (SSA), is responsible for breaking the outgoing stream into a sequence of messages on the producer side, and for assembling these messages back into an incoming stream on the consumer side. Roughly speaking, this layer corresponds to the Transport layer in the OSI model [2]. The SSA layer then relies on the Packet Routing Algorithm (PRA), which is responsible for routing stream messages through a propagation graph covering the whole system; this layer corresponds to the Network layer in the OSI model. This propagation graph results from the spontaneous aggregation of various propagation trees concurrently computed by some intermediate routing processes defined as responsible for this task. As suggested by Figure 2, producers and consumers execute both the SSA and PRA layers, while pure routing processes only execute the PRA layer. The responsibility for building propagation trees is delegated to the Propagation Tree Algorithm (PTA), which in turn relies on the partial view delivered by the Environment Modeling Layer (EML). The latter relies on Bayesian inference to approximate the environment within distance $d_i$ of each process $p_i$. Explaining how the environment modeling actually works falls beyond the scope of this paper and can be found in [1]. Finally, the Unreliable Link Layer (ULL) allows each process $p_i$ to send messages to its direct neighbors in a probabilistically unreliable manner. This layer corresponds to the Data Link layer of the OSI model.
Figure 2: A layered architecture
3.3 Scalable Streaming Algorithm (SSA)
The scalable streaming solution, presented in Algorithm 1, is fairly straightforward. On the producer side, as long as some data is available from the outgoing stream (line 6), the algorithm reads that data, builds up a message containing it and multicasts the message using the `multicast()` primitive of the PRA layer (lines 7 to 10). On the consumer side, upon receiving a message from PRA (line 11), the algorithm writes the data contained in that message to the incoming stream, provided that the message is not out of sequence (lines 12 to 14). Because of the probabilistic nature of our environment, messages can indeed be received out of sequence, in which case they are simply dropped. This is the standard way to handle out-of-sequence packets when streaming real-time data, such as audio or video streams. Note that this strategy can be easily improved by a simple local buffering mechanism in order to deal with jitter and out-of-order messages.
**Algorithm 1** Scalable Streaming Algorithm at $p_i$
```plaintext
1: uses: PRA
2: initialization:
3: \( \text{nextSeq} \leftarrow 1 \)
4: \( \text{lastSeq} \leftarrow 0 \)
5: To multicast some \( \text{outgoingStream} \) to a set of \( \text{consumers} \):
6: while not \( \text{outgoingStream.eof()} \) do
7: \( \text{m.data} \leftarrow \text{outgoingStream.read()} \)
8: \( \text{m.seq} \leftarrow \text{nextSeq} \)
9: \( \text{nextSeq} \leftarrow \text{nextSeq} + 1 \)
10: \( \text{PRA.multicast(m, consumers)} \)
11: upon \( \text{PRA.deliver(m)} \) do
12: if \( m.seq > \text{lastSeq} \) then
13: \( \text{incomingStream.write(m.data)} \)
14: \( \text{lastSeq} \leftarrow m.seq \)
```
3.4 Packet Routing Algorithm (PRA)
The packet routing solution, presented in Algorithm 2, consists in disseminating stream messages through a propagation graph generated in a fully decentralized manner. This propagation graph actually results from the spontaneous aggregation of several propagation trees. Each propagation tree is in turn the result of an incremental building process carried out along the paths from the producer to the consumers. It is important to note however that the aggregated propagation graph itself might well not be a tree.
**On the producer.** The routing process starts with producer $p_i$ calling the `multicast()` primitive (line 4). As a first step, $p_i$ asks the PTA layer to build a first propagation tree $pt$, using the `incrementPT()` primitive (line 5). This primitive is responsible for incrementing the propagation tree passed as argument, using the scope of the process executing it (here $p_i$). Since $p_i$ is the producer, the initial propagation tree passed as argument is simply composed of $p_i$ and its associated information (failure probability $P_i$ and quota $q_i$). As discussed in Section 3.5, the returned propagation tree $pt$ maximizes the probability to reach everybody in scope $s_i$, based on available quotas. Process $p_i$ then calls the `optimize()` primitive, passing it $pt$ (line 6). This primitive is discussed in detail in Section 3.7. At this point, all we need to know is that it
**Algorithm 2** Packet Routing Algorithm at $p_i$
1: uses: PTA, ULL, EML
2: initialization:
3: $r \leftarrow \ldots$
4: procedure multicast($m$)
5: $pt \leftarrow$ PTA.incrementPT($\{p_i\}, \emptyset, \{P_i\}, \{q_i\}$)
6: $\vec{m} \leftarrow$ optimize($pt$)
7: propagate($m$, $pt$, $p_i$, $\vec{m}$)
8: upon ULL.receive($m$, $p_k$, $pt$, $\vec{m}$) do
9: if EML.distance($p_k$, $p_i$) $\geq r$ then
10: $pt \leftarrow$ PTA.incrementPT($pt$)
11: $\vec{m} \leftarrow$ optimize($pt$)
12: propagate($m$, $pt$, $p_i$, $\vec{m}$)
13: else
14: propagate($m$, $pt$, $p_k$, $\vec{m}$)
15: if $p_i$ is interested in $m$ then
16: SSA.deliver($m$)
17: procedure propagate($m$, $pt$, $p_k$, $\vec{m}$)
18: for all $p_j$ such that link $(p_i, p_j) \in E(pt)$ do
19: repeat $\vec{m}[j]$ times:
20: ULL.send($m$, $p_k$, $pt$, $\vec{m}$) to $p_j$
returns a propagation vector $\vec{m}$ indicating, for each link in $pt$, the number of messages that should be sent through that link in order to maximize the probability to reach everybody in scope $s_i$. Finally, $p_i$ calls the propagate() primitive (line 7), which simply follows the forwarding instructions computed by optimize(). That is, it sends stream message $m$, together with some additional information, to $p_i$’s children in $pt$. As we shall see below, this additional information is used throughout the routing process to build up the propagation graph.
**On the consumer.** When a consumer $p_i$ receives message $m$, together with the aforementioned information (line 8), it has first to decide whether to increment $pt$ before further propagating $m$ (lines 10 to 12), or to simply follow the propagation tree $pt$ it just received (line 14). The propagation tree $pt$ should be incremented if and only if the distance that separates $p_i$ from $p_k$, the process that last incremented $pt$, is equal to the increment rate $r$ (with $r \leq d_k$). In such a case, $p_i$ is said to be an *incrementing node*.
Intuitively, $r$ defines how often a propagation tree should be incremented as it travels through the propagation graph. The latter then spontaneously results from the concurrent and uncoordinated increments of propagation trees finding their ways to the consumers. Finally, process $p_i$ delivers message $m$ to the SSA layer only if it is interested in it (lines 15 and 16). If this is not the case, process $p_i$ is merely a router node.
3.5 Propagation Tree Algorithm (PTA)
The solution to increment propagation trees is encapsulated in the incrementPT() primitive, presented in Algorithm 3. This primitive takes a propagation tree $pt$ as argument and increments it if needed, i.e., if something changed in the environment of $p_i$ or if $pt$ is different from the propagation tree that was last incremented (line 8). The conditional nature of this increment is motivated by performance and resource concerns: during stable periods of the system, propagation trees remain unchanged, cutting down the processing load of incrementing nodes. To get an up-to-date view of its surrounding environment, $p_i$ calls the getScope() primitive provided by EML (line 7).
To build local tree $lpt_i$, process $p_i$ first builds a Maximum Probability Tree (MPT), using the mpt() primitive (line 11). Details about the notion of maximum probability tree, and primitive mpt(), are provided in Section 3.7. Briefly, the MPT maximizes the probability to reach every process within a given scope, by taking into account not only the intrinsic reliability of processes and links in scope $s_i$, but also the individual quotas available to processes in $s_i$. Note that primitive mpt() increments $pt$ as a whole (see discussion below), whereas Algorithm 3 is in fact only interested in the subtree rooted at $p_i$ (line 12). This subtree is precisely the local tree $lpt_i$.
**Algorithm 3** Propagation Tree Algorithm at $p_i$
1: uses: EML
2: initialization:
3: $lpt_i \leftarrow \emptyset$
4: $pt_i \leftarrow \emptyset$
5: $s_i \leftarrow \emptyset$
6: function incrementPT($pt$)
7: $s \leftarrow$ EML.getScope()
8: if $pt_i \neq pt \lor s_i \neq s$ then
9: $pt_i \leftarrow pt$
10: $s_i \leftarrow s$
11: $myMpt \leftarrow mpt(s_i, pt_i)$
12: $lpt_i \leftarrow$ subtree of $myMpt$ with $p_i$ as root
13: return $pt \cup lpt_i$
3.6 The gambling effect
Intuitively, the approach taken by the mpt() primitive consists in augmenting $pt$ with the best branches in scope $s_i$, even if some of these branches are not downstream from $p_i$. These latter branches are said to be *concurrent branches*. This approach somehow consists in taking the risk to exclude some consumers from the propagation graph by accident. Process $p_i$ has indeed no way to inform processes located along concurrent branches about its incremental decisions, and has no guarantee that incremental decisions will be taken coherently with respect to each other. In order to partially mitigate this risk, Algorithm 3 merges the local tree with the original propagation tree passed as argument (line 13), rather than directly returning the maximum probability tree.
**Execution example.** Figure 3 illustrates the incrementing of the propagation tree on a simple example. In this scenario, the distance defining the scope and the increment rate $r$ are the same for all processes and equal to 2. Process $p_1$, the producer, builds a first *propagation tree* $pt_1$ covering its scope $s_1$; this tree is pictured in Figure 3 (a) using bold links. All nodes in $pt_1$ that are at a distance $r = 2$ from $p_1$ are *incrementing nodes*, which means they have to increment $pt_1$ when they receive it. Process $p_3$ being such a node, it calls the mpt() function, passing it $pt_1$ and its scope $s_3$. This function adds the dashed links pictured in Figure 3 (a)
Figure 3: Propagation tree increment
to $pt_1$ and returns the resulting Maximum Probability Tree (MPT); this MPT contains the local propagation tree rooted at $p_3$, i.e., $lpt_3$. The latter is then extracted from the MPT, merged with the initial propagation tree $pt_1$ and returned. Figure 3 (b) pictures the new propagation tree resulting from the above increment process.
3.7 Maximum Probability Tree (MPT)
The concept of Maximum Probability Tree (MPT) is at the heart of our approach, as it materializes the risk taken during the construction of the propagation graph. Intuitively, an MPT maximizes the probability to reach all processes within a given scope by optimally using the quotas of these processes. Before describing how the mpt() function given in Algorithm 4 builds up an MPT, we first recall the notions of reachability probability and reachability function.
**Reachability probability.** The reachability function, denoted $R()$, computes the probability to reach all processes in some propagation tree $T$ with configuration $C(T)$, given a vector $\vec{m}$ defining the number of messages that should transit through each link of $T$. We then define the probability returned by $R()$ as $T$'s reachability probability. Equation 1 below proposes a simplified version of the reachability function borrowed from [1]; this version assumes that only links can fail by losing messages with a given probability, whereas processes are assumed to be reliable.$^2$
\[
R(T, \vec{m}) = \prod_{j=1}^{|\vec{m}|} \left( 1 - L_j^{\vec{m}[j]} \right) \quad \text{where } L_j \in C(T) \tag{1}
\]
Using $R()$, we then define the maxR() function presented in Algorithm 4 (lines 8 to 10), which returns the maximum reachability probability for $T$. To achieve this, maxR() first calls the optimize() function in order to obtain a vector $\vec{m}$ that optimally uses the quotas available to processes in $T$. It then passes this vector, together with $T$, to $R()$ and returns the corresponding reachability probability.
The optimize() function iterates through each process $p_s$ in $T$ and divides individual quota $q_s$ in a way that maximizes the probability to reach direct children of $p_s$ (lines 14 to 20). For
$^2$Note that this simplification causes no loss of generality; see [1] for details.
this, function optimize() allots messages of $q_s$ one by one, until all messages have been allocated (line 18 to 20). That is, in each iteration step it chooses the outgoing link $l_u$ from $p_s$ that maximizes the gain in probability to reach all $p_s$’s children in $T$, when sending one more message through $l_u$ (line 19). When all individual quotas have been allocated, optimize() returns a vector $\vec{m}$ that provides the maximum reachability probability when associated with $T$.
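As an illustration, the reachability computation and the greedy quota split can be sketched in C as follows. This is a minimal sketch for a single process's outgoing links; it uses the marginal per-link gain $L_j^{\vec{m}[j]}(1 - L_j)$ as the greedy criterion, a local simplification of the gain over all children described above. All names and values are illustrative:
```c
#include <math.h>
#include <stdio.h>

#define NLINKS 3

/* R(T, m) = product over links j of (1 - L_j^{m[j]}), Equation 1 */
double reachability(const double *loss, const int *msgs, int nlinks) {
    double r = 1.0;
    for (int j = 0; j < nlinks; j++)
        r *= 1.0 - pow(loss[j], msgs[j]);
    return r;
}

/* Greedy allocation of one process's quota q among its outgoing
 * links: each step gives one more message to the link whose extra
 * copy yields the largest marginal gain in reachability. */
void allocate_quota(const double *loss, int *msgs, int nlinks, int q) {
    for (int j = 0; j < nlinks; j++) msgs[j] = 0;
    while (q-- > 0) {
        int best = 0;
        double best_gain = -1.0;
        for (int j = 0; j < nlinks; j++) {
            /* gain of one more message on link j: L^m * (1 - L) */
            double gain = pow(loss[j], msgs[j]) * (1.0 - loss[j]);
            if (gain > best_gain) { best_gain = gain; best = j; }
        }
        msgs[best]++;
    }
}

int main(void) {
    double loss[NLINKS] = { 0.2, 0.4, 0.5 };  /* per-link loss probs */
    int msgs[NLINKS];
    allocate_quota(loss, msgs, NLINKS, 5);    /* quota of 5 messages */
    printf("allocation: %d %d %d, R = %f\n",
           msgs[0], msgs[1], msgs[2],
           reachability(loss, msgs, NLINKS));
    return 0;
}
```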
**MPT building process.** We now have all the elements needed to present the MPT building process carried out by mpt(), given a scope $S$ and an initial propagation tree $T$. This function simply iterates until all processes in $S$ but not in $T$ have been linked to $T$, i.e., it only stops when $T$ covers the whole scope $S$ (lines 2 to 6). In each iteration step, the mpt() function adds the link that produces a new tree exhibiting the maximum reachability probability (line 5), as sketched below.
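The following is a minimal sketch of this outer loop; max_reachability stands for \( maxR() \), i.e., \( R(T, optimize(T)) \), and is assumed to be given. We also assume the known subgraph connects the whole scope.

```python
def build_mpt(root, scope, candidate_links, max_reachability):
    """Greedy MPT construction, a sketch of mpt() (Algorithm 4, lines 2 to 6).

    root:             the process the tree grows from (must belong to scope)
    scope:            set of processes the tree has to cover
    candidate_links:  (u, v) pairs of the known subgraph, assumed to
                      connect the whole scope
    max_reachability: tree -> maxR(T) = R(T, optimize(T)); assumed given
    """
    tree, covered = [], {root}
    while covered < scope:  # strict subset: some process is still unreached
        frontier = [(u, v) for (u, v) in candidate_links
                    if u in covered and v in scope and v not in covered]
        # Keep the extension whose tree has the highest reachability probability.
        best = max(frontier, key=lambda link: max_reachability(tree + [link]))
        tree.append(best)
        covered.add(best[1])
    return tree
```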
**Execution example.** Figures 4 to 6 illustrate the MPT building process on a simple example. In this example, the initial tree $T$ is composed of only process $p_1$ and $S$ is the scope of $p_1$, i.e., $S = s_1$. During the first iteration step, the algorithm simply chooses the most reliable link, i.e., link $l_{1,2}$ with failure probability $L_{1,2} = 0.2$. At this point, it means that the entirety of $p_1$’s quota has been allocated to reach $p_2$. In this example, the quota is identical for all processes and equal to 3, i.e., $\forall p_i : q_i = 3$.
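Plugging these numbers into Equation 1 confirms the choice: after this first step, all of \( q_1 = 3 \) messages travel over \( l_{1,2} \), so

\[
R(T, \vec{m}) = 1 - L_{1,2}^{\,3} = 1 - 0.2^{3} = 0.992.
\]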
At the beginning of the second step, the algorithm faces two alternatives: either adding link $l_{1,3}$ and splitting the quota of $p_1$ between links $l_{1,2}$ and $l_{1,3}$, or adding link $l_{2,4}$ and using the entirety of $q_2$, the quota of $p_2$, to reach $p_4$. These two alternatives are pictured in Figure 5 as trees $T'$ and $T''$ respectively.
Based on the result of function maxR(), the algorithm chooses to keep $T''$, since it is the tree that offers the maximum probability to reach everybody. Note however that this decision implies
adding link $l_{2,4}$ rather than link $l_{1,3}$, although the latter is more reliable. Figure 6 pictures the final Maximum Probability Tree returned by function $mpt()$.
4 Performance evaluation
The performance of our scalable algorithm was evaluated through a simulation model. For simplicity, we only considered link failures, while assuming that processes are reliable, i.e., $\forall p_i : P_i = 0$. As mentioned in Section 3.7, this does not compromise the generality of our approach. We performed experiments with processes organized in various topologies: we started from a ring where each process had two neighbors and then incrementally augmented the number of neighbors until reaching a connectivity of 20 neighbors per process. This provided a spectrum of possibilities for the evaluations, starting with a worst-case topology with respect to process distances (i.e., the ring), and gradually reducing the mean distance between processes in the system by adding more links. Unless mentioned otherwise, we assumed topologies with 100 processes.
To facilitate the evaluation, we set the scope to be the same for all processes during the execution, i.e., $\forall p_i : d_i = d$. To avoid regular network configurations, we then defined 20% of
processes to be hubs. A hub has twice the quota of a normal process and is connected to its neighbors through highly reliable links, i.e., we set the message loss probability of these links to $10^{-4}$. Our performance evaluation consists of measuring the success rate over 1000 distinct executions. We consider an execution to be a success when the multicast stream packet reaches all nodes in the system, i.e., the success rate is precisely what the notion of reachability probability tries to capture.
4.1 Benefits of gambling
Multicast protocols fall into two categories: those based on structured information dissemination, such as our Scalable Streaming Algorithm (SSA), and those based on unstructured information dissemination, typically gossip-based protocols. To measure the benefit of our gambling approach, we compare SSA with a typical Gossip-Based Algorithm (GBA), modified to implement the notion of individual quota: to propagate an incoming message $m$, the algorithm repeats the following two steps until its quota is exhausted: (1) randomly choose a neighbor among those that did not yet acknowledge $m$, and (2) send $m$ to that neighbor. For the comparison, we set the quota to 5 and the failure probability of each link\(^3\) to a random value within $[0.05, 0.55]$. As for the parameters specific to SSA, we set the scope to 5 and the increment rate to 2.
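A minimal sketch of this quota-limited gossip step, under our reading of the two-step description above (the interface and names are assumptions, not the authors' implementation):

```python
import random

def gossip_forward(msg, neighbors, acked, quota, send):
    """One quota-limited gossip step of the GBA baseline.

    neighbors: ids of this process's neighbors
    acked:     set of neighbors that already acknowledged msg
    quota:     per-process message budget q_i
    send:      callable send(neighbor, msg), assumed provided by the runtime
    """
    pending = [n for n in neighbors if n not in acked]
    while quota > 0 and pending:
        target = random.choice(pending)  # (1) random pick among non-ackers
        send(target, msg)                # (2) spend one message of the quota
        quota -= 1
```

Note that the same neighbor may be drawn more than once: the baseline spends its quota blindly, without weighing link reliabilities, which is precisely the behavior SSA improves upon.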
Figure 7 shows the evolution of the success rate of SSA and GBA respectively, when varying the network connectivity. As we can see, the success rate of GBA decreases as the connectivity increases. This is due to the fact that each process randomly uses its quota of messages, without taking into account the reliability of links. Indeed, as the connectivity increases, it becomes more and more important to maximize the impact of each message on the overall reachability probability.
For SSA, on the contrary, the success rate tends to increase with the network connectivity, because SSA has a larger choice of links when computing local Maximum Probability Trees (MPTs), and thus more chances to build a global propagation graph with a favorable reachability probability. Furthermore, even if some processes have a number of neighbors that exceeds their quota, our approach still tries to maximize the overall reachability probability by adapting the number of children of each process to its quota. As shown in Figure 7, this has a significant impact on the actual success rate. For a connectivity of 20, for example, which is 4 times higher than the quota used in our experiments, the success rate is close to 100%. In this figure, however, we can also see a drop of the success rate for connectivities between 10 and 16. As discussed hereafter, this drop constitutes the cost of gambling.

Figure 7: SSA vs. GBA with quotas – Success rate

\(^3\)To be more precise: each link that is not attached to a hub.
4.2 Cost of gambling
To evaluate the cost of our gambling approach, we introduce the notion of a missed execution of our algorithm. Such an execution, also simply called a miss, is one where at least one node in the system never received the multicast packet. We can further categorize such misses as either probabilistic misses or gambling misses. Probabilistic misses are caused by unreliable links sometimes losing messages, i.e., they are due to the probabilistic nature of the model we consider. Gambling misses on the other hand happen when the effective propagation graph does not cover the whole system. An effective propagation graph results from the aggregation of effectively followed propagation trees.
In Figure 8, we show how probabilistic misses and gambling misses influence the success rate of our algorithm, i.e., the two curves presented in this figure result from the decomposition of the SSA curve presented in Figure 7. Considering probabilistic misses, we can observe that as the connectivity increases, the probability of reaching all nodes also increases. This is not surprising: as the connectivity increases, the number of links increases, and the algorithm has a larger choice of links when computing MPTs, and thus more chances to obtain an MPT with a favorable reachability probability. For gambling misses, on the contrary, as the connectivity increases, misses due to the structure of the effective propagation graph become more frequent, because the larger choice of links induces a higher risk of making contradictory decisions when building distinct propagation trees. However, when reaching a high connectivity (12 links or more in our example), gambling misses become less frequent, because the scope of each process becomes close to the whole system.\(^4\)
\(^4\)When the scope covers the whole system, the propagation graph corresponds to the MPT built by the producer and covering the whole system. In this case there is no gambling involved.
Gambling cost mitigation. The good news is that many cases of gambling misses are detectable and can be mitigated via a simple countermeasure, which leads to a few nodes exceeding their quotas. As discussed in Section 2.2, the quota of a node is not defined as the node's whole propagation capacity: it can represent either a part of this capacity or the share of resources the peer allocates to the streaming service. As we just saw, a gambling miss occurs when the resulting effective propagation graph does not cover the whole system. Such misses can be caused by two types of conflicting situations, pictured in Figure 9.
A cyclic conflict, illustrated in Figure 9(a), is caused by the inclusion of some node \( c \) into two distinct propagation trees. When \( c \) receives a propagation tree, it uses its quota to propagate the packet in that tree, not knowing that a second tree will reach it. So, when the second propagation tree reaches \( c \), the absence of remaining quota can cause some descendants of \( c \) in the second tree to never be reached. In Figure 9(a), node \( c \) receives two conflicting propagation trees, first one computed by node \( a \) and then one computed by node \( b \). As a consequence, nodes below \( c \) in the tree computed by \( b \) might never be reached. It is easy to see that upon reception of the second tree, \( c \) is able to detect the conflict and to apply the countermeasure described hereafter.
A mutual delegation conflict, illustrated in Figure 9(b), is caused by contradictory decisions about how to include a given node, when incrementing two distinct propagation trees. In Figure 9 (b), node \( a \) decides to delegate the task of reaching node \( x \) to node \( b \), while \( b \) decides to delegate the task of reaching \( x \) to \( a \). As a consequence, node \( x \) will never be reached. Because incrementing nodes do not inform each other about their respective incrementing decisions, the mutual delegation conflict is not detected.
Cyclic conflict countermeasure. As already suggested, we can mitigate cyclic conflicts by occasionally having some nodes exceed their quotas. It is interesting to note, however, that the detection of a cyclic conflict by some node \( c \) does not necessarily imply that some nodes might not be reached. More precisely, there exist two independent cases that require node \( c \) to exceed its quota. The first case is straightforward and occurs when a descendant node of \( c \) in the second propagation tree is not in the first propagation tree. This case is formalized by Condition 2
below. The second case, formalized by Condition 3, is more complex and explained thanks to the example of Figure 10.
Figure 10: Two conflicting propagation trees received by $c$
\[
\exists\, y \in T_2.\text{children}(c) \setminus T_1 \qquad (2)
\]

\[
\exists\, y \in T_2.\text{children}(c) \cap T_1 \;\land\; \exists\, x \in T_2 : c \in T_2.\text{children}(x) \land y \in T_1.\text{children}(x) \qquad (3)
\]
In Figure 10, node $c$ first receives tree $T_1$ and later $T_2$. When receiving $T_2$ from node $x$, $c$ detects that node $y$ might not be reached. Indeed, in both trees $T_1$ and $T_2$, $y$ is a descendant of $c$. However, $x$ is a descendant of $c$ in $T_1$ and an ancestor of $c$ in $T_2$. So, when $c$ receives $T_2$ from $x$, it deduces that $x$ was either not reached in $T_1$, or reached but decided to re-transmit through $T_2$. In both cases, $c$ should retransmit, hence exceed its quota, in order to reach $y$ or the node for which $x$ decided to retransmit.
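Both conditions translate into a simple local check that \( c \) can run upon receiving the second tree. The sketch below assumes a tree object exposing its node set and the \( T.\text{children}() \) relation used in the conditions; this interface is ours.

```python
def must_exceed_quota(c, t1, t2):
    """Decide whether node c, having spent its quota on tree t1, must
    exceed it upon receiving tree t2 (Conditions 2 and 3).

    t.nodes is the node set of tree t and t.children(x) returns the set
    written T.children(x) in the text; both are assumed to exist.
    """
    # Condition 2: some child of c in t2 is not covered by t1 at all.
    if t2.children(c) - t1.nodes:
        return True
    # Condition 3: some y is below c in both trees, while the node x that
    # reaches c in t2 was itself in charge of reaching y in t1.
    for y in t2.children(c) & t1.nodes:
        for x in t2.nodes:
            if c in t2.children(x) and y in t1.children(x):
                return True
    return False
```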
**Countermeasure evaluation.** When evaluating the effectiveness of our solution to mitigate gambling costs, we compared the final success rate of experiments implementing the proposed countermeasure with experiments that do not. In doing so, we varied the network connectivity $c$, while fixing the incrementing rate $r$ to 2, the scope $d$ defining the known subgraph of each process to 5, the range of loss probability $L_i$ to $[0.05, 0.55]$, and the quota of messages $q_i$ to 5.
Figure 11 (a) shows the success rate of executions implementing our countermeasure by varying the network connectivity, while Figure 11 (b) shows the corresponding average number of exceeded quotas based on 1000 distinct executions. When comparing curves of Figure 11 (a) and Figure 8 that shows the success rate considering the gambling misses,\(^5\) we can see that our countermeasure significantly improves the final result. Furthermore, as shown in Figure 11 (b), the average number of exceeded quotas is negligible, i.e., less than one for 1000 distinct executions.
\(^5\)Executions corresponding to these two curves have the same parameters.
4.3 Benefits of combined adaptiveness
This section discusses the advantage of combining both resource and unreliability awareness when building the propagation tree, that is, it shows the benefit of our MPT construction technique. We compare our MPT to two relevant solutions. The first solution is inspired by the tree defined in Overcast [12]. Overcast is targeted at bandwidth-intensive applications. It defines a tree overlay that aims to maximize the bandwidth by placing nodes as far as possible from the root (the source) without sacrificing bandwidth. The available bandwidth resource in Overcast is modeled as weights assigned to links. In order to adapt the Overcast tree construction technique to our model, we define the link weight as the number of messages assigned to the link, calculated by dividing the node's quota by the number of its outgoing links in the tree. Thus, when building the Overcast tree in our model, at each iteration we add the link through which we can assign the maximum number of messages.
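Under this reading, one greedy step of the adapted Overcast construction reduces to the following sketch; the representation and names are our assumptions.

```python
def overcast_step(out_degree, quotas, frontier):
    """One greedy step of the Overcast-inspired baseline under quotas:
    pick the frontier link (u, v) through which the largest number of
    messages can be assigned, i.e. u's quota split over u's outgoing
    links once (u, v) is added.

    out_degree: dict node -> current out-degree in the tree
    quotas:     dict node -> quota q_i
    frontier:   candidate (u, v) links from tree nodes to new nodes
    """
    return max(frontier,
               key=lambda uv: quotas[uv[0]] / (out_degree.get(uv[0], 0) + 1))
```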
The second protocol is part of our previous work [1], which defines a reliable broadcast taking node failure probabilities ($P_i$) and link message loss probabilities ($L_i$) into account. This broadcast solution is also based on a tree overlay, named the Maximum Reliability Tree (MRT). This tree is the most reliable tree of a known subgraph through which a message will be propagated. To avoid compromising this protocol, we assume for this comparison that each node is able to send at least one message to each of its children in a tree. Thus, as shown in Figure 12 (a), the quota of each node equals its number of direct neighbors, i.e., $\forall p_i : q_i = c$, the network connectivity.
When it comes to the limited knowledge each node has about the system, we assume that, in both our strategy and the compared protocols, nodes have only a partial view. Based on this knowledge, we use our gambling increment strategy to build a propagation graph covering the whole system under the different tree-building criteria. Then, we apply our countermeasure to mitigate gambling misses and focus our comparison on probabilistic misses. For a fair comparison, we also apply our optimize() function to both the Overcast tree and the MRT. Thus, once the comparison trees are built, all node quotas are distributed in a way that maximizes the benefit of this resource.
Figure 11: SSA with countermeasure, $L_i \in [0.05, 0.55]$ and $q_i = 5$ messages.
In this comparison, we vary the network connectivity $c$, while fixing the incrementing rate $r$ to 2, the scope $d$ defining the known subgraph of each process to 5, and the range of the loss probability $L_i$ to $[0.05, 0.55]$. As shown in Figure 12, the success rate of our approach is higher when using the MPT than when using the Overcast tree or the MRT. When the quota is equal to the network connectivity $c$ (Figure 12 (a)), the success rate of our approach and of the compared protocols increases as $c$, and thus $q_i$, increases. This reflects the capacity of available resources to hide the unreliability of the environment. When the quota $q_i$ is fixed to 5 messages (Figure 12 (b)), our approach provides higher reliability when using the MPT than when using the Overcast tree. In addition, when varying the network connectivity, our approach behaves differently with the MPT than with the Overcast tree. Indeed, as the connectivity increases, more links are created in the system, offering a larger choice of links to the MPT construction technique. While the MPT takes advantage of this choice to include more reliable links, the Overcast tree moves away from the line structure, which imposes more leaves and thus more lost quotas; these quotas would otherwise contribute to hiding the unreliability of the environment.
4.4 Scalability
In order to evaluate the scalability of our algorithm, we performed several experiments with our simulation model, drastically augmenting the number of processes in the system. In doing so, we considered all links to have the same loss probability $L \simeq 0.05$, and we fixed the scope $d$ to 50, the incrementing rate $r$ to 40, and the network connectivity $c$ to 4. We also considered all individual quotas $q_i$ to be the same and equal to the network connectivity, i.e., $\forall p_i : q_i = 4$. Our scalability evaluation is pictured in Figure 13 (a) and Figure 14 (a), which show the rate of executions that succeeded in reaching all nodes (100% of nodes), 99% of nodes, and 98% of nodes. In Figure 13 (a) the number of nodes in the system is varied in a linear way, while in Figure 14 (a) it is varied in an exponential way. Figure 13 (b) and Figure 14 (b) then show the corresponding countermeasure price, in terms of the average number of exceeded quotas needed to handle detectable gambling misses. Based on these figures, we can conclude that our strategy provides a scalable streaming solution, with a graceful linear decrease of the success rate as the number of processes in the system increases. We also notice that our solution requires a very small number of exceeded quotas to
correct cyclic conflicts.
In each execution we measured (1) the system diameter, computed as the number of links in the shortest path separating the most distant nodes; (2) the average tree depth, i.e., the average distance, in number of links, separating the source node from the leaves of the resulting propagation graph; and (3) the tree depth of the propagation graph (i.e., the maximum distance between the source node and the leaves). These measurements are shown in Figure 13 (c) and Figure 14 (c). Notice that the average tree depth is lower than the system diameter. This shows that while our tree construction technique aims at using the maximum of available resources, the resulting propagation graph is not a line, although a line is the topology that maximizes the use of quotas. Indeed, when enough quota is available at some nodes (e.g., at hubs), our MPT construction algorithm assigns more than one child to those nodes, making the global tree shorter.
Figure 14: Scalability of SSA with exponential growth of nodes, $L \simeq 0.05$ and $q_i = 4$ messages

5 Related Work

Several peer-to-peer streaming solutions have been proposed recently. Mainly, we can classify them into two classes: structured [4, 5, 7, 8, 12, 17, 18, 26] and unstructured [14, 19, 20, 21, 23, 24, 29, 31]. The unstructured approach usually relies on a gossiping protocol, which consists in having each peer forward the data it receives to a set of randomly chosen neighbors. As a consequence, the path followed by the disseminated data is not deterministic. By contrast, the structured approach consists in first organizing the network peers into some overlay network and in routing disseminated data through this virtual topology.
These two approaches focus on different goals. Initially, the structured strategy was devised to adapt to the underlying network characteristics, whereas the unstructured strategy, known as network-agnostic, was devised for scalability. To ensure the same reliability, a structured dissemination uses fewer messages than an unstructured one. It however assumes that nodes have some knowledge about the network, and it imposes a computation overhead, which hinders the scalability of these approaches.
Recently, several research efforts have worked to reduce the gap between the structured and unstructured approaches. On the unstructured side, several approaches propose a more deterministic forwarding decision, in order to adapt to environment constraints or to avoid wasting resources by sending duplicated messages. Along this line, [14, 15, 16] propose gossip-based strategies to ensure either an optimal reliability or an optimal delay, by tuning the forwarding decision based on information about the packets received by neighbors. That is, in order to reach some delay or rate target, each node tries to answer the following question: which stream packets should be forwarded to which neighbor? Similar to our approach, [15] addresses the capacity limitations of network links; however, it does not consider the unreliability of these components.
On the structured side, several approaches propose overlay construction mechanisms that approach the scalability of unstructured strategies. Our solution is part of this category. Along this line, several tree-based solutions have been proposed in the literature [7, 8, 12, 30]. Some of them define a multicast tree that aims at optimizing bandwidth use [6, 7, 12]. Others also deal with scalability by limiting the knowledge each process has about the system [8, 30]. Yet other systems aim at increasing robustness with respect to packet loss [10, 11, 13]. Our approach differs from these systems in that it targets the three goals simultaneously. Our propagation structure is built collaboratively by distributed processes using their respective partial views of the system. Reliability is accounted for by each process when building its local tree. Finally, bandwidth constraints are considered when defining how to forward packets along the propagation graph.
Narada [7] builds an adaptive mesh that includes group members with low degrees and with short path delays between any pair of members. A standard routing protocol is then run on the overlay mesh. This work differs from ours in considering latency as the main cost associated with links. While it uses probing to change links in order to optimize the mesh, Narada does not take into account the loss probability of added or removed links. Furthermore, Narada nodes maintain global knowledge about all group participants. In comparison, we take process and link failure probabilities into account and maintain local information only.
Regarding the distribution of the forwarding load, the work most closely related to ours is Overcast [12], which leads to deep distribution trees. Such a tree would be our MPT in reliable environments, that is, if links did not lose messages.
Reducing the number of gossip messages exchanged between processes by taking the network topology into account is discussed in [27] and [28]. Processes communicate according to a predetermined graph with minimal connectivity to attain a desired level of reliability. Similarly to our approach, the idea is to define a directed spanning tree on the processes. Differently from ours, process and link reliabilities are not taken into account to build such trees.
Our strategy shares some design goals with broadcast protocols such as [1]. Both rely on the definition of a criterion for selecting the multicasting graph. In our strategy, however, we strive to both decrease packet loss and balance the forwarding load. The notion of reachability probability of a tree is presented in [1] to define the Maximum Reliability Tree (MRT). In our work, we define the reachability probability of the stream differently, by considering local knowledge only. These approaches illustrate a tradeoff in stream diffusion algorithms: while the protocol in [1] can lead to the optimum propagation tree, it requires global topology knowledge; our current algorithm relies on local knowledge but may not result in the optimal propagation tree.
When it comes to dealing with loops, which naturally appear in decentralized tree-based streaming solutions, several streaming solutions propose tree computation techniques that consist in dividing multicast members into groups. In such approaches, each group has a leader who is responsible for organizing group members in a subtree, while leaders are in turn organized in a tree [3, 5]. While this strategy prevents loops in the resulting overlay, it penalizes efficiency, since all optimizations are done locally to each group, i.e., nodes in different groups are unable to form overlay links.
Another set of tree-based solutions avoids loop problems by taking advantage of logical addressing techniques, traditionally dedicated to routing solutions, in order to build a tree overlay. An example of such solutions is SplitStream [8], which builds several trees based on Scribe [4] and Pastry [9]. This approach ensures scalability since no computation is needed to define the tree: the routing is done implicitly by following the logical addresses assigned to members. The drawback of this approach is the absence of a match between the overlay and the underlying physical network, i.e., no efficiency guarantee can be provided.
Our approach to detecting loops, when building efficient tree overlays, differs from previous ones in that it ensures a resulting global tree close to the one that would be built in a centralized manner, i.e., the tree we would obtain with global knowledge about the system. In [25], we presented an overview of our solution focusing on the tree-building technique, while providing no detail on our loop detection mechanism nor on its handling.
6 Conclusion
This paper introduces a probabilistic algorithm for reliable stream diffusion in unreliable and constrained environments. Differently from more traditional approaches, we resort to a “gambling approach,” which deliberately penalizes a few consumers in rare cases, in order to benefit most consumers in common cases. Experimental evaluation has shown that our protocol outperforms gossip-based algorithms when subject to similar environment constraints. We believe that this may open up new directions for future work on large-scale data dissemination protocols.
References
Applications of natural language processing in software traceability: A systematic mapping study
Zaki Pauzi †, Andrea Capiluppi
Bernoulli Institute, University of Groningen, Nijenborgh 9, Groningen, 9747 AG, The Netherlands
ARTICLE INFO
Article history:
Received 20 July 2022
Received in revised form 6 December 2022
Accepted 11 January 2023
Available online 16 January 2023
Keywords:
Software traceability
Information retrieval
Natural language processing
ABSTRACT
A key part of software evolution and maintenance is the continuous integration from collaborative efforts, often resulting in complex traceability challenges between software artifacts: features and modules remain scattered in the source code, and traceability links become harder to recover. In this paper, we perform a systematic mapping study dealing with recent research recovering these links through information retrieval, with a particular focus on natural language processing (NLP).
Our search strategy gathered a total of 96 papers in focus of our study, covering a period from 2013 to 2021. We conducted trend analysis on NLP techniques and tools involved, and traceability efforts (applying NLP) across the software development life cycle (SDLC). Based on our study, we have identified the following key issues, barriers, and setbacks: syntax convention, configuration, translation, explainability, properties representation, tacit knowledge dependency, scalability, and data availability.
Based on these, we consolidated the following open challenges: representation similarity across artifacts, the effectiveness of NLP for traceability, and achieving scalable, adaptive, and explainable models. To address these challenges, we recommend a holistic framework for NLP solutions to achieve effective traceability and efforts in achieving interoperability and explainability in NLP models for traceability.
© 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
1. Introduction
Software traceability is a fundamentally important task in software engineering: for some domains, traceability is even assessed by certifying bodies (Guo et al., 2017a). Given that traceability permeates all aspects of software production, the need for automated traceability has increased too, considering that software projects have steadily become more complex, with an ever-increasing number of artifacts (Cleland-Huang et al., 2007; Duan et al., 2009; Guo et al., 2017b).
The underlying complexities of the logical relations between artifacts, at various stages in the software process, have prompted a variety of empirical studies (Maletic et al., 2003; Schwarz et al., 2010; Mäder et al., 2017) and several areas of research, particularly in the inception of semantic domain knowledge (Marcus and Maletic, 2003; Zhao et al., 2017a). During the software process life cycle, complex traceability challenges emerge due to differential evolution and the heterogeneity of artifacts, rendering trace link retrieval onerous. This calls for a holistic framework that requires tools and techniques to be able to promote extensibility and automation (by having a common representation), mapping of native representations to a common representation, and rules defining consistency between artifacts (Pete and Balasubramaniam, 2015).
As we endeavour to achieve this framework, it is inevitable to acknowledge the role of natural language processing (NLP) in these efforts; a viable research frontier solution to traceability problems (Arunthavanathan et al., 2016). With recent advancements in NLP, we are addressing a critical need to consolidate and study all recent research efforts in this space.
Extracting information from a corpus of text to derive meaningful output is a technique most often found in NLP. In other words, semantic extraction is obtained from textual data and arranged in formal grammars that specify relationships between text units (Nadkarni et al., 2011). The role of NLP in software traceability addresses limitations of conventional Information Retrieval (IR), particularly around natural language data composition (Russell-Rose and Stevenson, 2009). NLP plays a vital role in these efforts, yet there is very little done to study the existing research efforts in this space. We have devised the following general topics of research focus:
1. Extracting meaningful information from software artifacts using NLP tools;
2. Recovering traceability links through automatic or semi-automatic approaches;
3. Binding the extracted information with domain-specific concepts to decipher context or domain.
These topics form the basis and rationale for this systematic mapping study (SMS), addressing the problem of traceability recovery through solutions of information retrieval with NLP. Given the width and the breadth of traceability in the software life cycle, an SMS is a more appropriate approach to uncover the ways in which NLP has been instrumented and deployed, and in which phase of the software life cycle. By conducting this study, we are able to consolidate diverse and scattered efforts across multiple branches, and identify key areas of gaps pertaining to traceability solutions that necessitate more attention.
The following research questions were outlined based on existing research and work in NLP for software traceability, and will be assessed as part of the SMS:
RQ1: What are the demographics of the published articles?
Rationale: This information gives us an overview of the publications’ metadata, enabling impact and quality analysis. We will also analyse high-impact publications as part of our study.
RQ2: What is the trend analysis of NLP techniques and tools proposed and evaluated in the published articles?
Rationale: This allows us to establish the state of existing knowledge and efforts, subsequently allowing us to identify research gaps in our current understanding, and predict how future trends may be.
RQ3: What is the trend analysis across the phases of the SDLC?
Rationale: By using the SDLC framework, we can identify key areas of NLP application in traceability that were proposed and evaluated in publications. Given the width and breadth of the SDLC, an SMS appears to be a better choice than a Systematic Literature Review (SLR).
RQ4: What are the reported key issues, barriers and setbacks?
Rationale: Through collating these, we are able to consolidate pain points and bottlenecks. This allows us to understand the perils and pitfalls of NLP in traceability so we can identify focus areas for future research.
RQ5: What are the open challenges?
Rationale: From the key issues, barriers and setbacks identified, we collate the themes covering these as open challenges.
This paper aims to tackle these questions by conducting a thoroughly focused, yet comprehensive, systematic mapping study. This paper addresses the need to consolidate recent NLP efforts in traceability, analyse what are the common issues, barriers, and setbacks to effective traceability, and provide recommendations to address open challenges. Section 3 will explain the methodology and data process behind the study. Section 4 will cover the results and subsequently will be discussed and analysed in Section 5. Section 7 finally concludes.
2. Background
2.1. Contextual definition
NLP is a branch of Artificial Intelligence and Linguistics that allows the representation and analysis of human language computationally (Khurana et al., 2017). Due to the recent phenomenon of vast amounts of unstructured textual data being collected and used for machine learning, applications of NLP to solve real-world problems are gaining more attention from researchers and practitioners alike. In the context of software engineering, NLP is utilised to harness value from the natural language present in software artifacts. Justification of the use of the textual format of these artifacts relates to the following (Yalla and Sharma, 2015):
- possibility for automation
- information that is naturally represented, thus making it recognisable and readable for humans
By leveraging the syntactic and semantic nature of software artifacts, we aim to study past and current efforts in trace-link recovery between software artifacts that used NLP techniques and tools. Our paper looks into multiple perspectives (orientation) of software traceability, and the application of NLP to achieve the goals of traceability between software artifacts, including the ‘golden challenge’ of ubiquitous traceability (Cleland-Huang et al., 2014), that is, instrumenting traceability to be built into the engineering process.
2.2. Related work
A mapping study of IR approaches to software traceability was completed in 2014, with a particular focus on previous evaluations and evidence strength (Borg et al., 2014). The study, however, excluded core NLP techniques such as machine learning (Spanoudakis et al., 2003) and semantic networks (Lindvall et al., 2009). These were disregarded in the study as they were considered too different to fit in the scope, due to their complexities in development and deployment. However, the landscape in NLP research has witnessed major breakthroughs in recent years, driving a new wave of tools and applications specifically for software engineering tasks. Examples of such applications include: training word embeddings in the software engineering domain space (Efstathiou et al., 2018), requirements classification using deep learning (Navarro-Almanza et al., 2017), and textual classification of natural language in software engineering text mining pipelines (Mäntylä et al., 2018).
A more recent review broadly focused on adopting NLP to mine unstructured data in software repositories (Gupta and Gupta, 2019). The review looked into general applications of mining repositories, with a sub-focus on traceability efforts. In terms of integrating NLP applications into the SDLC, an assessment of how NLP is employed in the different phases was given in Yalla and Sharma (2015). This integration, deemed multidisciplinary research, highlights the potential advantages: a more holistic approach to Computer Science and Engineering, greater possibility for automation, and a step closer to achieving universal programmability: the possibility to program in a natural language, without the need for a formal programming language (Tichy et al., 2013). In the context of traceability between artifacts, the need for precise semantics for trace links between heterogeneous systems is critical due to inadequate available tools (Mustafa and Labiche, 2017). This review highlighted the need to define a taxonomy for trace links, as characteristics of trace data are likely to be domain-, organisation-, or even project-dependent.
Although machine learning (such as NLP) has gained an incredible amount of attention only in recent years, one of the earliest systematic literature reviews of traceability approaches (specifically for software architecture and source code) analysed efforts in automatic traceability reconstruction using machine learning classifiers to detect tactic-related classes (Javed and Zdun, 2014): classes that were instrumental to implement tactical design decisions. This paper studies efforts in NLP application for traceability in recent years, postdating the study done in 2014 (Borg et al., 2014). Our study aims to look into recent applications of NLP that leverage the (natural language) semantics already present in these artifacts. This is an ongoing focus area in the field of information retrieval, particularly due to recent developments in computational power and the advent of large amounts of linguistic data (Torfi et al., 2021). There is a great need to consolidate and study these sporadic efforts across different platforms globally, in order to analyse trends in the techniques and tools used, analyse trends of traceability across the SDLC phases, and analyse the open challenges pertaining to NLP application for traceability. Our work contributes along this trajectory, serving as a reference checkpoint through the analysis of recent work and recommendations for future efforts.
3. Methodology
Following the updated guidelines for conducting systematic mapping studies in software engineering (Petersen et al., 2015), we define our methodology through the process of identifying, analysing, and interpreting all available evidence in a way that is unbiased and (to a degree) reproducible. The following steps were taken to address our research questions outlined in Section 1.
3.1. Mapping study planning
An overview diagram of our steps is shown in Fig. 1.
We extracted the content and metadata of each piece of literature using a systematic approach and applied various tools to gather all publications necessary within our scope. As shown in Fig. 1, 3.55% of the total result entries have been included as part of our study. This planning was done to ensure comprehensiveness in the study and to address the research questions at hand. Threats to the validity of our study strategy will be discussed in Section 5.
3.2. Search string
Table 1 shows the terms relevant to our search and their synonyms. These were derived to expand the boundaries of semantic keywords that are relevant to the research topics. We have separated the terms according to the relevant theme it belongs to, and only the most relevant synonyms (to our research questions) are shown in the table.
Forming the search string is the core component of any search strategy of a systematic review or mapping study that involves searching indexed literature databases, as it enables transparency for validation and reproducibility for others. An effective search strategy is usually iterative and benefits from trial searches using various combinations of search terms derived from the research question(s) (Kitchenham and Charters, 2007). This was incorporated into our search strategy in our study as follows.
3.2.1. Evaluating synonym terms
Including all the identified synonym terms would yield a wide coverage but be inundated with a great number of false positives. Hence, we evaluate the potential candidate synonym terms to determine those that will be included as our string output. The three components (themes) of our search string will need to be joined using the AND operator, which ensures that results will reflect a “must” rule that all these themes need to be covered. For the individual terms in each theme, we use the OR operator to join them. This is to ensure that every theme is represented by at least one of the terms.
3.2.2. Trial of potential candidate terms
Fig. 2 shows the combinations of terms that were tested. We grouped the synonyms according to common properties they share, denoted by the ovals. Each of these groups is then evaluated for effectiveness through trials, and a decision is made. Green-coloured groups were those chosen.
3.2.3. Decision and final string output
- Theme 1: (top-down order) Main terms, parent term, methods, model types, subject, and artificial intelligence.
- Theme 2: (top-down order) Main terms and offshoot terms.
Fig. 1. Overview of steps in mapping study planning.
Table 1
Terms table.
<table>
<thead>
<tr>
<th>Theme</th>
<th>Term</th>
<th>Synonyms</th>
</tr>
</thead>
<tbody>
<tr>
<td>Natural Language Information Retrieval (NL-IR)</td>
<td>NLP, natural language processing</td>
<td>Information retrieval, natural language understanding, text mining, language model, embedding, linguistic, lexical, text extracting, machine learning</td>
</tr>
<tr>
<td>Traceability</td>
<td>Traceability</td>
<td>Trace link recovery, trace retrieval</td>
</tr>
<tr>
<td>Software artifact</td>
<td>Software artifact</td>
<td>Source code, tests, documentation, requirements</td>
</tr>
</tbody>
</table>
For Theme 1, NLP and the meaning of its acronym had to be included. We also found that the generic term “information retrieval” widened the results beyond the scope of our RQs. The methods group for Theme 1 had to be included because the majority of efforts in text processing do involve natural language, although not explicitly mentioned in every case. “Linguistic” as a term produced similar results to “information retrieval”, and the artificial intelligence group was not effectively returning the right hits. NLP solutions that already use some form of machine learning are already covered by the main term “NLP”.
The terms in Theme 2 were more straightforward. We found out that using the term “traceability” was enough to generate the relevant papers in scope of our RQ, as the term is a commonly used term in software engineering, even without including the term “software”. We also discovered that lemmatising “traceability” to “trace” and adding “link” was useful to pick up cases where traceability happens without specifically mentioning that it is a traceability problem. For example, locating bugs in the source code, or linking requirements to test cases.
For Theme 3, we found out that including the artifact types into our search string restricted our scope of search — this is particularly due to the nomenclature used to represent artifacts produced throughout the SDLC, which can be numerous. We decided to only use the main terms, both spellings of “artifact” and “artefact”. As a result, we specified the following search string (in order) to extract all related publications within our scope:
("NLP" OR "natural language processing" OR "text mining" OR "text extracting") AND ("traceability" OR "trace link") AND ("software artifact" OR "software artefact")
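The composition rule of Section 3.2.1 (OR within a theme, AND across themes) is mechanical; a short Python sketch, with names of our choosing, reproduces the string above. The exact quoting syntax varies across literature databases.

```python
def build_query(themes):
    """Compose the boolean search string: terms within a theme are
    joined with OR, and the themes are joined with AND."""
    return " AND ".join(
        "(" + " OR ".join(f'"{t}"' for t in terms) + ")" for terms in themes
    )

print(build_query([
    ["NLP", "natural language processing", "text mining", "text extracting"],
    ["traceability", "trace link"],
    ["software artifact", "software artefact"],
]))
```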
As control papers, we used a 10% random sample of the set of papers obtained in the query: for the updated search query, the control papers used were (Pruski et al., 2015; Lin et al., 2021; Khatiwada et al., 2017; Salih et al., 2021; Ali et al., 2018; Lam et al., 2015; Capobianco et al., 2013a; Iammarino et al., 2020; Scanniello et al., 2015). These were analysed by the second author to make sure that the search query was appropriate, or whether it needed different terms.
3.3. Inclusion and exclusion criteria
To ensure our results are reflective of recent research, we have imposed inclusion criteria in terms of period scope: the years 2013 to 2021. By spanning this period of 9 years, we aim to fill the gap left by studies that predate our start year and focus on more recent developments of NLP-based IR in software traceability. For exclusion, we have disregarded content that is unrelated to (software engineering) traceability, such as other reviews and artifacts with no natural language.
For the exclusion criteria, we used the following filters to weed out the papers that are not within our scope:
1. Duplicates: repeated entries
2. Language: non-English papers
3. Data: incomplete (missing) data
4. Reviews: other reviews, surveys, and mapping studies
5. Context: irrelevance to our defined research topics
The exclusion process (filtering) of papers was necessary due to the abundant false positive results mainly from Google Scholar. Duplicates were identified through automated checking of integrity in titles and authors. For language, we only included those written in English. Incomplete and missing data refers to search results that do not fully reflect published material, for example, only the publication source was mentioned with no article title. We also excluded all other secondary and tertiary studies.
The final filter was ‘context’. We had to determine if the papers were relevant to our defined research topics. We start with the abstract (as it typically serves as the first point of entry). If relevance is not evident, we look into the research questions and methodology, as these describe the work done to achieve a goal and to answer the research questions. The first author was responsible for this task: 9 control papers (as defined above) were read by the second author to make sure that the context was relevant for the papers to be included. Since the control papers (selected randomly) were all found to be relevant to the context defined by the research questions, a Cohen’s kappa was not computed, as it was deemed unnecessary for determining agreement between reviewers.
3.4. Data extraction and management
Table 2 shows the literature databases that were used for our first step in data extraction. The aim was to gather all publications related to our study topics, using the search string defined above. The extraction was done either from the web page (via manual extraction using the Web UI) or via the API.
Google Scholar was further used to widen our search results: despite the abundance of false positives (noise), it has the potential to considerably extend the outreach of the systematic search (Harzing and van der Wal, 2008). The results in impact analysis (of publications) will be covered in Section 4.
After the cleaning step instrumented by the exclusion criteria, we gathered a total of 96 papers held by libraries worldwide. We have also ensured that all these were peer-reviewed publications. These were extracted, along with the metadata, and compiled into a spreadsheet consisting of all the information and content for each paper.
4. Results
The following are the results of our study based on our research topics. These results reflect our findings in NLP efforts in software traceability in recent years, answering our research questions at hand.
4.1. RQ1: Demographics of published articles
In terms of demographics for impact and quality analysis, we look at the following metrics:
- publication type, shown in Fig. 3
- citation count per year,¹ shown in Fig. 4
The complete list of papers in scope can be found in Appendix A. We have also included the respective sources (e.g., conference name) of each paper. The distribution of accepted papers is roughly two-thirds geared towards conference and workshop contributions, and the rest towards more established venues (books and journals). This is further proof that conference papers still attract quality contributions, although, however relevant and well-known a conference might be, this does not in itself define the quality of the papers it contains. Some noticeable conference venues are the International Conference on Software Engineering (ICSE, 4 papers), the International Conference on Software Maintenance and Evolution (ICSME, 4 papers), and the International Requirements Engineering Conference (RE, 4 papers). These are also examples of A*/A rated software engineering conferences, as listed in the CORE conference rankings, where they are labelled as flagship and excellent venues.

Fig. 3. Distribution of publication types.

¹ Number of citations / (current year − published year).
For citation count per year, we can see 7 outliers that are the top cited publications per year, corresponding to the papers (Panichella et al., 2013; Lam et al., 2015; Arora et al., 2015; Shokripour et al., 2013; Poshyvanyk et al., 2013; Wang et al., 2014; Lin et al., 2021). Although citation count is, arguably, a weak indicator of research quality for some (Aksnes et al., 2019), for the purpose of our mapping study we consider citation count a factor in research impact, and we will analyse these papers in Section 5.
4.2. RQ2: Trend analysis of NLP techniques and tools for traceability
In this study, we identify how NLP is being used to achieve traceability solutions. Not all NLP efforts are similar; hence, it is useful to categorise these efforts by amount of task complexity, so we can understand how much of NLP was involved in the traceability solutions. We categorise according to the following tiers:
- Tier 1: Only basic complexity tasks, such as processing text (stemming, pattern matching etc.) and tokenising. This category typically only deals with text syntax and no training is involved.
- Tier 2: Basic to intermediate tasks, such as training word embeddings and topic modelling. This category involves training models, pre-trained or otherwise. Semantics are involved and this is closely related to the naturalness of language.
- Tier 3: Basic to advanced tasks, such as implementing deep learning models. This category is an extension of Tier 2 where the semantics (context) of language is derived by (essentially) deep learning. This commonly involves the extended implementation of pre-trained deep learning models in the context of software traceability, such as augmenting neural networks with vector space models (VSM).
Distinction between these tiers is solely determined by task complexity: how much work (in processing natural language) has been done (not only for traceability purposes) to achieve the desired solution. For example, traceability work that uses a pre-trained deep learning model (e.g., BERT) would be classified as Tier 3, because deep learning is a relatively high-complexity task, albeit pre-trained. It is important to note that these tiers are not disjoint; rather, each tier is an extension of the preceding tier: Tier 2 includes the tasks of Tier 1, and Tier 3 includes the tasks of Tiers 1 and 2. For example, to train a transformer like BERT (Tier 3), basic tokenisation work still needs to take place. Regardless, segregating these tiers is necessary as it allows us to understand ‘up to’ what level of task complexity is involved in each paper. Classification into these tiers was performed based on the following steps:
1. In each paper, we extract two sections where present: Introduction and Methodology.
2. In the order listed above, we locate the application of NLP based on the proposed solution. Most of the proposed solutions are explained in the Introduction, although when not clear how NLP is applied, we use the Methodology section to identify the keywords which describe the task complexity involved.
3. Every solution typically involves multiple aspects, and where multiple NLP techniques and tools are applied, only the highest complexity is assigned.
The classification of papers into the three tiers was performed by the first author. The second author, using the subset of publications used as control papers above, applied the same three steps to determine which tier a paper belongs to. The results of the two classifications were later discussed and agreement was sought. Unsurprisingly, there was 100% agreement between the two authors on this sample of papers: this is due to how the tiers are formulated, and how clearly each tier is delineated from the others.
Fig. 5. Paper count throughout the years.
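To make the assignment procedure concrete, here is a minimal sketch of how steps 2 and 3 could be operationalised. The keyword lists are hypothetical; the actual classification was performed manually by the authors:

```python
# Hypothetical keyword lists: this sketch only illustrates how step 3
# ("highest complexity wins") can be applied mechanically.
TIER_KEYWORDS = {
    3: ("deep learning", "neural network", "bert", "transformer"),
    2: ("word embedding", "topic model", "lsi", "lsa", "vector space"),
    1: ("stemming", "tokenis", "stop word", "tf-idf", "pattern matching"),
}

def assign_tier(text: str) -> int:
    """Return the highest tier whose keywords appear in the given text."""
    text = text.lower()
    for tier in (3, 2, 1):  # the most complex tier takes precedence
        if any(kw in text for kw in TIER_KEYWORDS[tier]):
            return tier
    return 0  # no NLP-related keyword found

print(assign_tier("We fine-tune BERT and compare against a TF-IDF baseline"))  # 3
```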
Based on our analysis, all the task complexity properties are transitive: Tier 1 tasks are a subset of Tier 2 tasks and Tier 2 tasks are a subset of Tier 3 tasks. For example, one does not train a word embedding without having to pre-process the text, and one does not train a deep learning model without having to embed layers of word vectorisation models. Thus, these tiers are not disjoint: higher tiers include the tasks of lower-complexity tiers; in other words, an implicit "up to" applies to each. Fig. 5 shows the trend of the tools and techniques involved from a tiered perspective, in terms of published paper count, by year of publication.
Within each tier, we have multiple techniques and tools that were used as part of traceability solutions in our study. Table 3 shows NLP techniques involved in traceability solutions with the relevant papers involved. Table 4 shows external support tools and libraries that have been identified with the relevant papers involved.
4.3. RQ3: Trend analysis of NLP application for traceability across the SDLC phases
We look into traceability applications through the phases of the SDLC framework. Given that there is not one official SDLC model, we will be using the common de facto phases of the framework as our basis (Mishra and Dubey, 2013):
1. REQ: Requirements engineering (problem understanding)
2. DES: Design (planning)
3. CODE: Coding (implementation)
4. TEST: Testing
5. OPS: Deployment & Maintenance
To visualise the relationships identified effectively, we present Fig. 6: a bubble chart of the pairwise SDLC phase relationship counts over the years. The horizontal dotted line across 'REQ-CODE' shows the SDLC phase relationship that is present in all years, with 2019 showing the maximum count overall. Where there is no bubble in place, it means that the count is zero.
Every paper in scope has been involved in one or more pairwise SDLC relationships. In cases where papers involve multiple pairwise relationships (which are few), those papers appear in every bubble where the pairwise relationship is present for that year. In other words, papers are not exclusive to a single bubble: multiple bubbles may represent one paper that has multiple pairwise relationships. The distribution count is as follows:
- No. of papers with one pairwise relationship: 84
- No. of papers with two pairwise relationships: 11
- No. of papers with three pairwise relationships: 1
- Total no. of papers involved: 96
Fig. 6 also shows the ‘OTH’ (others) phase, which refers to artifacts involved outside of the SDLC phases identified in Section 4.3. Some examples of artifacts identified at ‘OTH’ are (informal) documentation, user queries, and release notes.
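The pairwise counts behind a chart like Fig. 6 can be assembled with a few lines of Python. This is a sketch over hypothetical records; bubble area is proportional to the count:

```python
import collections
import matplotlib.pyplot as plt

# Hypothetical (phase pair, year) records; a paper with two pairwise
# relationships contributes two records, mirroring the counting above.
records = [("REQ-CODE", 2013), ("REQ-CODE", 2019), ("REQ-CODE", 2019),
           ("REQ-DES", 2015), ("CODE-TEST", 2020), ("REQ-OTH", 2019)]

counts = collections.Counter(records)
pairs = sorted({pair for pair, _ in counts})

for (pair, year), n in counts.items():
    plt.scatter(year, pairs.index(pair), s=150 * n)  # bubble size ~ count
plt.yticks(range(len(pairs)), pairs)
plt.xlabel("Year of publication")
plt.show()
```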
4.4. RQ4: Key issues, barriers, and setbacks
We have identified eight key issues, barriers, and setbacks, outlined in Table 5, with relevant papers highlighting each of these. These were identified through analysing the discussion of results, which is typically found in the ‘Discussion’ section of each paper. We extracted all identifiable (implicit or explicit) issues, barriers, and setbacks that are direct results of using NLP in the proposed traceability solutions. Each of these is explained in this section and further discussed in Section 5.
4.4.1. Syntax convention
There does not exist a unified convention for naming syntax of various references in the artifacts, such as functions, variables, and classes. Due to this, we cannot generalise every model to be trained on certain specifics, and this hampers effective traceability efforts.
Table 3
NLP techniques identified in traceability solutions.
<table>
<thead>
<tr>
<th>Tier</th>
<th>Paper reference</th>
<th>Technique examples</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tier 1</td>
<td>Arunthavanathan et al. (2016), Wijesinghe et al. (2014), Salih and Sahraoui (2016), Alobaidi and Mahmood (2015), Kchaou et al. (2019), Nishikawa et al. (2015), Salih et al. (2021), Keim and Koziolek (2019), Pruski et al. (2015), Keim et al. (2021), Kchaou et al. (2017), Zamani et al. (2014), Lin et al. (2017), Pruski et al. (2014), Rashek et al. (2017), Li and Cleland-Huang (2013), Shokripour et al. (2013)</td>
<td>Parts-of-Speech (POS) tagging, Stemming, Lemmatising, Tokenising, Stopwords removal, Regular expressions, Key phrase extraction, Term frequency-inverse document frequency (TF-IDF)</td>
</tr>
<tr>
<td>Tier 2</td>
<td>Falessi et al. (2016), Kicsi et al. (2021), Zhao et al. (2017b), Kicsi et al. (2018), Csuvik et al. (2019b), Rubasinghe et al. (2018a), Harin and Singh (2022), Pauzi and Capiluppi (2021), Lapeña et al. (2019), Tian et al. (2019), Rubasinghe et al. (2018b), Liu et al. (2020a), Velasco and Aponte Melo (2019), Panichella et al. (2015), Csuvik et al. (2019a), Pauzi and Capiluppi (2020), Qusef et al. (2014), Alazzam et al. (2014), Gadelha et al. (2021), Iamarino et al. (2020), Rashek et al. (2019), Mills and Haiduc (2017), Tsuchiya et al. (2015), Liu et al. (2020b), Hey et al. (2021), Ali et al. (2018), Chen et al. (2019), Divya et al. (2014), Huang et al. (2016), Zhang et al. (2016b), Mahmoud and Niu (2015), Chen et al. (2021), Champagne and Carver (2020), Arora et al. (2015), Heck and Zaidman (2014), Khatiwada et al. (2017), Liu et al. (2019), Effa Bella et al. (2018), Mahmoud and Williams (2016), Mahmoud (2015), Scanniello et al. (2015), Xia et al. (2014), Xie et al. (2019), Wang et al. (2014), Yang and Lee (2021), Malhotra et al. (2018), Zhou et al. (2017), Eder et al. (2015), Zhang et al. (2016a), Gharibi et al. (2018), Capobianco et al. (2013), Daghastan et al. (2013), Panichella et al. (2013), Borg et al. (2013), Poshyvanyk et al. (2013), Berta et al. (2017), Zhang et al. (2021)</td>
<td>Latent semantic indexing (LSI), Latent semantic analysis (LSA), Embeddings, Vector space model (VSM), Topic modelling, Translation (language), Named entity recognition</td>
</tr>
</tbody>
</table>
4.4.2. Configuration
Finding the optimum configuration may be possible for one use case. However, in reality, artifacts evolve over time (through active development), and (optimal) configurations change as well. Although NLP has been effective in recovering missing and broken trace links, configuration remains a pertinent issue in achieving effective traceability. In deep learning tasks (Tier 3), searching for the optimal configuration (exhaustive evaluation) poses other issues, such as computational costs, time complexities, and hardware carbon footprint (Lauriola et al., 2022).
Table 4
External NLP supporting tools/libraries identified.
<table>
<thead>
<tr>
<th>Tools</th>
<th>Paper reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>ANTLR<sup>d</sup></td>
<td>Arunthavanathan et al. (2016), Rubasinghe et al. (2018a, 2020, 2018b)</td>
</tr>
<tr>
<td>DBpedia<sup>e</sup></td>
<td>Alobaidi and Mahmood (2015), Malik et al. (2016), Mahmood et al. (2015)</td>
</tr>
<tr>
<td>BabelNet<sup>f</sup></td>
<td>Alobaidi and Mahmood (2015), Malik et al. (2016), Mahmood et al. (2015), Liu et al. (2020b)</td>
</tr>
<tr>
<td>BERT (Devlin et al., 2018)</td>
<td>Kicsi et al. (2021), Thommazo et al. (2014), Lin et al. (2021), Csuvik et al. (2020), Keim et al. (2020b), Hey et al. (2021), Keim et al. (2020a)</td>
</tr>
<tr>
<td>NLTK<sup>g</sup></td>
<td>Falessi et al. (2016), Zhao et al. (2017b), Singh (2022), Wang et al. (2019), Liu et al. (2020a), Gadelha et al. (2021), Hey et al. (2021), Gharibi et al. (2018), Berta et al. (2017)</td>
</tr>
<tr>
<td>FastText<sup>i</sup></td>
<td>Pauzi and Capiluppi (2021, 2020), Hey et al. (2021)</td>
</tr>
<tr>
<td>SpaCy<sup>j</sup></td>
<td>Pauzi and Capiluppi (2021, 2020), Gadelha et al. (2021), Hey et al. (2021), Gharibi et al. (2018)</td>
</tr>
<tr>
<td>GATE<sup>k</sup></td>
<td>Malik et al. (2016), Mahmood et al. (2015), Arora et al. (2015), Zamani et al. (2014)</td>
</tr>
<tr>
<td>GloVe<sup>l</sup></td>
<td>Effa Bella et al. (2019), Gadelha et al. (2021), Liu et al. (2019), Gharibi et al. (2018)</td>
</tr>
<tr>
<td>Apache OpenNLP<sup>m</sup></td>
<td>Arunthavanathan et al. (2016), Lapeña et al. (2019), Salih et al. (2021), Mahmoud and Niu (2015), Arora et al. (2015), Mahmoud and Williams (2016)</td>
</tr>
</tbody>
</table>
<sup>a</sup>https://wordnet.princeton.edu
<sup>b</sup>https://nlp.stanford.edu
<sup>c</sup>https://lucene.apache.org
<sup>d</sup>Another Tool for Language Recognition: https://www.antlr.org
<sup>e</sup>https://www.dbpedia.org
<sup>f</sup>https://babelnet.org
<sup>g</sup>Natural Language Toolkit: https://www.nltk.org
<sup>h</sup>https://radimrehurek.com/gensim
<sup>i</sup>https://fasttext.cc
<sup>j</sup>https://spacy.io
<sup>k</sup>General Architecture for Text Engineering: https://gate.ac.uk
<sup>l</sup>Global Vectors for Word Representation: https://nlp.stanford.edu/projects/glove
<sup>m</sup>https://opennlp.apache.org
4.4.3. Translation (language)
Translation of languages is a service that is integral to any traceability solution that involves unifying cross-language artifacts. Dependency on the effectiveness of this service (measured by the accuracy of cross-language information retrieval output) proves to be a setback to effective traceability. A comparative study done in 2015 observed that different translation services can result in considerably different retrieval behaviours for individual queries, for different language pairs and applications (Hosseinzadeh Vahid et al., 2015).
4.4.4. Properties (representation) of artifacts
As we implement traceability solutions using NLP (such as similarities in vectors), software artifact properties constantly change and traceability solutions using NLP do not keep up. Besides change management, this issue is also relevant for the representation of software artifacts throughout different SDLC phases. For example, in the Design phase where UML diagrams are used, some form of parser needs to be implemented to unify these representations with other artifacts from other SDLC phases.
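As an illustration of such a parser, the sketch below extracts UML class names from an XMI export so they can be matched against artifacts from other phases. The tag and attribute names are assumptions, since XMI output varies per tool:

```python
import xml.etree.ElementTree as ET

# Assumption: the exporter marks classes with an xmi:type attribute in the
# OMG XMI namespace; real exports differ between UML tools.
XMI_TYPE = "{http://www.omg.org/XMI}type"

def uml_class_names(xmi_path: str) -> list[str]:
    """Collect UML class names for matching against other SDLC artifacts."""
    root = ET.parse(xmi_path).getroot()
    return [el.get("name") for el in root.iter()
            if el.get(XMI_TYPE) == "uml:Class" and el.get("name")]
```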
4.4.5. Explainability
The lack of explainable and interpretable models is a key barrier to effective traceability. This becomes more prominent in higher tiers of task complexity as state-of-the-art pre-trained
models, although scoring high in benchmarked NLP tasks, are typically black-box in nature and serve very little purpose in situations where traceability becomes a core component mandated by requirement standards and regulations, such as for medical device software (Regan et al., 2013).
4.4.6. Dependency on tacit knowledge
There is still a considerable amount of dependency on tacit knowledge that is integral to traceability solutions with NLP. This dependency is hampering efforts in automated effective traceability due to the limitations of models in every domain, which is also related to the artifacts property (representation) issue where it is not a one-size-fits-all policy for all SDLC phases.
4.4.7. Scalability
Scaling the solutions in traceability efforts is identified as a key barrier, particularly in large-scale systems. In object-oriented programming, encapsulation of objects helps to improve scalability due to the isolation of internal modifications of any one object (Corriveau, 1996). Despite this, traceability between software artifacts does not automatically follow, especially when large systems involve complex trace links with an increasing number of artifacts and developers involved. This is also an extension of the configuration issue, where compute and time complexities severely affect effective traceability efforts.
4.4.8. Data availability
In supervised and semi-supervised strategies, we require vast amounts of training data specific to the software engineering domain. In an ideal world, all of this data is annotated and ontologies are well-defined; however, that is not the case in reality. Annotation of data is an expensive, laborious, and time-consuming task that does not appeal to many, and this has prompted a variety of solutions such as crowdsourcing through Amazon Mechanical Turk (Snow et al., 2008).
4.5. RQ5: Open challenges
From these key issues, barriers, and setbacks, we identify three themes that are presented as open challenges in recent applications of NLP in traceability.
4.5.1. Syntax and semantic similarities in representation across artifacts
Traceability between artifacts stems from identifying components that are linked to one another. To achieve this, the manifestation of concepts (through the artifacts’ components) needs to be synchronised in terms of syntax and semantic similarities. This challenge is one that NLP solutions for traceability continue to face.
4.5.2. Effectiveness in automated software traceability
As software systems continue to evolve in scale and complexity, the call for automated traceability has never been more critical. The number of traceability links that need to be captured grows exponentially with the size and complexity of the software system (Cleland-Huang et al., 2003). Moreover, consistent changes throughout the SDLC pose a significant challenge to the maintenance of traceability links, with studies showing that change can be expected throughout the life cycle of every project (Boehm, 2003). In the noble quest for automated traceability, the effectiveness of these solutions continues to be an open challenge.
4.5.3. Achieving scalable, adaptive, and explainable models
Recent works (especially in deep learning and off-the-shelf solutions) have resulted in an increasing number of black-box NLP services and tools. Traceability solutions need to be transparent, especially when traceability is a factor in requirements validation and tracing of regulations. Moreover, the challenge of scaling and adapting NLP solutions continues to be an open challenge for interoperability. Any trade-offs between implementing an NLP component to achieve successful traceability, and the extra resource it needs, have to be justified.
5. Discussion
To further elaborate our findings based on our research questions outlined in Section 1, we will discuss the results of our study.
5.1. RQ1: Demographics and quality analysis
Fig. 3 shows the percentage spread of publication type, with conference proceedings (62%) and journal articles (34%) making up most of the papers selected. All of the conferences and journals (where the papers selected were published) were peer-reviewed and some were shown as outliers for having higher citation per year metrics compared to the dataset (Fig. 4).
In Computer Science, the citation count of conferences is no lower than that of journals. Moreover, analysis has shown that Computer Science, as a discipline, values conferences as a publication venue more highly than any other academic field of study (Vrettas and Sanderson, 2015). As we look into our outliers more closely, we present a summary of the traceability solutions proposed in each and how NLP was applied, shown in Table 6 (only those with cites per year ≥ 10 are shown). As visible in the table, the majority of the outlier papers come from the top publishing venues in software engineering (ACM/IEEE International Conference on Software Engineering and IEEE Transactions on Software Engineering), and the citation counts reflect a growing trend as the papers get older.
5.2. RQ2: Trend analysis of NLP techniques and tools for traceability
We look into how the techniques and tools in NLP evolved over recent years. Based on Fig. 5, we can see that the majority of NLP efforts are in the Tier 2 category, involving 'basic' to 'intermediate' tasks, with a prominent spike in 2019. During the early years of our scope (2013–2017), these were used mainly to process text, represent text as vectors, and use the represented vectors in a space model (VSM etc.) to detect similarities. The role of NLP has evolved over recent years due to the proliferation of efforts in combining machine learning with basic text processing. This trend continues, with a focus on deep learning, such as with transformers (Vaswani et al., 2017). The spike in 2020 (for Tier 3) may be attributed to the recent increase in research interest in state-of-the-art deep learning tools in NLP, such as the introduction of Convolutional Neural Networks, previously more common in Computer Vision (Moreno Lopez and Kalita, 2017), BERT (Devlin et al., 2018), and Huggingface Transformers in 2019 (Wolf et al., 2019).
To further understand the trend beyond using the period of years as our timeline, we should consider the research impact that each tier has (Which areas are cited the most? Where is the attention drawn to?). This can be done by using citation analysis for each tier; citations per year (for each tier) indicate the amount of attention (impact) the research has. Table 7 shows the average citations per year for each tier category.
From the table, we can see that despite Tier 3 having the fewest papers published overall, its average citation count per year is the highest of all tiers (4.51). The aforementioned spike in 2020 for Tier 3 is still considerably lower than Tier 2's spike in 2019; however, this citation analysis may indicate that the research impact of deep learning (for NLP applications in traceability) is the largest. It is still too early to conclude how the trend of deep learning in NLP will go (in the field of traceability), but in general, we can see an upward trend in deep learning across software engineering (Ferreira et al., 2021).
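A Table 7-style aggregation can be reproduced with a simple per-tier average. The records below are hypothetical, not the study's data:

```python
from statistics import mean

# Hypothetical per-paper records: (tier, citations per year).
records = [(1, 2.1), (1, 3.3), (2, 3.4), (2, 3.6), (3, 4.4), (3, 4.6)]

for tier in (1, 2, 3):
    cpys = [c for t, c in records if t == tier]
    print(f"Tier {tier}: {len(cpys)} papers, avg {mean(cpys):.2f} citations/year")
```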
5.3. RQ3: Trend analysis of NLP applications for traceability across SDLC
Based on Fig. 6, we can see the SDLC phases where traceability with NLP occurs more frequently, i.e., relationships involving REQ, CODE, and DES phases. As noticed above, Requirements Engineering is the area with the most traceability activities throughout recent years, followed by Design and Bug Localisation, respectively.
5.3.1. Requirements traceability
The trend of tracing requirements to source code (and vice versa) using NLP is very common throughout the years, with a considerable spike in 2019, as seen in Fig. 6. Artifacts pertaining to the REQ phase (such as functional and non-functional requirements) are generally written in natural language, with no observable unified structure behind the language and syntax. Bi-directional traceability (Salih et al., 2021), linking to UML diagrams (Arunthavanathan et al., 2016; Salih and Sahraoui, 2018; Kchaou et al., 2019; Salih et al., 2021; Panichella et al., 2015; Kchaou et al., 2017; Effa Bella et al., 2018), fuzzy logic (Thomaz et al., 2013), and reducing false positives (Effa Bella et al., 2019; Capobianco et al., 2013b) are some examples of how NLP was used during the REQ phase.
To further develop this trend, tracing requirements to other artifacts, such as UML diagrams and source code, is necessary, and in some cases mandatory, to adhere to regulatory compliance. For healthcare systems, we have HIPAA (Health Insurance Portability and Accountability Act) (Florez, 2019; Velasco and Aponte Melo, 2019; Lin et al., 2017; Effa Bella et al., 2018). In airspace systems, the National Aeronautics and Space Administration (NASA) strives to ensure FAA
Table 6
Summary of outlier papers (cites per year ≥ 10) and how NLP was applied.
<table>
<thead>
<tr>
<th>Citations per year</th>
<th>Paper title & reference</th>
<th>Publication source</th>
<th>Summary of NLP application for traceability</th>
</tr>
</thead>
<tbody>
<tr>
<td>38.11</td>
<td>How to effectively use topic models for software engineering tasks? an approach based on genetic algorithms (Panichella et al., 2013)</td>
<td>2013 35th International Conference on Software Engineering (ICSE)</td>
<td>LDA-GA: Using Genetic Algorithms (GA) to determine near optimal configuration for LDA topic modelling.</td>
</tr>
<tr>
<td>24.71</td>
<td>Combining deep learning with information retrieval to localise buggy files for bug reports (Lam et al., 2015)</td>
<td>2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)</td>
<td>HyLoc: Combining Deep Neural Network (DNN) with rVSM (revised Vector Space Model) for bug localisation.</td>
</tr>
<tr>
<td>19.00</td>
<td>Automated checking of conformance to requirements templates using natural language processing (Arora et al., 2015)</td>
<td>IEEE Transactions on Software Engineering</td>
<td>Template Conformance Checking (TCC): Text chunking and pattern matching to automate requirements conformance.</td>
</tr>
<tr>
<td>17.67</td>
<td>Why so complicated? simple term filtering and weighting for location-based bug report assignment recommendation (Shokripour et al., 2013)</td>
<td>2013 10th Working Conference on Mining Software Repositories (MSR)</td>
<td>Two phase location-based approach to bug localisation by predicting relevant files and creating a noun index.</td>
</tr>
<tr>
<td>14.00</td>
<td>Concept location using formal concept analysis and information retrieval (Poshyvanyk et al., 2013)</td>
<td>ACM Transactions on Software Engineering and Methodology</td>
<td>Using LSI to map textual descriptions of features or bugs to source code.</td>
</tr>
<tr>
<td>11.63</td>
<td>Compositional vector space models for improved bug localisation (Wang et al., 2014)</td>
<td>2014 IEEE International Conference on Software Maintenance and Evolution (ICSM)</td>
<td>Composing various VSM variants based on Genetic Algorithms (GA) for bug localisation.</td>
</tr>
<tr>
<td>10.00</td>
<td>Traceability transformed: Generating more accurate links with pre-trained bert models (Lin et al., 2021)</td>
<td>2021 43rd International Conference on Software Engineering (ICSE)</td>
<td>Trace BERT (T-BERT): Three step training of T-BERT models to recover links between issues and commits.</td>
</tr>
</tbody>
</table>
Table 7
Average citations per year for each tier category.
<table>
<thead>
<tr>
<th>Category</th>
<th>Total paper count</th>
<th>Average citations per year</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tier 1</td>
<td>15</td>
<td>2.68</td>
</tr>
<tr>
<td>Tier 2</td>
<td>71</td>
<td>3.52</td>
</tr>
<tr>
<td>Tier 3</td>
<td>10</td>
<td>4.51</td>
</tr>
</tbody>
</table>
5.3.2. Bug localisation
NLP is commonly applied in bug reporting and in locating areas of concern. Examples include comparing bugs to generated patches (Csuvik et al., 2020), tracing between bug reports and source code (Khatiwada et al., 2017; Liu et al., 2019; Wang et al., 2014; Lam et al., 2015; Malhotra et al., 2018; Jiang et al., 2020; Zhou et al., 2017; Gharibi et al., 2018; Shokripour et al., 2013), cross-language bug tracing (Xia et al., 2014), and commit information (Yang and Lee, 2021).
In the current landscape of large evolving software systems, locating bugs (typically within the source code) is a challenging task. Our study looks into traceability between artifacts, and for bug localisation, we have identified bug reports to be the focal artifact involved in bug localisation. Natural language in bug reports is a common target for NLP tasks (such as traceability, which is the entirety of our study), so de-noising these bug reports to isolate the non-natural languages helps the cause (Hirsch and Hofer, 2022).
One common example of bug localisation is tracing the components of a bug report to source code. Bug reports are a form of change request, which serves to change the existing program elements (e.g. source code files) to correct an undesired behaviour of the software (Dilshener et al., 2017). This allows developers to identify what needs to be rectified and modified in the source code to remove the bug, which is a core software maintenance task. Through the lens of traceability using NLP, these components may relate to terms that match between bug reports and source code. Empirical studies have shown that vocabulary used in bug reports was also present in the source code files (Moreno et al., 2013; Saha et al., 2013), be it an exact or partial match of program elements (i.e. class, method, or variable names and comments). This matching (syntax and semantic similarity) paves the way for NLP to determine bug locations more effectively.
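A minimal sketch of this idea, scoring source files against a bug report with TF-IDF and cosine similarity. The artifacts are hypothetical, and real approaches such as rVSM refine this basic scheme:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical artifacts; identifiers are pre-split for illustration only.
source_files = {
    "Cart.java": "class cart add item remove item compute total price",
    "Login.java": "class login authenticate user password session",
}
bug_report = "checkout shows wrong total price after removing an item"

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(source_files.values()) + [bug_report])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank candidate buggy files by lexical similarity to the bug report.
for name, score in sorted(zip(source_files, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```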
5.3.3. Continuously developed tools
We have also identified some tools that were developed continuously across the years (covered by multiple papers reflecting incremental development) and across the SDLC phases, namely Software Artifacts Traceability Analyzer (SAT-Analyzer) and TiQi. NLP was first introduced in SAT-Analyzer for addressing artifact inconsistencies due to natural language representation (Arunthavanathan et al., 2016); its usability was later improved through automated generation of XML input from requirement artifacts, evaluated in a case study on a Point-of-Sale (POS) system (Rubasinghe et al., 2018b). SAT-Analyzer was also extended to DevOps practices as a traceability management tool for continuous integration and multi-user collaboration (Rubasinghe et al., 2018a, 2020). TiQi, on the other hand, focuses on trace queries that are generally complex and naturally worded, transforming them into executable SQL statements (Pruski et al., 2014). A more in-depth description of the architecture, design, and heuristic rules was published in a later paper (Pruski et al., 2015), and a demo was made available online (Lin et al., 2017).
5.4. RQ4: Key issues, barriers, and setbacks
We dive into each of these key points to understand further how the papers have contributed to the aforementioned issues, barriers, and setbacks.
5.4.1. Syntax convention
Our study has found that some assumptions had to be made in the semantic representation of syntax used in artifacts: for example, that developers only use expressive, non-abbreviated variable names, such as those contained in BERT's vocabulary (Keim et al., 2020b,a).
Lack of a generally used annotation of artifacts (Kicsi et al., 2018) and imperfect naming (Csuvik et al., 2019b) typically lead to inaccurate links. The added challenge of artifacts such as non-functional requirements (Mahmoud and Williams, 2016) hinders traceability efforts due to the lack of homogeneity in syntax representation: natural language pertaining to non-functional requirements is less explicit in tracing links. Moreover, the detection of constraints in non-functional requirements becomes more difficult due to the lack of robust modelling and documentation techniques (Mahmoud, 2015).
In a case study of SAT-Analyzer, it was observed that inaccurate extraction and identification of artifact elements with NLP, caused by differing naming conventions and less meaningful names in requirement artifacts, led to a lack of accuracy (Rubasinghe et al., 2018b). Semantic ambiguities in artifacts written in natural language pose a challenge in tracing explicit links with other artifacts, based on the syntax used (Kchaou et al., 2019).
In specific critical contexts, such as healthcare regulations, desired levels of granularity in traceability are often not achieved. The regulations related to audit control standards and session expiration in the implementation of healthcare systems were the hardest to trace to source code statements: very few lines and source code structures related to these requirements were successfully mapped (Florez, 2019).
5.4.2. Configuration
Although NLP has been effective in recovering missing and broken links in self-adaptive systems, it can introduce significant overhead (Hariri and Fredericks, 2018). Threshold values of semantic similarity are typically a 'moving goalpost', and high confidence values, such as 95% (Singh, 2022), were chosen arbitrarily to represent strong confidence. Selection and tuning of parameters affect the accuracy of results, and static configurations are identified as an internal threat to the validity of results (Ali et al., 2015). Automated configurations, such as for Latent Semantic Indexing (Eder et al., 2015), improve applicability, although the computation overhead can be significant.
As mentioned in the previous section, exhaustive evaluation for optimal configuration results in various complications, such as significant computational costs and time complexities. This is exacerbated by the continuously changing nature of artifacts throughout the SDLC phases, rendering traceability efforts even more challenging. Achieving this (near) optimal configuration for topic modelling was the goal of one of our papers, which introduced Genetic Algorithms (GA) with LDA to boost the accuracy of traceability link recovery (Panichella et al., 2013), among other tasks. This paper also highlighted the need for an efficient method to find the best configuration of parameters, as an exhaustive analysis of all possible combinations is deemed impractical.
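To make the cost of configuration search concrete, the sketch below sweeps a single LDA hyperparameter by exhaustive evaluation over a toy corpus. The LDA-GA work replaces such sweeps with a genetic search; this simple grid only illustrates why exhaustive evaluation becomes impractical as the parameter space grows:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["user login fails", "add item to cart", "cart total is wrong",
          "password reset email", "session expires too early"]  # toy artifacts

X = CountVectorizer().fit_transform(corpus)

# Even a sweep over one hyperparameter multiplies training runs; each extra
# parameter multiplies them again.
scores = {k: LatentDirichletAllocation(n_components=k, random_state=0)
                .fit(X).perplexity(X)
          for k in range(2, 6)}
best_k = min(scores, key=scores.get)
print(f"best n_components={best_k} (perplexity={scores[best_k]:.1f})")
```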
Effective traceability is crucially dependent on the performance of the models used, which is determined by their configuration settings. One key aspect of this is hyperparameter tuning, which can often make the difference between a mediocre model and a state-of-the-art one (Eggensperger et al., 2015).
5.4.3. Translation (language)
Reported setbacks in these efforts concern the effectiveness of translation services that are readily available (Yildiz et al., 2014; Liu et al., 2020a; Xia et al., 2014). Despite these translation services being mainly black-box in nature, they are critical to the effectiveness of traceability. There is no generic dictionary (model) for all languages, as each language has its own rules of grammar (syntax) and its own semantic interpretation of words used. However, we do have a recent primer publication on pre-trained multilingual embeddings (Doddapaneni et al., 2021), yet to be fully utilised in software engineering.
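As a hedged sketch of how pre-trained multilingual embeddings could sidestep an explicit translation step, the following compares an English requirement with a German code comment in a shared vector space. The model name is an assumption, not one used by the surveyed papers:

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: any multilingual sentence-embedding model that maps languages
# into one vector space plays the same role.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

requirement_en = "The user shall be able to reset the password"
code_comment_de = "Setzt das Passwort des Benutzers zurueck"

embeddings = model.encode([requirement_en, code_comment_de])
print(util.cos_sim(embeddings[0], embeddings[1]))  # no explicit translation step
```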
5.4.4. Properties (representation) of artifacts
In a dynamically integrated environment (Rubasinghe et al., 2018a, 2020, 2018b), artifacts transform constantly and this hampers traceability efforts. In cases where traceability is necessary for regulations (Florez, 2019; Arora et al., 2015), the natural language used in these documents is not represented similarly to other artifacts, such as functional and non-functional requirements. Adaptive standard feedback was also proposed upon the consideration that software artifacts do not share the same properties of natural language documents, on which the standard feedback relies (Panichella et al., 2015).
Semantic similarities can also be challenging with natural language due to polysemy (Wang et al., 2018), non-uniform identifiers (Pauzi and Capiluppi, 2020), ambiguity in content (Kchaou et al., 2019; Pruski et al., 2014), and vocabulary mismatch (Khatiwada et al., 2017).
5.4.5. Explainability
Despite huge successes in large language models, their black-box nature hinders key goals of NLP, particularly in explainability (Lin et al., 2021; Keim et al., 2020b,a). In cases where traceability plays an important role (such as adherence to regulations and auditing), the black-box nature of these advanced solutions proves as a hindrance, as validation of results becomes difficult (Velasco and Aponte Melo, 2019).
5.4.6. Dependency on tacit knowledge
This is more prominent in traceability use cases pertaining to software architecture where experiential knowledge is vital in recovering architectural trace links (Keim and Koziolek, 2019) and links between requirements and process models (Lapeña et al., 2019).
5.4.7. Scalability
Large-scale systems pose a challenge in traceability management due to the complexity of trace links, particularly in visualisation (Rubasinghe et al., 2020; Chen et al., 2018). This also relates to time and compute resource complexities, and becomes even more challenging in environments where constant change is present (Rubasinghe et al., 2018a,b).
5.4.8. Data availability
The amount of labelled data to train classifiers is not as abundant as we ideally need it to be, and this poses a setback for effective training in supervised models for traceability (Chen et al., 2021). The amount of annotated data in some domains is richer than in others, which is heavily dependent on the efforts of the community. This translates to varying levels of model accuracy for different domains, which affects traceability effectiveness. Models can only train on data that is available, and the performance of any model is entirely dependent on the data that it is trained on.
5.5. RQ5: Open challenges
To answer RQ5, we first need to be able to identify the pertinent issues that arise; and second, through understanding the pain points, we can derive and model the open challenges. Fig. 7 shows the mapping of open challenges from the key issues, barriers, and setbacks that were identified in Section 5.4.
5.5.1. Syntax and semantic similarities in representation across artifacts
The first and foremost open challenge of NLP is primarily derived from the most recurring issue reported in our study (see Section 4.4.4), and centred around the role NLP plays in traceability: processing natural language in artifacts. The natural language present in artifacts needs to be represented uniformly in various parts of the SDLC, and achieving similarity in each of those representations is an open challenge that NLP continues to play a major part in solving.
5.5.2. Effectiveness in automated software traceability
Software artifacts are not entirely similar to natural language documents, and NLP advancement efforts are largely based on use cases pertaining to human communication, such as developing cognitive (intelligent) skills through natural language understanding. This direction is not entirely useful for software engineering purposes, particularly relating to traceability. The open challenge is in leveraging and harnessing the value of NLP techniques, focusing NLP advancement efforts on the field of software engineering. Moreover, full automation of traceability efforts continues to pose a common challenge despite recent successes in language models.
5.5.3. Achieving scalable, adaptive, and explainable models
NLP models that are involved in traceability efforts face significant challenges to scale and adapt in tandem with how software systems change and evolve throughout the SDLC. This open challenge is a derivative of the identified issues pertaining to scalability, data availability, and explainability. Explainable AI is a critical component for adopting machine learning models in any decision-making process, and traceability is no different. In software engineering, the adoption of these models is hindered by the lack of explainability and understanding of how these models work (Tantithamthavorn and Jiarpakdee, 2021).
5.6. Recommendations
Fig. 8 presents a mapping diagram to show the relationships between the open challenges and recommendations. The following are our points of recommendation in addressing the three open challenges, as described above in Section 5.5.
5.6.1. A holistic framework model for NLP solutions to achieve effective traceability
NLP techniques and tools have played a major role in processing and vectorising text; serving as some form of natural language decoder to unify representations across artifacts for traceability. We recommend efforts in developing a holistic framework model to achieve effective traceability, subsequently addressing key open challenges of NLP in traceability. A holistic framework should fulfil the following:
- Techniques and tools in NLP that are representative of the software engineering domain. Currently, efforts are sparse and scattered, focusing on very specific parts of software engineering that are isolated.
- A unified ontology across the software engineering domain space, through consolidating and integrating taxonomies across multiple domains in software engineering.
- Models that 'understand' natural language across the various SDLC phases. Natural Language Understanding (NLU) is an extension of NLP where models are able to comprehend terms that are specific to the SDLC phases, and across these phases, through intent classification, confidence score stability, and entity extraction (Abdellatif et al., 2021).
5.6.2. Towards achieving interoperability and explainability
Models have to be transparent, scalable, and accurate in recovering trace links (i.e. effective traceability). We propose to ensure that applications of NLP in traceability are transparent and explainable. Efforts in NLP research for traceability should not only focus on having the next best model that supersedes the accuracy scores of previous models in determining trace links, but also on proving scalability and providing explainability. We need some form of global certification and validation process to be able to certify models as experts. Moreover, we need to incorporate efforts in explainable Artificial Intelligence (AI) and model reasoning to reduce bias and fill the gap of dependencies on tacit, experiential knowledge from human experts.
6. Threats to validity
In this section, we outline the threats to validity identified throughout our mapping study process. Based on a recent map of threats to validity in systematic mapping studies in software engineering (Zhou et al., 2016), we looked into all possible threats that emerge from conducting our study.
6.1. Construct validity
Our research questions and methodology may not entirely cover every aspect of studying how NLP is used for software traceability. However, we ensured that our research strategy was thorough and comprehensive in addressing the key gaps pertaining to NLP in software traceability, and we adhered to the guidelines outlined in Petersen et al. (2015). It is important to stress again that a systematic literature review would be less suitable for uncovering the existing methods and approaches based on NLP, and it would face a larger threat to construct validity than the mapping study presented in this work.
6.2. Internal validity
The search for relevant papers to populate our mapping study was thoroughly executed: multiple library databases were used, including a search aggregation engine (Google Scholar) that covers a wide range of databases and libraries. Addressing internal threats to validity is critical in mapping studies: the findings need to be unbiased and the search string needs to be reflective of our study scope.
6.3. External validity
The specificity of the techniques, tools, and trends analysed in our study may not generalise outside of our search scope. Research efforts in NLP and traceability continue to evolve rapidly, and the choice of focus may affect the results generated. To reduce this threat, and for the sake of generalisability, we proposed a tier categorisation for NLP techniques and focused our recommendations on common key issues, barriers, and setbacks rather than specific ones.
6.4. Conclusion validity
The limited availability of published efforts in NLP and software traceability may impact the conclusions derived from our study scope, especially on empirical evidence in the industry for traceability efforts that are not published. Incorporating synonyms of terms using the Google Scholar search engine as part of our data ingestion pipeline helped us reduce this threat, despite returning abundant false positives.
7. Conclusion
This paper presents a systematic mapping study focusing on NLP and its applications in the context of software traceability. A total of 96 papers, covering the years 2013 to 2021, were obtained during the selection process. We looked into the different ways NLP was leveraged to aid traceability efforts across the various phases of the SDLC. We analysed the trend of techniques and tools used, the trend of traceability activities that were involved, and identified key issues, barriers, and setbacks to these traceability efforts. From these, we identified open challenges and presented key recommendations for addressing them.
The field of research in NLP is continuously evolving, and while major use cases of these efforts are typically related to human communication (i.e. human language), there is great potential value for NLP to be further leveraged effectively in software traceability. By conducting this mapping study, we are able to consolidate recent efforts in attempting to take advantage of these techniques and tools to solve traceability problems, particularly through automating redundant tasks and solving key issues that arise from conventional IR techniques. This study serves as a checkpoint for researchers and practitioners to have a wide angle of view across the various efforts within our scope of the study. Based on the trend analysis done and the open challenges identified, this study has presented two key recommendations in moving forward: a holistic framework for NLP solutions and efforts in achieving interoperability and explainability in NLP models.
CRediT authorship contribution statement
Zaki Pauzi: Conceptualization, Methodology, Software, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Andrea Capiluppi: Validation, Resources, Supervision, Writing – review & editing, Project administration.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article.
Review of semantic enablement techniques used in geospatial and semantic standards for legacy and opportunistic mashups
Laurent Lefort
CSIRO ICT Centre
GPO Box 664 Canberra, ACT 2601, Australia
laurent.lefort@csiro.au
Abstract
Networks of sensors are increasingly used to monitor essential environmental variables for biodiversity, water, and climate change research. Such multidisciplinary scientific projects require more flexible ways to publish and aggregate sensor observations from different networks as mashable web resources. Semantically-enabled and linkable descriptions of sensors and sensors services can simplify the integration of legacy backend sensor web services and make it easier for mashup developers to opportunistically combine these resources.
This paper reviews linking and annotation techniques applicable to the development of geospatial mashups services. It describes how approaches based on RDFa could supersede existing techniques for the semantic annotation of RESTful services. It highlights specific issues linked to the hybrid nature of mashups combining solutions based on XML, RDF and HTML standards and the failure risks attached to such multi-standards knowledge systems. It points out the pending technical issues, especially the ones where more coherent approaches are needed e.g. the upgrade of existing standards like XLink and SAWSDL or the integration of validation tools developed for each family of standards.
Keywords: semantic web, sensor web, geospatial standards, mashup, XLink, RDFa.
1 Introduction
As networks of sensors are increasingly used to monitor essential environmental variables for biodiversity, water, and climate change research, we need innovative approaches to simplify the integration of sensor observations from different networks into mashable web resources. Pairing geospatial standards developed by the Open Geospatial Consortium (OGC) and semantic web standards developed by the World Wide Web Consortium (W3C) can foster new approaches for applications that are not (or not yet) clear candidates as web standards.
Apart from the Keyhole Markup Language (KML), most OGC standards have been developed prior to the introduction of new mashup engines and technologies based on existing and actively developed semantic web standards. Section 3 reviews the XML, HTML and RDF-based linking and annotation methods and their applicability in this context. Two practical examples are used in Section 4 to compare the available approaches and to identify the innovative features of RDFa which are applicable to the semantic annotation of RESTful services. The discussion in Section 5 identifies failure risks which are specific to knowledge systems, including sources of interface problems likely to occur in such multi-standard setups. It also points out the pending technical issues, especially the ones where more coherent approaches are needed, e.g. the upgrade of existing standards like XLink and SAWSDL or the integration of validation tools developed for each family of standards.
2 Typology of mashups
2.1 Multi-layered mashup framework
The Model for layered integration tools proposed by Gamble and Gamble (2008) groups pre-Web, Web 1.0 and Web 2.0 technologies into three separate integration zones with decreasing level of integration effort and increasing readiness for opportunistic development. In this framework, legacy mashups require more work because the integration of pre-Web and Web 1.0 resources generally requires the development of custom-made wrappers. First generation mashup engines such as Damia, Yahoo Pipes, Popfly, or Google Mashup Editor (Di Lorenzo et al. 2009, Koschmider et al. 2009) enable the creation of opportunistic mashups based on the most popular Web 2.0 service API (Application Programmable Interfaces). These mashup engines have been very successful even if they are often tied to proprietary APIs or platforms.
1 http://www.w3.org/2005/Incubator/ssn/
Figure 1 illustrates the layered model defined by Gamble and Gamble (2008) where the two types of integration approaches cohabit. Legacy services are integrated in the first integration layer as legacy mashups. The resulting services are exploited in the second integration layer with more lightweight mashup methods.
Figure 1: Multi-layered mashups
2.2 Non-semantic mashups
Geospatial and Sensor web service-oriented platforms can combine Web 2.0 technologies like Ajax with global geospatial data resources like Google to enable the online publication of geospatial and sensor datasets and services. Mashable APIs are now available for geospatial and sensor web resources like Google Maps\(^2\) or Pachube\(^3\) and from popular GIS tools like ArcGIS\(^4\).
Figure 2 presents a simple example of multi-layered geospatial mashup. ArcGIS can be used to integrate data from OGC web services and expose it through proprietary Javascript APIs\(^5\) which can be further mashed up in Web 2.0 tools like Google maps.
Figure 2: A simple geo-mashup based on ArcGIS
2.3 Semantic mashups
The lack of extensibility of existing APIs is driving the development of the next generation of **semantic mashup** engines based on semantic web standards developed by W3C. SAWSDL (Kopecký et al. 2007) uses semantic descriptions to enable the composition of web services for **legacy semantic mashups**. These rich semantic descriptions help to compose geospatial services (Lemmens et al. 2007, Vaccari et al. 2009). Custom-made operators are often developed to transform the data from XML to RDF (Henson et al. 2009) and to better manage its provenance (Sahoo et al. 2008).
**Opportunistic semantic mashups** generally use RDF (triple stores) resources applying the Linking Open Data conventions (Bizer et al. 2007) via standard APIs based on SPARQL (Prud’hommeaux et al. 2008, Clark et al. 2008) or via proprietary query languages offered by Web-based development environments such as Metaweb ACRE\(^6\) or Yahoo Pipes\(^7\) designed to offer the possibility for end users to develop and share their mashups.
Opportunistic semantic mashups can also source data from HTML pages, especially from RDFa (Adida et al. 2008) snippets embedded in web pages. RDFa, originally designed as an extension of XHTML2 and now ported\(^8\) to HTML5\(^9\) is a hybrid method devised to sprinkle RDF data or metadata in a web page and make it available for further content aggregation down the track, e.g. at the level of search engines (Benjamins et al. 2008). Search platforms like Google and Yahoo SearchMonkey\(^10\) exploit RDFa content to improve search results and use it in search engine results as richer snippets (Goel et al. 2009).
DERI Pipes (Le Phuoc et al. 2009), MashQL (Jarrar and Dikaiakos 2009) and TopQuadrant’s SparqlMotion\(^11\) are three examples of semantic mashup engines which allow end users to chain (or pipe) simple URI-based data from XML using XQuery, from RDF using SPARQL and extract embedded RDFa and microformat data from HTML using purpose-built operators. Figure 3 presents a semantic mashup architecture implemented by Le Phuoc and Hauswirth (2009) which combines a semantic wrapper for Sensor Observation Service similar to SemSOS (Henson et al. 2009) with a SensorMasher application based on DERI pipes. In this implementation, SPARQL is used to query data from the sensor ontologies and from the sensor data streams.
Figure 3: A multi-layered semantic mashup
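As an illustration of the SPARQL-based access used in such architectures, here is a minimal Python sketch against a hypothetical sensor endpoint. The endpoint URL and property names are placeholders, not those of the systems cited above:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary; real deployments expose their own.
sparql = SPARQLWrapper("http://example.org/sensors/sparql")
sparql.setQuery("""
    PREFIX om: <http://example.org/observation#>
    SELECT ?sensor ?value WHERE {
        ?obs om:observedBy ?sensor ;
             om:hasValue ?value .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["sensor"]["value"], row["value"]["value"])
```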
2.4 Semantic enablement methods
There are four basic **semantic enablement** methods for legacy and opportunistic mashups applicable at different levels of the multi-layered scheme described in Figure 1:
- Inclusion of remote RDF (or SKOS/OWL) resources in XML using XLink,
- Annotation of web services with SAWSDL,
- Annotation of RESTful web services using hRESTs (or SA-REST, MicroWSMO),
- Inclusion of remote RDF (or SKOS/OWL) resources in HTML using RDFa.
---
\(^2\) [http://code.google.com/apis/maps/](http://code.google.com/apis/maps/)
\(^3\) [http://www.pachube.com/](http://www.pachube.com/)
\(^5\) [http://www.esri.com/javascript](http://www.esri.com/javascript)
\(^7\) [http://pipes.yahoo.com/](http://pipes.yahoo.com/)
\(^8\) [http://dev.w3.org/html5/rdfa/rdfa-module.html](http://dev.w3.org/html5/rdfa/rdfa-module.html)
\(^9\) [http://www.w3.org/TR/html5/](http://www.w3.org/TR/html5/)
The next section reviews the basic XML, HTML and RDF-based linking and annotation standards and their relevance to the four semantic enablement methods defined above. For this purpose, the following terminology is used. *Mashable content* corresponds to any type of remotely managed resource which can be used in a mashup. *Links* specify the inclusion of remotely managed resources. *Semantic annotations* define how to map service capabilities to semantic definitions to enable the discovery or composition of web services. The transition from XML-based services to RDF-based services is called a *lifting* operation (Farrell and Lausen 2007). The inverse one, from RDF to XML, is called a *lowering* operation.
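A minimal sketch of a lifting operation in Python follows, with a hypothetical XML structure and vocabulary; real lifting mappings (e.g. via SAWSDL annotations) are considerably richer:

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/sensor#")  # hypothetical vocabulary

def lift(xml_text: str) -> Graph:
    """Lift a trivial XML observation document into RDF triples."""
    graph = Graph()
    root = ET.fromstring(xml_text)
    for obs in root.findall("observation"):
        subject = EX[obs.get("id")]
        graph.add((subject, EX.hasValue, Literal(obs.findtext("value"))))
    return graph

doc = '<obs><observation id="o1"><value>21.5</value></observation></obs>'
print(lift(doc).serialize(format="turtle"))
```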
3 Linking and annotation methods
3.1 Handling mashable content with javascript
Mashable content can be extracted from XML, RDF (OWL) and HTML resources, and from RDFa snippets included in web pages. Different javascript libraries (see Table 1) can be used to process data sourced from different origins.
<table>
<thead>
<tr>
<th>Mashed up content</th>
<th>Javascript library</th>
</tr>
</thead>
<tbody>
<tr>
<td>XML resource</td>
<td>JQuery <a href="http://jquery.com/">http://jquery.com/</a></td>
</tr>
<tr>
<td>RDF resource</td>
<td>JSON <a href="http://www.json.org/">http://www.json.org/</a> used to serialise SPARQL results <a href="http://www.w3.org/TR/rdf-sparql-json-res/">http://www.w3.org/TR/rdf-sparql-json-res/</a></td>
</tr>
<tr>
<td>OWL resource</td>
<td>JOWL (JQuery extension) <a href="http://jowl.ontologyonline.org/">http://jowl.ontologyonline.org/</a></td>
</tr>
<tr>
<td>HTML snippet</td>
<td>JQuery <a href="http://jquery.com/">http://jquery.com/</a></td>
</tr>
<tr>
<td>RDFa snippet</td>
<td>rdfQuery (JQuery extension) <a href="http://code.google.com/p/rdfquery">http://code.google.com/p/rdfquery</a></td>
</tr>
<tr>
<td>Microformat snippets</td>
<td>A custom-made javascript library is needed for each different microformat</td>
</tr>
</tbody>
</table>
Table 1: Types of mashable content
Interest in RDFa is growing fast, for two reasons: it opens the prospect of extending documents without having recourse to standards organisations, and RDFa content can be added to already published web pages without forcing web site designers to change the look of their sites.
Microformats are available for a number of specific applications with various levels of popularity and support. The HTML5 Microdata proposal is an attempt to offer a generic alternative to the existing Microformat coding conventions. It is not reviewed here because this set of requirements (Hickson 2009) can be considered as a subset of the requirements addressed by RDFa.
#### 3.2 Linking methods
*Links* are defined here as mechanisms used to extend available content from any type of resource with information sourced from remotely managed content (type or instance). Links are possible between two documents of the same type or between documents of different types. Table 2 lists the techniques used to link documents to each other for a range of use cases which can occur in mashups.
<table>
<thead>
<tr>
<th>Linked resource type</th>
<th>Linking method</th>
<th>Type of link</th>
</tr>
</thead>
<tbody>
<tr>
<td>XML</td>
<td>XLink</td>
<td>XML to XML</td>
</tr>
<tr>
<td>XML</td>
<td>XLink</td>
<td>XML to URNs</td>
</tr>
<tr>
<td>XML</td>
<td>XLink</td>
<td>XML to RDF</td>
</tr>
<tr>
<td>XML</td>
<td>RDFa</td>
<td>XML to RDF</td>
</tr>
<tr>
<td>RDF</td>
<td>OWL mapping properties or weaker alternatives like umbel:isLike</td>
<td>RDF to RDF</td>
</tr>
<tr>
<td>SKOS</td>
<td>SKOS mapping properties</td>
<td>SKOS to SKOS</td>
</tr>
<tr>
<td>OWL</td>
<td>OWL mapping properties</td>
<td>OWL to OWL</td>
</tr>
<tr>
<td>HTML</td>
<td>Microformats</td>
<td>HTML to “data”</td>
</tr>
<tr>
<td>HTML</td>
<td>RDFa or Common Tag</td>
<td>HTML to RDF</td>
</tr>
</tbody>
</table>
Table 2: Linking methods
The XML Linking language, or XLink (DeRose et al. 2001), is a W3C standard which allows the creation of links between XML resources. It is commonly used in OGC standards to include references to external vocabularies managed with URNs.
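A minimal sketch of this usage is given below; the element name and the URN are illustrative and not taken from a specific OGC schema:

```xml
<!-- hedged sketch: an XLink reference from a GML-style property to a
     URN-managed vocabulary entry (element name and URN are illustrative) -->
<swe:phenomenon xmlns:swe="http://www.opengis.net/swe/1.0"
                xmlns:xlink="http://www.w3.org/1999/xlink"
                xlink:href="urn:ogc:def:phenomenon:OGC:1.0:temperature"
                xlink:title="Air temperature"/>
```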
To link RDF-based vocabularies, ontologies or Linking Open Datasets (LOD) content, the most common approach is to use the basic relationships defined in the Web Ontology Language OWL (owl:sameAs, owl:equivalentClass, owl:equivalentProperty), although for plain LOD content weaker alternatives may be preferable, like the one proposed by the UMBEL\(^{12}\) developers. SKOS\(^{13}\) offers a richer range of properties (exactMatch, closeMatch, broadMatch, narrowMatch) to specify the relationships between concepts.
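The following RDF/XML sketch illustrates both kinds of mapping properties; all resource URIs are illustrative:

```xml
<!-- hedged sketch: linking RDF resources with OWL and SKOS mapping
     properties (all URIs are illustrative) -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xmlns:skos="http://www.w3.org/2004/02/skos/core#">
  <rdf:Description rdf:about="http://example.org/id/Temperature">
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/Temperature"/>
    <skos:closeMatch rdf:resource="http://example.com/voc/airTemperature"/>
  </rdf:Description>
</rdf:RDF>
```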
#### 3.3 Semantic annotation methods
Different semantic annotation methods are needed for WSDL web services and RESTful web services.
Upgrading WSDL web services into semantically enabled services can be done with the help of SAWSDL (Kopecký et al. 2007), now a W3C Recommendation (Farrell and Lausen 2007). The SAWSDL specification has three main features (a short sketch follows the list):
- Semantic definitions (in an RDF-based format like OWL) may be included in the WSDL file.
- A small set of elements and attributes can be added in different parts of the WSDL service description to create links from XML schema elements and attributes to their *model references*, which are semantic definitions.
- Finally, additional attributes can be used to associate a schema type or element with a mapping script describing the lifting transformation from XML to RDF and the lowering transformation from RDF to XML.
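A minimal sketch of these annotations on an XML schema element is given below; the model reference URI and the script URLs are illustrative, only the sawsdl attributes and namespace come from the specification:

```xml
<!-- hedged sketch: SAWSDL annotations on a schema element; the model
     reference and the lifting/lowering script URLs are illustrative -->
<xs:element name="temperature" type="xs:float"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:sawsdl="http://www.w3.org/ns/sawsdl"
    sawsdl:modelReference="http://example.org/onto/weather#Temperature"
    sawsdl:liftingSchemaMapping="http://example.org/scripts/temp2rdf.xslt"
    sawsdl:loweringSchemaMapping="http://example.org/scripts/rdf2temp.xslt"/>
```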
Upgrading REST web services into semantically enabled services requires different tools, because the service declaration is generally made within an HTML web page and does not use an XML-based description format. SA-REST (Latham et al. 2007, Sheth et al. 2007) and MicroWSMO (Kopecký et al. 2009) are two related efforts which use the same semantic annotation microformat, hRESTs (Kopecký 2008). The SA-REST approach is more closely related to the SAWSDL standard while MicroWSMO uses a different ontology: WSMO-Lite.
---
\(^{12}\) [http://www.umbel.org/](http://www.umbel.org/)
\(^{13}\) [http://www.w3.org/TR/skos-reference/](http://www.w3.org/TR/skos-reference/)
#### 3.4 Types of lifting operations
GRDDL (Connolly 2007) defines the syntax to embed a reference to a lifting script in any type of well-formed XML format. The file to which the GRDDL annotation has been added is used as the input of the specified lifting operation; the RDF output depends on the location of the GRDDL markup. If the corresponding transformation is available, any HTML file containing microformat-based annotations can use this mechanism to be transformed into RDF.
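A minimal sketch, assuming a hypothetical document element and stylesheet URL:

```xml
<!-- hedged sketch: a GRDDL annotation referencing the lifting script;
     the document element and the stylesheet URL are illustrative -->
<dataset xmlns:grddl="http://www.w3.org/2003/g/data-view#"
         grddl:transformation="http://example.org/scripts/extract-rdf.xsl">
  <!-- domain content lifted into RDF by the referenced transformation -->
</dataset>
```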
SAWSDL, SA-REST and MicroWSMO also require the development of custom-made scripts. A major difference is that these scripts specify how to process the XML data manipulated by the service, not the content of the file containing the annotations.
RDFa defines a generic lifting mechanism to transform the annotations included in an HTML file into RDF. In this case, there is no need for user-developed scripts.
Lifting scripts may use languages like XSLT\(^{14}\) or XQuery\(^{15}\). Lowering scripts may use hybrid approaches like XSPARQL (Akhtar et al. 2008), a W3C Member Submission\(^{16}\) which mixes XQuery and SPARQL. RDFa users can also rely on alternative implementations such as the ones available in javascript (Table 1).
### 4 Comparison of key linking methods
A short summary of the key features of each method is provided below. Two examples are then compared directly to complete this analysis with respect to two critical issues:
- Choice between the hRESTs microformat and RDFa for the semantic annotations of REST-based services and consistency of these approaches with existing ones (SAWSDL).
- Choice between XLink and RDFa as the linking technique used for XML content.
The first example focuses on semantic annotation requirements to guide the future work on REST services and also bridge the gap between these new methods and what can currently be used for WSDL.
The second example illustrates the differences between the XML-friendly solution based on XLink and the alternative approach based on RDFa.
### 4.1 Key attributes for each approach
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
<th>Intended RDF</th>
</tr>
</thead>
<tbody>
<tr>
<td>about</td>
<td>The identification of the resource (to state what the data is about)</td>
<td>rdf:about of domain resource</td>
</tr>
<tr>
<td>typeof</td>
<td>RDF type(s) to associate with a resource</td>
<td>rdf:about of class of a resource</td>
</tr>
<tr>
<td>href</td>
<td>Partner resource of a relationship (resource object)</td>
<td>rdf:about of range resource</td>
</tr>
<tr>
<td>property</td>
<td>Relationship between a subject and some literal text ('predicate')</td>
<td>rdf:about of datatype property</td>
</tr>
<tr>
<td>rel</td>
<td>Relationship between two resources ('predicate')</td>
<td>rdf:about of object property</td>
</tr>
<tr>
<td>rev</td>
<td>Reverse relationship between two resources ('predicate')</td>
<td>rdf:about of (inverse) object property</td>
</tr>
<tr>
<td>src</td>
<td>Base resource of a relationship when the resource is embedded ('resource object')</td>
<td>rdf:about of domain resource</td>
</tr>
<tr>
<td>resource</td>
<td>Partner resource of a relationship that is not intended to be 'clickable' ('object')</td>
<td>rdf:about of range resource</td>
</tr>
<tr>
<td>datatype</td>
<td>Datatype of a property</td>
<td>XML type range of datatype property</td>
</tr>
<tr>
<td>content</td>
<td>Machine-readable content ('plain literal object')</td>
<td>Value for datatype property</td>
</tr>
</tbody>
</table>
Table 3: RDFa attributes
In RDFa, the about and resource attributes play the role of the rdf:about and rdf:resource attributes in RDF. They can be encoded as compact URIs, or CURIEs (Birbeck and McCarron 2009), a syntax inspired by the prefix management conventions used in SPARQL. The content of a datatype property can be included as an extra attribute (content) or retrieved from the element content.
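A minimal RDFa sketch using these attributes and CURIEs is shown below; the FOAF vocabulary and the resource URIs are used purely for illustration:

```xml
<!-- hedged sketch: RDFa with CURIEs (vocabulary and URIs illustrative) -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="http://example.org/people/alice" typeof="foaf:Person">
  <span property="foaf:name">Alice</span>
  knows <a rel="foaf:knows" href="http://example.org/people/bob">Bob</a>
</div>
```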
**hRESTs**: hRESTs focuses on the capture of mapping information between the service description and a reference ontology. The additional information is provided through the coding of the lifting script applicable to the service outputs. The hRESTs microformat specification used here is the one published by Kopecký et al. (2009) and the associated examples.
---
\(^{14}\) [http://www.w3.org/TR/xslt20/](http://www.w3.org/TR/xslt20/)
\(^{15}\) [http://www.w3.org/TR/xquery/](http://www.w3.org/TR/xquery/)
\(^{16}\) [http://www.w3.org/Submission/2009/01/](http://www.w3.org/Submission/2009/01/)
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
<th>Intended RDF</th>
</tr>
</thead>
<tbody>
<tr>
<td>class</td>
<td>Type of XML or WSDL element (service, operation, address, method, input, output, label)</td>
<td>rdf:about of class of domain resource</td>
</tr>
<tr>
<td>href next to rel="model"</td>
<td>association between a WSDL or XML schema component and a concept in some semantic model</td>
<td>rdf:about of range class = modelReference</td>
</tr>
<tr>
<td>href next to rel="lifting"</td>
<td>Lifting script URL</td>
<td>N/A</td>
</tr>
<tr>
<td>href next to rel="lowering"</td>
<td>Lowering script URL</td>
<td>N/A</td>
</tr>
<tr>
<td>id</td>
<td>Locally declared id of WSDL element (to be combined with the document URL)</td>
<td>rdf:about of domain resource</td>
</tr>
</tbody>
</table>
Table 4: hRESTs Microformat attributes
The hRESTs microformat mandates the use of blocks with class attributes in a rigid parent-child hierarchy (e.g. a service contains an operation), which is implicitly transposed in the resulting RDF file.
**XLink:** For the purpose of this review, we will use the XLink guidelines documented for the Geography Markup Language standard (Portele 2007) rather than the original W3C XLink specification (DeRose et al. 2001). Table 5 summarises the attributes defined by this specification.
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
<th>Intended RDF</th>
</tr>
</thead>
<tbody>
<tr>
<td>xlink:href</td>
<td>Identifier of the resource which is the target of the association, given as a URI</td>
<td>rdf:about of range resource</td>
</tr>
<tr>
<td>xlink:role</td>
<td>Nature of the target resource, given as a URI</td>
<td>rdf:about of class of range resource</td>
</tr>
<tr>
<td>xlink:arcrole</td>
<td>Role or purpose of the target resource in relation to the present resource, given as a URI</td>
<td>rdf:about of object property linking domain element to range resource</td>
</tr>
<tr>
<td>xlink:title</td>
<td>Text describing the association or the target resource</td>
<td>rdfs:comment</td>
</tr>
</tbody>
</table>
Table 5: XLink attributes
### 4.2 Feature comparison: hRESTs and RDFa
Kopecký et al. (2009) also specify how hRESTs can be expressed in RDFa. Table 6 is based on this input. The main difference is that hRESTs in RDFa allows the user to specify the target ontology through the definitions of the typeof, rel, property and datatype attributes.
<table>
<thead>
<tr>
<th>RDF mapping</th>
<th>hRESTs in Microformats</th>
<th>hRESTs in RDFa</th>
</tr>
</thead>
<tbody>
<tr>
<td>Domain instance</td>
<td>id (URL-prefixed)</td>
<td>about</td>
</tr>
<tr>
<td>Domain class</td>
<td>class (closed list)</td>
<td>typeof</td>
</tr>
<tr>
<td>Object property</td>
<td>rel="model"</td>
<td>rel</td>
</tr>
<tr>
<td>Inverse object property</td>
<td></td>
<td>rev</td>
</tr>
<tr>
<td>Range instance</td>
<td></td>
<td>href or resource</td>
</tr>
<tr>
<td>Range class</td>
<td>href</td>
<td>typeof</td>
</tr>
<tr>
<td>Datatype property</td>
<td></td>
<td>property</td>
</tr>
<tr>
<td>Datatype property type</td>
<td></td>
<td>datatype</td>
</tr>
<tr>
<td>Range value</td>
<td></td>
<td>content or element content</td>
</tr>
</tbody>
</table>
Table 6: Comparison of RDFa and hRESTs
### 4.3 Feature comparison: XLink and RDFa
The direct comparison done in Table 7 can help to locate the major difference between XLink and RDFa which is that the two specifications cover different types of RDF triples:
- **XLink:** predicate (arcrole) and object (href) for object properties
- **RDFa:** subject (about), predicate (rel) and object (href) for object properties and subject (about), predicate (property) and object (content or element content) for datatype properties
<table>
<thead>
<tr>
<th>RDF mapping</th>
<th>XLink</th>
<th>RDFa</th>
</tr>
</thead>
<tbody>
<tr>
<td>Domain instance</td>
<td></td>
<td>about or src</td>
</tr>
<tr>
<td>Domain class</td>
<td></td>
<td>typeof</td>
</tr>
<tr>
<td>Object property</td>
<td>arcrole</td>
<td>rel</td>
</tr>
<tr>
<td>Inverse object property</td>
<td></td>
<td>rev</td>
</tr>
<tr>
<td>Range instance</td>
<td>href</td>
<td>href or resource</td>
</tr>
<tr>
<td>Range class</td>
<td>role</td>
<td>typeof</td>
</tr>
<tr>
<td>Datatype property</td>
<td></td>
<td>property</td>
</tr>
<tr>
<td>Datatype property type</td>
<td>role</td>
<td>datatype</td>
</tr>
<tr>
<td>Range value</td>
<td></td>
<td>content or element content</td>
</tr>
</tbody>
</table>
Table 7: Comparison of XLink and RDFa
### 4.4 Examples of semantic annotations
The National Digital Forecast Database\(^{17}\) is a web service developed by the U.S. National Weather Service to test the Digital Weather Markup Language (DWML). This forecast service (see also Al-Muhammed et al. 2007) is used here because it is simultaneously implemented as a WSDL service and as a REST service. Figure 4 shows an example of SAWSDL annotation in the WSDL file.
---
\(^{17}\) [http://www.nws.noaa.gov/ndfd/technical.htm](http://www.nws.noaa.gov/ndfd/technical.htm)
Table 8 lists the concepts defined in the SWEET 2.0 ontologies\(^ {18} \) which can be used as model references for the message parts of the NDFDgen operation. Model references for service parameters like the product type (time series or “glance”) and the output type are specific to DWML and are not available in SWEET 2.0.

Many REST services are only documented through a web page. This is why semantic annotation methods like SA-REST or MicroWSMO must be able to use any type of web page describing a service. The two options are to annotate the HTML page (or form) used to run the service (Figure 5) or a “WSDL-inspired” documentation page (Figure 6).

The two following examples present two types of annotations: hRESTs Microformat (Figure 7), and RDFa (Figure 8) applicable to the HTML form.
The hRESTs example (Figure 7) only includes semantic references for the sawsdl:modelReference attributes of SAWSDL. While the hRESTs solution may seem easier to use, it also requires extra effort from the end user to learn how the class annotations used in the microformat (operation, action, input ...) map to the ontology used for the generated RDF content. This mapping may depend on the hRESTs toolset and on the availability of custom-made lifting and lowering scripts.
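As an indication of what such an annotation can look like, the hedged sketch below combines the hRESTs class attributes of Table 4 with a rel="model" reference; the address URL and the referenced concept are illustrative:

```xml
<!-- hedged sketch: an hRESTs-annotated service description
     (the address and the model reference are illustrative) -->
<div class="service" id="ndfd">
  <span class="label">NDFD forecast service</span>
  <div class="operation" id="op1">
    <code class="method">GET</code>
    <code class="address">http://example.org/forecasts/xml/sample.php</code>
    <span class="input">latitude, longitude, product</span>
    <span class="output">DWML document</span>
    <a rel="model" href="http://example.org/onto/weather#ForecastOperation">
      model reference</a>
  </div>
</div>
```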

\(^{18} \) [http://sweet.jpl.nasa.gov/ontology/](http://sweet.jpl.nasa.gov/ontology/)
The RDFa example (Figure 8) includes semantic references defining the type of the annotations (e.g. sarest:operation). This approach gives the end user more control over the choice of the service ontology and simplifies the programming of the tools which interpret the annotations. The RDFa specification (Adida et al. 2008) defines processing rules which help to combine these two types of semantic references seamlessly.
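A corresponding hedged RDFa sketch is shown below; the sarest namespace URI is illustrative, only the general pattern follows the text above:

```xml
<!-- hedged sketch: the same operation annotated in RDFa
     (the sarest namespace URI is illustrative) -->
<div xmlns:sarest="http://example.org/ns/sarest#"
     about="#op1" typeof="sarest:operation">
  <span property="sarest:method">GET</span>
  <a rel="sarest:modelReference"
     href="http://example.org/onto/weather#ForecastOperation">model</a>
</div>
```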
### 4.5 Examples of semantic links
OGC standards like GML (Portele 2007) define the use of XLink to add annotations in XML files. These annotations can point to extra sources of information (e.g. a file) or to Uniform Resource Names (URNs).
The first use case is described in the GML specification as “composition by inclusion of remote resources”: in this case, the XLink annotation uses the xlink:href attribute to reference an external file containing additional data (Figure 9).
The second use case corresponds to the inclusion of a “model reference to an ontological description”. In this case, the XLink annotation uses the xlink:arcrole attribute to define the type of the referenced object (Figure 11). The definition attribute in the SWE schemas and the descriptionReference attribute in the GML schemas are scoped for this particular usage.
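The two use cases can be sketched together as follows (a hedged reconstruction; the element names, the arcrole value and all URIs are illustrative):

```xml
<!-- hedged sketch: XLink used for inclusion (use case 1) and for a
     model reference (use case 2); names and URIs are illustrative -->
<swe:component xmlns:swe="http://www.opengis.net/swe/1.0"
               xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- use case 1: composition by inclusion of a remote resource -->
  <swe:data xlink:href="http://example.org/data/record-42.xml"/>
  <!-- use case 2: model reference to an ontological description -->
  <swe:phenomenon
      xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"
      xlink:href="http://example.org/onto/weather#Temperature"/>
</swe:component>
```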
The examples above show that the current use of XLink in OGC schemas can be mirrored in RDFa.
In our generalised mashup approach, the semantic annotations should be exploitable by generic or user-defined lifting operators to create the corresponding RDF statements. When this RDF is lowered back into XML, there is a risk of losing some of the information previously available. XLink can be used to maintain some of this lowered content. Table 7 defines the mappings between the two approaches which are possible with the present XLink specification. It also shows that there are other usages which are possible in RDFa but not in the “simple” style of XLink.
### 5 Directions for future work
### 5.1 Guidelines for the application of hRESTs
For RESTful services, the format of the HTML content which should be annotated is not specified by the proposed specifications; this issue should be addressed. For the part of the description which explains how to run the service, the form-embedded annotation approach is generally preferable to the description-based one, because the annotated form can still be used to test that the service works. For the part of the description which covers the output data (results and error messages), a different approach is required, based either on an embedded XML schema (this is what WADL does) or on another form of testable content.
### 5.2 SAWSDL vs. hRESTs in RDFa
The relative complexity and rigidity of the SAWSDL and hRESTs Microformat specifications contrast with the flexibility of the approaches based on RDFa (e.g. hRESTs in RDFa), where the choice of the service ontology can be made by the end user without requiring any new developments for the lifting of the semantic annotations into semantic web tools.
This extra flexibility is important not just for RESTful services. Further work is required to upgrade SAWSDL so that it can also let end users select the service ontology they want if they are not satisfied by the definitions brought by the SA-REST or WSMO-Lite ontologies.
### 5.3 Ontologies for other types of services
Other service description languages like WADL (Hadley 2009) and WSDL 2.0\(^{20}\) may provide a better basis for RESTful services. The hybrid ontology and rule-based framework proposed by Zhao and Doshi (2009) handles three categories of composable RESTful services used to add, access and transform resources.
SensorML (Botts and Robin 2007) is an OGC-developed markup language for the description of sensors. It includes a process model which is comparable to the other service ontologies discussed above. The challenge for the W3C Semantic Sensor Network Incubator Activity is to develop an ontology describing sensor services based on SensorML and use it for semantic annotations in a context where the boundary between the application-specific ontologies and the service ontologies and between non-semantic and semantic mashups is harder to define.
### 5.4 Replacement of custom-made lifting scripts
Any solution requiring the development of custom-made lifting mechanisms should be avoided when alternative approaches based on standards which fully specify this critical step, like RDFa, are available. The dependency on user-developed transformations for the lifting scripts is one of the factors which have slowed down the adoption of semantic annotation standards for services like SAWSDL and hRESTs/SA-REST/MicroWSMO.
As discussed above, the hRESTs in RDFa format provides a generic approach for the transformation of the semantic annotations into an RDF-based format; it should be possible to develop a similar approach for SAWSDL and thereby remove the requirement to develop custom-made scripts for this purpose.
However, it is not yet possible to automatically derive the lifting script for the second type of lifting operation discussed in 3.4, where the goal of the script is to process the XML data manipulated by the service and not the file containing the annotations. The MyMobileWeb project (Berrueta et al. 2009) has been looking at RDFa for a similar problem: describing the bindings to data sources to enable multi-device mobile access to semantically enriched information portals.
### 5.5 Controlled upgrade of legacy standards
Ad hoc semantic upgrade of legacy standards such as XLink should be monitored closely to minimise the risks of failure caused by problematic extensions by end users.
In many cases, techniques bound to one family of standards (XML) have been later adapted to a different context without any assurance that the new usage respects the original intent of the specification. Hybrid ad hoc approaches may also import conflicting or ambiguous definitions from different standard families.
Some parts of SensorML use XLink annotations to embed a “model reference to an ontological description” in the sensor description (e.g. swe:phenomenon). These use cases are a possible source of confusion because they address requirements which can potentially be better served by new approaches based on semantic web technologies.
For example, to handle all the annotation requirements identified for RDFa in an XML context, a simple approach would be to add a new “style” to XLink for RDFa, as an extension to the current XLink specification. For organisations like OGC, who already use XLink and maintain a large number of XML schemas, this approach would have two advantages:
- To limit the impact on existing schemas to changes in the XLink schema,
- To provide a mechanism to isolate semantic XLink snippets from normal ones.
This upgrade of XLink should not be done without a careful consideration of the present usage of XLink in OGC standards and also in other standards like SVG\(^\text{21}\).
### 5.6 Failure risk analysis
Combining legacy and opportunistic mashups will require robust and mashable validation tools to prevent and diagnose failures. Opportunistic mashups depend on external resources which may disappear or evolve without notice, especially mashable services and semantic resources, so the risks of failure are greater and more diverse than in other environments.
In a multi-layered mashup environment, it is important to support validation at every possible step of integration and to leverage the validation methods specific to each family of standards: XML, HTML and RDF. In this context, it is very important to check the availability of validators, their ability to check both the content (markup validators) and the added annotations or links to remote resources, and the flexibility and robustness of these tools.
The Unicorn\(^{22}\) (Universal Conformance Observation and Report Notation) project at the W3C is a validator mashup combining an HTML validator, a CSS validator and an HTML link checker. Extending this approach to the other families of W3C\(^{23}\) and OGC standards used in the type of mashups discussed above would be very useful.
### 6 Conclusion
There are multiple semantic enablement techniques which can be used in geospatial and semantic standards for legacy and opportunistic mashups. For the insertion of semantic links in XML content formatted according to OGC standards, the least disruptive approach identified in this review may be to add a new style to the existing XLink specification, transposing all the RDFa attributes and processing rules defined for the HTML context.
The hRESTs-in-RDFa annotation format is preferred for the annotation of RESTful services. The arguments formerly raised (Graf 2007) to prefer Microformats over RDFa for adding semantic annotations or links to HTML have been invalidated by the W3C decision to make RDFa available in HTML5. The analysis presented above shows that solutions based on Microformats prevent the implementation of generic lifting services with scripting languages such as XSL Transformations (XSLT), XQuery or XSPARQL, or with javascript libraries like rdfQuery, which play an essential role in opportunistic mashups.
---
\(^{20}\) [http://www.w3.org/TR/wsdl20/](http://www.w3.org/TR/wsdl20/)
\(^{21}\) [http://www.w3.org/Graphics/SVG/](http://www.w3.org/Graphics/SVG/)
\(^{22}\) [http://www.w3.org/QA/Tools/Unicorn/](http://www.w3.org/QA/Tools/Unicorn/)
\(^{23}\) W3C specifications and validators are listed in [http://www.w3.org/QA/TheMatrix](http://www.w3.org/QA/TheMatrix)
The SAWSDL specification should also be upgraded to offer the same possibility for the user to select the service ontology.
Finally, in complex mashups the risk of failure is greater, and the validation methods differ for standards belonging to the XML, HTML and RDF families. There should be a limited number of methods for combining these standards, to lower the cost of developing new markup validators and link checkers. If possible, these new validation services should also be mashable, to simplify the creation of more integrated validation services.
### 7 References
Reasoning about explicit strictness in a lazy language using mixed lazy/strict semantics
Marko van Eekelen, Maarten de Mol
Department of Computer Science
University of Nijmegen, the Netherlands
{marko,maartenm}@cs.kun.nl
Abstract. Many functional programmers are familiar with the concept of enforcing strictness for making applications fit their time and space efficiency requirements. Few functional programmers however, are familiar with the consequences of enforcing strictness for formal reasoning about their programs.
This paper attempts to fill the gap between the few and the many. Some typical examples are given of the use and the meaning of explicit strictness. We show how formal reasoning can be made easier by the introduction of auxiliary functions in the program.
John Launchbury’s natural semantics for lazy evaluation [Lau93] is extended with an explicit strict let construct. We show that the corresponding rule extends the semantics in a natural way. In fact, using our mixed semantics it is possible to express in the language itself the semantical difference between $\Omega$ and $\lambda x.\Omega$, while in Launchbury’s model these two expressions can only be distinguished from outside the language.
### 1 Introduction
Discrepancies between formal reasoning and implementation considerations can be solved by including formal reasoning in the programming process. Although it has often been stated that functional programming languages are well suited for formal reasoning, there is in practice little specific support for reasoning about functional programs. Of course, there are well-established theorem provers, such as PVS[ONRSC99], Coq[Tea98] and ISABELLE[Pau01], but they are set up for theoreticians. They do not support the full semantics of functional languages and can only be used if the program is translated first, making them difficult to use for a programmer. To make formal reasoning available for functional programmers, recently SPARKLE[dMvEP01] was developed.
SPARKLE is a semi-automatic theorem prover that can be used to reason about any program written in the functional language CLEAN[vEP98]. It supports all functional concepts and has a semantics based on lazy graph rewriting. Using SPARKLE programmers can easily state and prove properties of parts of programs. This on-the-fly proving can only be accomplished if reasoning requires little effort and time. This is already achieved by SPARKLE for smaller programs, mainly due to the possibility to reason on source code level and the
support for automatic proving. It is the intention that Sparkle will be further integrated in the language Clean and its IDE. Sparkle can be downloaded at http://www.cs.kun.nl/~clean and at http://www.cs.kun.nl/sparkle.
It is the experience gained in the Sparkle project that gave rise to the authors’ opinion that more theoretical and pragmatic background was needed for formal reasoning with explicit strictness in a lazy language.
This paper deals with semantical aspects concerning strictness in the context of lazy functional languages in general and of the languages Haskell[Hud00] and Clean in particular. Programming examples will be written in Clean.
In section 2 the need for mixed lazy/strict semantics is motivated. Section 3 first introduces the required semantical definitions; then the required properties, such as correctness and computational adequacy, are shown to hold.
An example of a proof using these mixed semantics can be found in section 4. Finally, sections 5 and 6 discuss related work and give concluding remarks.
### 2 Mixed lazy and strict reasoning
### 2.1 Explicit strictness
Although it is seldom mentioned in papers and presentations, explicit strictness is present in almost every lazy language (and in almost every program) that is used in real-world examples.
In these programs, strictness is used:
- for improving the efficiency of data structures (e.g. strict lists),
- for improving the efficiency of evaluation (e.g. functions that have arguments which are declared strict due to strictness analysis or due to programmer annotations),
- for enforcing the evaluation order in interfacing with the outside world.
Language features that are used to denote this strictness include:
- type annotations (in functions and in data structures: Clean),
- special data structures (unboxed arrays: Clean, Haskell),
- special primitives (force and seq: Haskell),
- special inside implementations (monads: Clean, Haskell),
- special language constructs (let!, #!: Clean),
- special tools (strictness analyzer: Clean).
Implementers of real-world applications make it their job to know about strictness aspects, because this is essential to make their applications satisfy the efficiency requirements. For reasoning about these programs, however, they tend to forget strictness altogether. This can lead to unexpected non-termination when programs are changed by hand or transformed automatically relying on such reasoning. For reasoning with strictness, there is only little theory and little practical guidance available so far.
Definition 1 (Mathematical Strictness). A function \( f \) is mathematically strict in an argument \( x \) if the result of \( f \) applied to \( x \) is undefined if \( x \) is undefined.
Usually, this is denoted as follows:
\[
f \perp = \perp
\]
The identity function and the equality function that are commonly found in functional languages, are usually mathematically strict in (both) arguments.
Definition 2 (Operational Strictness). A function \( f \) is operationally strict in an argument \( x \) if the argument is always reduced to weak head normal form before the function application is evaluated and the result of the function is undefined if the argument is undefined.
If a function is mathematically strict in its argument then that function can be made operationally strict without changing its semantics.
Definition 3 (Notational Strictness). A function \( f \) is notationally strict in an argument \( x \) if the argument is somehow explicitly annotated as such.
In CLEAN strictness is denoted by adding an \(!\) before the argument in the type of the function (or data structure).
When strictness is taken into account, arguments of a function \( f \) that are notationally strict will not only be interpreted as operationally strict by the compiler, but also as mathematically strict by the semantics. Consequently, **when a mixed semantics is used, notational strictness implies both operational and mathematical strictness**!
It is often recommended to use notational strictness in mixed semantics only when mathematical strictness already holds in the lazy semantics (because the meaning of the program is not affected then). However, this recommendation is in many cases (interfacing, strict data structures, efficiency) not sensible at all, since in these cases it is simply the intention to change the meaning of the program from lazy to strict.
### 2.2 Some reasoning examples using explicit strictness
When a strict semantics is used, many “intuitive” properties suddenly turn out to be untrue. For instance, the following property, which describes a relation between the mathematical = and the operational ==, does not hold:
\[
\forall a \in \text{Eq} \forall x \in a \forall y \in a \left[ x = y \iff (x \equiv y) = \text{True} \right]
\]
If both \( x \) and \( y \) are undefined, then the equality \( x = y \) holds but the expression \( x \equiv y \) is neither False nor True but undefined. The reason is that the function \( \equiv \) is a predefined operationally strict function.
If strictness is indicated in the type information by adding a !-annotation before the strict argument (as is done in CLEAN), then the type of \( == \) is specified as follows:
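A sketch of the declaration, following the style of CLEAN’s standard environment (the exact header and fixity may differ):

```clean
// overloaded equality with both arguments annotated as strict
class == a :: !a !a -> Bool
```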
A property that does hold is the following:
\[\forall a \in \text{Eq} \forall x \in a \forall y \in a [x \neq \bot \land y \neq \bot \Rightarrow (x = y \Leftrightarrow x == y)]\]
Many other properties like e.g.
\[\forall a \in \text{Eq} \forall x \in a \forall y \in a [x == y \Leftrightarrow y == x]\]
do hold unconditionally.
Note that the function == may not be mathematically strict on all objects. Suppose we are comparing strings (represented as unboxed arrays of characters). If the first character of the first argument differs from the first character of the second argument, then the result could mathematically be determined regardless of whether the other characters are defined or not.
An indication of the need for additional strictness conditions can be obtained from the standard library of Sparkle: of the 115 supplied theorems, no fewer than 19 have one or more strictness conditions.
A statement that is not present in this library is the following:
\[\forall x \forall xs \forall p\, [\text{isMember } x\ (\text{filter } p\ xs) = \text{isMember } x\ xs \wedge p\ x]\]
The CLEAN definitions that this property refers to are listed below:
```clean
isMember :: a ![a] -> Bool | Eq a
isMember x [y:ys] = y == x || isMember x ys
isMember x []     = False

filter :: (a -> Bool) ![a] -> [a]
filter p [x:xs]
    | p x       = [x : filter p xs]
    | otherwise = filter p xs
filter p []     = []
```
The definitions above make use of several CLEAN-specific notations and features:
- "| Eq a" denotes a class restriction (HASKELL: (Eq a) => ...);
- "[ ... : ... ]" for lists (HASKELL: :, an infix constructor);
- "!" indicates notational strictness, so isMember x ⊥ = ⊥ and filter p ⊥ = ⊥;
- The functions "==" and "||" are defined in the standard library (StdEnv) of CLEAN and are notationally strict ("==" in both arguments, "||" only in the first argument).
Note that due to laziness isMember ⊥ [ ] = False; so, isMember ⊥ [ ] ≠ ⊥. Similarly, due to laziness it holds that True || ⊥ = True, but due to notational strictness ⊥ || True = ⊥.
In the rest of this paper, we will use the following abbreviation\(^1\):
\[ P(x, xs, p) := \text{isMember } x\ (\text{filter } p\ xs) = \text{isMember } x\ xs \wedge p\ x \]
\(^1\) Note that for predicates the “= True” is not explicitly written anymore.
Intuitively, many lazy functional programmers will consider \( \forall x \forall xs \forall p [P(x, xs, p)] \) to be valid without restrictions. There are several situations, however, where it fails:
1. \( x = \bot \land xs \neq [] \land \forall y : p y = \text{False} \)
Then:
- \( \text{filter } p \ xs = [] \)
- \( \text{isMember } x (\text{filter } p \ xs) = \text{False} \)
- \( \text{isMember } x \ xs = \bot \)
- \( p \ x = \text{False} \)
So, \( \text{False} \neq \bot \land \text{False} \).
2. \( x \neq \bot \land xs = [\bot, x] \land \forall y : p y = \text{False} \)
Then:
- \( \text{filter } p \ xs = [] \)
- \( \text{isMember } x (\text{filter } p \ xs) = \text{False} \)
- \( \text{isMember } x \ xs = \bot \)
- \( p \ x = \text{False} \)
So, \( \text{False} \neq \bot \land \text{False} \).
3. \( xs = [] \land \forall y : p y = \bot \)
Then:
- \( \text{filter } p \ xs = [] \)
- \( \text{isMember } x (\text{filter } p \ xs) = \text{False} \)
- \( \text{isMember } x \ xs = \text{False} \)
- \( p \ x = \bot \)
So, \( \text{False} \neq \text{False} \land \bot \).
In fact, several conditions are required to ensure that \( P \) holds:
\[ [x \neq \bot \wedge \text{IsFiniteAndFullyNonBottom}(xs) \wedge (\forall y : \text{Total}(p, y))] \rightarrow P(x, xs, p) \]
This last statement introduces two special conditions, *IsFiniteAndFullyNonBottom* and *Total*. It is, however, not easy to formalize these conditions. One needs to define a special class of functions eval that return True for completely defined arguments (i.e. arguments reduced completely to normal form) and that are undefined otherwise. This can be done in CLEAN using notational strictness as follows (the definitions below are present in an extension of StdEnv: StdSparkle):
```clean
// StdSparkle
class eval a :: !a -> Bool

// similar instances are available for other types
instance eval Int where
    eval :: !Int -> Bool
    eval x = True

instance eval [a] | eval a where
    eval :: ![a] -> Bool | eval a
    eval []     = True
    eval [x:xs] = eval x && eval xs
```
Using this eval function, the last property with the predicates *IsFiniteAndFullyNonBottom* and *Total* can be expressed as follows:
\[ \text{eval } x \land \text{eval } xs \land \forall y : \text{eval } y \rightarrow \text{eval } (p y) \rightarrow P(x, xs, p) \]
Note that the condition \( \forall y : \text{eval } y \rightarrow \text{eval } (p y) \) can be weakened: it only needs to hold for all \( y \) in the list \( xs \). This can be expressed using the following auxiliary function:
```clean
evalFilter :: (a -> Bool) ![a] -> Bool
evalFilter p []     = True
evalFilter p [x:xs] = eval (p x) && evalFilter p xs
```
The complete statement can now be expressed in (and, of course, also proved by) Sparkle as follows:
\[ \text{eval } x \land \text{evalFilter } p xs \rightarrow P(x, xs, p) \]
By expressing properties about auxiliary CLEAN functions it is possible to write quite expressive and elegant statements. Another example of such a useful function is given below (it expresses finiteness of an argument list).
```clean
finite :: ![a] -> Bool
finite [x:xs] = finite xs
finite []     = True
```
### 2.3 Extensionality
The property of extensionality is often considered to be universal.
Unfortunately, there is a (rather obscure) example of a function for which the property of extensionality does not hold. This example does not make use of strictness and it is therefore valid both for lazy and mixed semantics.
```clean
// example of invalid extensionality
H :: a -> b
H x = H x

F :: a
F = F
```
With the definitions above \( F x = \text{H } x \) for all \( x \) since the meaning of both is undefined. Surprisingly, the property \( F = H \) does not hold, since \( H \) has a weak head normal form (and is thus defined) while \( F \) is undefined. It is therefore not safe to replace \( F \) by \( H \) in programs.
The problem can be corrected by strengthening the property of extensionality as follows:
**Definition 4 (Extensionality).**
\[ (f = \bot \iff g = \bot) \Rightarrow [ \forall x \ f x = g x \Rightarrow f = g ] \]
This extra condition is needed in lazy semantics as well as in mixed semantics. So, in fact referential transparency is *conditional*.
### 2.4 Reducing the workload to one single construct
As seen in section 2.1, there are several ways to express strictness in a functional program. Because the underlying concept is the same, it is possible to translate all these alternatives to one single, universal strictness construct.
This universal construct is the strict let, a non-recursive strict variant of a normal let: it forces the stored expression to be evaluated to weak head normal form before evaluation continues with the body of the let expression. It is denoted in Clean by #!.
Below we list typical examples of translations to the strict let. It is not our intention to completely describe the most efficient translation that a compiler would use. We just want to show that these different kinds of strictness can all be translated to a single construct.
First consider a typical example concerning a strictness annotation in the type of a function \( F \). The idea is simply to add a #! for each strict argument in each application of \( F \).
```clean
// expressing function strictness using a strict let construct in Clean
// F type definition with strict annotation
F :: a !b -> c

Start = F x y
// is replaced by
Start
    #! y_strict = y
    =  F x y_strict
```
In general, \( E(F\ e_1\ e_2) \) is replaced by \( E(\text{let! } y_{strict} = e_2 \text{ in } F\ e_1\ y_{strict}) \)\(^2\).
Then, consider a typical example of a (partially) strict data type definition:
```clean
// expressing data type strictness using a strict let construct in Clean
// type definition with strict annotation: tail strict lists
:: TailStrictList a = TCCons a !(TailStrictList a) | TCNil

Start = TCCons a as
// is replaced by
Start
    #! as_strict = as
    =  TCCons a as_strict
```
Again, we add a #! for each strict argument in each application of \( TCCons \). In general, \( E(TCCons\ e_1\ e_2) \) is replaced by \( E(\text{let! } y_{strict} = e_2 \text{ in } TCCons\ e_1\ y_{strict}) \)\(^3\). So, the general transformation for data constructors is quite similar to the one for functions.
The required syntactical transformations can all easily be formalized. So, semantically only the extension with a rule for a non-recursive strict let (denoted in the semantics as let!) is needed in order to express all these different kinds of explicit strictness.
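For instance, the seq primitive of HASKELL (mentioned in section 2.1) can be mimicked in the same way; mySeq below is a hypothetical helper, not part of any standard library:

```clean
// hypothetical helper: force x to weak head normal form, then return y
mySeq :: a b -> b
mySeq x y
    #! x_whnf = x
    =  y
```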
### 3 Natural semantics with explicit strictness
We first recall the basic semantical rules of Launchbury’s natural semantics [Lau93].
\[
\Gamma : \lambda x.e \Downarrow \Gamma : \lambda x.e \qquad \textit{Lambda}
\]
\[
\frac{\Gamma : e \Downarrow \Delta : \lambda y.e' \qquad \Delta : e'[x/y] \Downarrow \Theta : z}{\Gamma : e\ x \Downarrow \Theta : z} \qquad \textit{Application}
\]
\[
\frac{\Gamma : e \Downarrow \Delta : z}{(\Gamma, x \mapsto e) : x \Downarrow (\Delta, x \mapsto z) : z} \qquad \textit{Variable}
\]
\[
\frac{(\Gamma, x_1 \mapsto e_1 \cdots x_n \mapsto e_n) : e \Downarrow \Delta : z}{\Gamma : \text{let } x_1 = e_1 \cdots x_n = e_n \text{ in } e \Downarrow \Delta : z} \qquad \textit{Let}
\]
A rule for reducing strict lets must be added to the system. This rule is quite similar to the rule for a normal let, but it adds a condition to enforce the evaluation of the expression to be shared:
\[
\frac{\Gamma : e_1 \Downarrow \Theta : e_1' \qquad (\Gamma, x_1 \mapsto e_1) : e \Downarrow \Delta : z}{\Gamma : \text{let! } x_1 = e_1 \text{ in } e \Downarrow \Delta : z} \qquad \textit{StrictLet}\,^{4}
\]
---
\(^2\) For a curried application of \( F \), replace \( E(F) \) by \( E(\lambda x.\lambda y.\ \text{let! } y_{strict} = y \text{ in } F\ x\ y_{strict}) \).
\(^3\) For a curried application of \( TCCons \), replace \( E(TCCons) \) by \( E(\lambda x.\lambda y.\ \text{let! } y_{strict} = y \text{ in } TCCons\ x\ y_{strict}) \).
\(^4\) Clearly, it would also have been possible to define the StrictLet rule writing \( x_1 \mapsto e_1' \) instead of \( x_1 \mapsto e_1 \), since this is closer to how reduction is actually performed. But in this way, the theory stays closer to [Lau93].
Note that the environment need not be changed for the evaluation of \( e_1 \), since the strict let is not recursive.
From this definition it is intuitively clear that a strict let will behave the same as a normal let when \( e_1 \) has a weak head normal form. Otherwise, no derivation will be possible for the strict let.
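For instance (assuming the rule for numerals introduced in section 4.2), the lazy let below reduces while the strict let does not:
\[
\{\} : \text{let } x = \Omega \text{ in } 42 \;\Downarrow\; (x \mapsto \Omega) : 42
\qquad \text{but} \qquad
\{\} : \text{let! } x = \Omega \text{ in } 42 \text{ has no derivation, since } \Omega \text{ has no weak head normal form.}
\]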
If we would replace all let!’s by standard let’s in an expression, the weak head normal forms would not change. However, if we would replace all let’s by let!’s, then the weak head normal form would either be the same or be undefined.
These properties will be proven in the next section.
### 3.1 Proving Normalization, Relation to Denotational Semantics and Computational Adequacy
[most proofs in this section will be filled in later.....]
We first extend the meaning function of [Lau93] with the meaning of the new let! construct.
As in [Lau93] we have a function domain following Abramsky and Ong [Abr90], [AO93], and we use \( \text{Fn} \) and \( \downarrow\text{Fn} \) as lifting and projection functions. An environment is a function from variables to values, where the domain of values is some appropriate domain, also containing functions on values and a least element \( \bot \). We also use the special semantic environment function \( \{\{\,\}\}\rho \). It resolves the possible recursion and is defined as:
\[
\{\{x_1 \mapsto e_1 \cdots x_n \mapsto e_n\}\}\rho = \mu \rho'.\ \rho \cup (x_1 \mapsto \llbracket e_1 \rrbracket_{\rho'} \cdots x_n \mapsto \llbracket e_n \rrbracket_{\rho'})
\]
Furthermore we use the same ordering on environments expressing that larger environments bind more variables but have the same values on the same variables: \( \rho \leq \rho' \) means \( \forall x. (\rho(x) \neq \perp \Rightarrow \rho(x) = \rho'(x)) \).
**Definition 5 (Meaning Function).**
\[
\begin{align*}
\llbracket \lambda x.e \rrbracket_\rho &= \text{Fn}\,(\lambda v.\ \llbracket e \rrbracket_{\rho \cup (x \mapsto v)}) \\
\llbracket e\ x \rrbracket_\rho &= (\llbracket e \rrbracket_\rho) \downarrow\text{Fn}\ (\llbracket x \rrbracket_\rho) \\
\llbracket x \rrbracket_\rho &= \rho(x) \\
\llbracket \text{let } x_1 = e_1 \cdots x_n = e_n \text{ in } e \rrbracket_\rho &= \llbracket e \rrbracket_{\{\{x_1 \mapsto e_1 \cdots x_n \mapsto e_n\}\}\rho} \\
\llbracket \text{let! } x_1 = e_1 \text{ in } e \rrbracket_\rho &= \bot, \text{ if } \llbracket e_1 \rrbracket_\rho = \bot \\
&= \llbracket e \rrbracket_{\{\{x_1 \mapsto e_1\}\}\rho}, \text{ otherwise}
\end{align*}
\]
In extension to [Lau93], we defined above a meaning for let!-expressions. This meaning is given by a case distinction: if the meaning of the expression to be shared is \( \bot \), then the meaning of the let!-expression as a whole becomes \( \bot \); otherwise, the meaning is simply the same as the meaning of the corresponding normal let-expression.
Before establishing the required properties, we first study the correspondence between the meaning function defined here and the meaning function defined in [Lau93].
**Definition 6 (Replacement of let! by let for expressions).** The function \( ^{-!} \) is defined on expressions such that \( e^{-!} \) is the expression \( e \) in which every let!-expression is replaced by the corresponding let-expression:
\[
\begin{align*}
(x)^{-!} &= x \\
(\lambda x.e)^{-!} &= \lambda x.(e^{-!}) \\
(e\ x)^{-!} &= (e^{-!})\,(x^{-!}) \\
(\text{let } x_1 = e_1 \cdots x_n = e_n \text{ in } e)^{-!} &= \text{let } x_1 = e_1^{-!} \cdots x_n = e_n^{-!} \text{ in } e^{-!} \\
(\text{let! } x_1 = e_1 \text{ in } e)^{-!} &= \text{let } x_1 = e_1^{-!} \text{ in } e^{-!}
\end{align*}
\]
**Definition 7 (Replacement of let! by let for environments).** The function \( ^{-!} \) is defined on environments such that \( \Gamma^{-!} \) is the environment \( \Gamma \) in which in every binding every expression \( e \) is replaced by the corresponding expression \( e^{-!} \):
\[
\{\}^{-!} = \{\} \qquad (\Gamma, x \mapsto e)^{-!} = (\Gamma^{-!}, x \mapsto e^{-!})
\]
Note that in the definition above the empty environment is indicated by \( \{\} \).
Below, we will indicate the meaning function of [Lau93] (which is exactly the same as our meaning function with the exception of the rule for let!) by \( \llbracket \cdot \rrbracket^{lazy} \). The next theorem establishes a close relation between the semantics with let! and the semantics without let!. In fact, the only difference is that more terms are assigned the meaning bottom. Consequently, every term that has a non-bottom meaning in the mixed semantics will also have a non-bottom meaning in the lazy semantics.
**Theorem 1 (Compare Meanings).** The meaning of expressions with let! is the same as Launchbury’s meaning for expressions with let, with the exception that more expressions get the meaning \(\bot\).
\[
\llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0} \neq \llbracket e^{-!} \rrbracket^{lazy}_{\{\{\Gamma^{-!}\}\}\rho_0} \;\Rightarrow\; \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0} = \bot
\]
**Proof.** To be filled in later....
Note that in the definition above the initial semantic environment is indicated by \(\rho_0\).
As a direct consequence of theorem 1 the following holds:
\[
\llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0} \neq \bot \;\Rightarrow\; \llbracket e^{-!} \rrbracket^{lazy}_{\{\{\Gamma^{-!}\}\}\rho_0} = \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0}
\]
Similarly to \( \llbracket \cdot \rrbracket^{lazy} \), we will indicate the reduction semantics of [Lau93] with \( \Downarrow^{lazy} \). Again, a close relationship between reduction with let! and reduction without let! can be established. For all cases where \( e \) reduces to \( z \) in the mixed semantics, \( e^{-!} \) also reduces to \( z^{-!} \) in the lazy semantics, and consequently the lazy meaning of \( e^{-!} \) is non-bottom. This can even be strengthened: if \( e^{-!} \) reduces to \( z^{-!} \) in the lazy semantics, then either \( e \) reduces to \( z \) in the mixed semantics or the meaning of \( e \) is bottom.
**Theorem 2 (Compare Reduction).** The reduction semantics of expressions with let! is the same as Launchbury’s reduction for the corresponding expressions with let, with the exception that fewer expressions have a weak head normal form in the mixed semantics.
\[
\Gamma : e \Downarrow \Delta : z \;\Rightarrow\; \Gamma^{-!} : e^{-!} \Downarrow^{lazy} \Delta^{-!} : z^{-!}
\]
\[
\Gamma^{-!} : e^{-!} \Downarrow^{lazy} \Delta^{-!} : z^{-!} \;\Rightarrow\; \Gamma : e \Downarrow \Delta : z \;\vee\; \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0} = \bot
\]
**Proof.** To be filled in later....
In establishing standard semantical properties we will follow the structure of Launchbury’s paper [Lau93]. We show that each of the theorems for the natural semantics also holds for the extension with explicit strictness.
**Theorem 3 (Distinct Names).** If \( \Gamma : e \Downarrow \Delta : z \) is distinctly named, then every heap/term pair occurring in the proof of the reduction is also distinctly named.
**Proof.** We only have to consider the StrictLet rule, which is trivial since no renaming takes place there. The proof for the other cases is unchanged with respect to [Lau93].
Our correctness theorem must differ slightly from [Lau93] in order to make the induction work for the let! case. The problem is the recursive \( \llbracket e_1 \rrbracket_\rho \neq \bot \) condition in the definition of the meaning function. It requires us to prove \( (\Gamma : e \Downarrow \Delta : z \implies \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho} \neq \bot) \) and \( (\Gamma : e \Downarrow \Delta : z \implies \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho} = \llbracket z \rrbracket_{\{\{\Delta\}\}\rho}) \) at the same time.
**Theorem 4 (Correctness).** If \( \Gamma : e \Downarrow \Delta : z \) then for all environments \( \rho \),
\[
\llbracket e \rrbracket_{\{\{\Gamma\}\}\rho} \neq \bot \;\wedge\; \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho} = \llbracket z \rrbracket_{\{\{\Delta\}\}\rho} \;\wedge\; \{\{\Gamma\}\}\rho \leq \{\{\Delta\}\}\rho
\]
**Proof.** Induction on the structure of \( e \), to be filled in later.
**Theorem 5 (Computational Adequacy).**
\[
\llbracket e \rrbracket_{\{\{\Gamma\}\}\rho} \neq \bot \iff (\exists \Delta, z.\ \Gamma : e \Downarrow \Delta : z)
\]
**Proof.**
\( \Leftarrow: \) Follows immediately from theorem 4.
\( \Rightarrow: \) **proof sketch** The \( \Rightarrow \) part requires somewhat more effort.
Using theorem 1 we find
\[
\llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0} \neq \bot \Rightarrow (\llbracket e^{-!} \rrbracket^{lazy}_{\{\{\Gamma^{-!}\}\}\rho_0} = \llbracket e \rrbracket_{\{\{\Gamma\}\}\rho_0})
\]
Then, we know from [Lau93] that \( \exists \Delta, z.\ \Gamma^{-!} : e^{-!} \Downarrow^{lazy} \Delta : z \). If we take the derivation that shows that \( e^{-!} \) reduces to \( z \), then this will also be a proof in our extended semantics.
### 4 Example proofs using the mixed semantics
### 4.1 An example that distinguishes between $\Omega$ and $\lambda x. \Omega$
A semantically interesting aspect of explicit strictness is that it allows the programmer to distinguish between $\lambda x. \Omega$ and $\Omega$.
The standard lazy semantics [Lau93] makes it possible to yield these values as different results. However, in that semantics it is not possible to write a function $F$ that produces a different result depending on which one is given as an argument. We say that two terms produce a different result if either a different basic value (like 1 or True) can be produced or one term does not terminate and the other produces a basic value.
So, in lazy natural semantics these two different values belong to a single equivalence class of which the members cannot be distinguished by the programmer.
With the mixed semantics a definition of such a function $F$ is certainly possible. Below, a Clean definition of such an $F$ is given. The result of $F$ on $\lambda x. \Omega$ will be 42, and the result of $F$ on $\Omega$ will be $\bot$. Note that it is not possible to return anything other than $\bot$ in the $\Omega$ case.
H :: a -> b        // H is the typed equivalent of (Lambda x. Omega)
H x = H x          // and H 1 is the typed equivalent of Omega

F :: !a -> Int     // F is explicitly strict in its argument
F x = K 42 x

K :: a b -> a
K x y = x

Start :: Int
Start = F H        // --> reduces to 42
// Start = F (H 1) // --> bottom, infinite reduction
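The same behaviour can be mimicked outside Clean. The following Python sketch is our own illustration (not part of the paper): lazy arguments are encoded as zero-argument thunks, and the explicit strictness of F corresponds to forcing its argument to weak head normal form before the body is evaluated.

```python
# Illustrative sketch (ours, not from the paper): laziness via explicit
# thunks; explicit strictness = forcing the thunk before using it.

def omega():
    # Typed equivalent of Omega: forcing it never terminates
    # (Python raises RecursionError instead of looping forever).
    return omega()

H = lambda x: omega()   # lambda x. Omega: already in weak head normal form

def K(x, y):
    return x

def F(thunk):
    y = thunk()         # let! y = x in ...: force the argument first
    return K(42, y)

print(F(lambda: H))     # --> 42, since H is a value (a lambda)
# F(omega)              # --> bottom: forcing Omega does not terminate
```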
4.2 Proving the example with mixed semantics
To illustrate the way proofs of reduction can be made using the mixed semantics, we show the reduction proofs of this example below.
First we have to transform the definitions slightly in order to fit the logical framework. We define (taking, without loss of generality, $\Omega$ to be equivalent to $H\,1$ in order to shorten the proofs):
\[
\begin{align*}
H &\equiv \lambda x.\,\Omega \\
H\,1 &\equiv \Omega \\
\Omega &\equiv (\lambda x.\,x\,x)\,(\lambda x.\,x\,x) \\
K &\equiv \lambda a.\,\lambda b.\,a \\
F &\equiv \lambda x.\,(\text{let! } y = x \text{ in } (K\,42\,y))
\end{align*}
\]
We will prove two properties:
\[ \exists \Delta .\; \{\,\} : F\,H \Downarrow \Delta : 42 \quad (1) \]
\[ \forall \rho .\; \{F\,(H\,1)\}_{\rho} = \bot \quad (2) \]
To work with numerals we need an extra rule for dealing with them:
\[ \Gamma : n \Downarrow \Gamma : n \quad \text{Numerals} \]
We write down proofs similar to [Lau93] with sub-derivations contained within square brackets.
\[
\begin{array}{c}
\Gamma : e \\
\ \ \ \text{a sub-proof} \\
\ \ \ \text{another sub-proof} \\
\Delta : z
\end{array}
\]
The proof of the first property is given below:
For the proof of the second property theorem 5 is used. It then suffices to prove that it is impossible to construct a derivation for $F(H1) \Downarrow e$ for any $e$. This is shown by:
There is no finite derivation tree for this reduction, because building this tree inevitably leads to an infinite sequence of subtrees.
5 Related work
This work extends the work of Launchbury [Lau93] with explicit strictness. As shown in section 4.2, our extension makes it possible to write programs that distinguish between terms that are different, but indistinguishable, in Launchbury's semantics.
A formal semantics that is similar to Launchbury's has been defined independently by Barendsen and Smetsers [BS99]. They address strictness, but only consider it in the context of a typing system to derive strictness information. So, no mixed semantics is given. It might be worthwhile to establish a formal correspondence between Launchbury's semantics and that of Barendsen and Smetsers, such that results can be transferred back and forth.
The SPARKLE project [dMvEP01] aims to further integrate programming and formal reasoning. A proof assistant [dMvE99] for Clean has been developed, and work is in progress to fully describe the underlying semantical issues (including single-step semantics with strictness in a fully formal semantics for the complete language).
Another project that aims to integrate programming, properties and validation is a project of the Pacific Software Research Center in Oregon: the Programatica project.
Not much information is available about this project (the papers section on the site only makes 3 out of 8 papers available). The aim of the project seems to be much broader than just functional programming and verification. A wide range of validation techniques for programs written in different languages is intended to be supported. For functional languages they use a logic (P-logic) based on a modal $\mu$-calculus (in which also undefinedness can be expressed). The precise relation between these semantics and the semantics of [Lau93] is unknown.
6 Conclusions
We have discussed reasoning about programs with explicit strictness. We have introduced the use of auxiliary functions on the programming level in order to facilitate the reasoning on the logic level.
We have shown that it is possible and worthwhile to extend the natural lazy semantics of [Lau93] with a construct for explicit strictness.
The resulting derivation system is shown to be correct and computationally adequate. The mixed semantics system is a proper extension of Launchbury’s natural semantics. With our mixed semantics it is possible to write expressions that distinguish between terms that have different natural semantics but cannot be distinguished by a term within that semantics.
We hope to have shown that strictness is not just for functional hacking but that it is also possible to reason properly and formally about programs that use strictness explicitly.
Application areas and added value of knowledge base systems
R.V. Schuwer and R.J. Kusters
Eindhoven University of Technology, Eindhoven, Netherlands
A knowledge base system is characterized by a separation between application-dependent knowledge and application-independent deduction rules. When used in a business environment, it is not clear what added value this separation has over conventional systems. It is also not clear what characteristics make a problem tractable for a solution using a knowledge base system. This paper tries to formulate answers to these questions. In order to obtain a sound basis for discussion, a formal model of a knowledge base system is presented.
Keywords: Knowledge base system; Expert system; Application areas; Formal definition
Introduction
In recent years many organizations have invested in research aimed at the applicability of knowledge base systems as a solution to business problems. Often prototypes were developed. Criteria for choosing the right problem were generally based on rules of thumb, such as those in Waterman [1986]: the problem must be neither too complex nor too easy; there must be an expert available who is able to formulate his or her way of working etc. These criteria do not say anything about the characteristics of a knowledge base system (namely the separation between knowledge and inference), nor do they give an indication when it is best to tackle a problem with a knowledge base system. In this paper we suggest some answers to this last question. For this it is necessary to have a reasonable idea of the important features of a knowledge base system.
1. Knowledge base systems and backgrounds
In the literature, many different definitions of a knowledge base system can be found. Here we use the following architectural definition [Mars, 1988]: “A knowledge base system is a computer program, in which as good as possible a separation has been made between application-independent inference rules and application-dependent knowledge.” The difference between a knowledge base system and an expert system is vague. An expert system can be considered a knowledge base system with almost the performance of a human expert. We will use the term “knowledge base system” here.
The evolution of knowledge base systems can be explained in different ways. One explanation takes Artificial Intelligence (AI) as the basis, with its aim of explaining and emulating human acting and thinking. With the development of a knowledge base system one tries to understand how humans think and act. This might result in a computer taking over certain tasks of a human expert. Examples of AI products are robots, neural networks, computer vision, and natural language processors.
The evolution of knowledge base systems can also be explained as a trend towards a modular construction of software. The goal then is to provide higher quality with better interfaces for maintenance of software and control of the development process. The evolution of knowledge base systems will be discussed from this viewpoint.
The trend towards a more modular construction of software has been an issue since the beginning of automation. The following partition is traditionally found in software systems:
- Operating system and utilities;
- Data;
- Application programs.
Initially these three components existed in a single computer program: each program contained load-, read- and print-functions; e.g. in the Gamma ET-computer of Bull (1957).
The first partitioning of components occurred in a separation of system and application software. Greater efficiency could be reached by generalizing the system tasks. At first this system software contained only simple functions, but the development of multiprogramming led to more complex operating systems with memory management; e.g., that of the IBM 1410 (1961).
The second partitioning was the separation between data and application software; e.g., in COBOL (1961). The simultaneous use of data by several users at one time led to the separation of data management functions from the application software and the development of DBMS; e.g. with DBMS IDS (1965).
The next logical step was the separation of knowledge from the application software. This resulted in knowledge base systems. The assumption was made that an application program contains two types of knowledge: domain-dependent knowledge of the data and the way in which the data can be manipulated, and domain-independent deduction (inference) rules. In knowledge base systems, these two types of knowledge are separated from the application program and stored apart in a knowledge component, which consists of:
1. domain dependent knowledge about the data,
2. domain dependent deduction rules,
3. domain independent deduction rules.
The first two form the application dependent knowledge; the last are the application independent inference rules. In the application program there is also a user interface and the program operates under program control.
Figure 1 presents this development in a schematic way; a conventional information system (not including the system software) usually only contains:
- Data;
- Data management component;
- Application program component.
Thus the following differences can be found:
- The knowledge base system has a separate knowledge component. This is part of the application program in conventional information systems.
- In a knowledge base system there is no component for the management of knowledge. Future development may lead to Knowledge Base Management Systems (KBMS).
The knowledge component is the characteristic component of a knowledge base system. A formal description of a knowledge base system is therefore valuable for our discussion.
2. A formal description of a knowledge base system
Several questions are not answered in the informal definition of a knowledge base system:
- What exactly are facts and rules?
- What is an inference mechanism?
- How do facts, rules, and the inference mechanism influence each other?
Also, the definition does not give information about the design of a knowledge component. To get a clearer view of these issues, it is necessary to start with a formal description or model, providing an unambiguous description of the essential characteristics of a knowledge base system. Furthermore the working of and the cooperation between the different components is clearer. For an exact mathematical description of the model, see Schuwer and Eiben [1991].
To illustrate the definition we use the following example:
Buying a kitchen will confront us with a large number of possible configurations which are subject to several constraints. A system which supports the configuration of a kitchen must answer questions as “Is it possible to combine a Philips microwave with a certain type of Bauknecht oven?” or “Which types of dishwashers do fit within a kitchen-cupboard with a width of 45 cm?” The system must also be able to check if a configuration complies to all constraints.
When building a knowledge base system, in a similar way to a conventional system, the important relations of interest in the domain must be input. In our example, this is a set of relation symbols, combined with variables and the logical NOT-sign (¬), such as:
- micro(id#,type,price,colours)
- dishwasher(id#,type,price,colours,width,contents)
- colour_ok_o_m(id#_oven,id#_microwave)
- colour_ok_m_d(id#_microwave,id#_dishwasher)
- colour_ok_o_d(id#_oven,id#_dishwasher)
- cupboard(id#,type,width)
Also the “constant” elements must be given. In the example those are the types of the pieces of apparatus (Philips, Bauknecht, Miele,…), the colours (white, grey, black,…), the prices etc. Relation symbols can be combined with constants. A combination of a relation symbol with variables or constants with or without the “¬” will be called a literal. When the literal does not contain variables, it is called ground. A set of ground literals is called a database-state (DB-state). In this paper an element of a DB-state is called a fact. A DB-state will therefore often be called a factbase. This part of a knowledge base system is comparable with the data component (database) of a conventional information system. Examples of such DB-states are:
- \( u_1 = \{ \text{micro}(20, \text{philips}, 500, \text{white}), \text{oven}(10, \text{philips}, 4000, \text{white}) \} \)
- \( u_2 = \{ \text{colour\_ok\_o\_m}(10,20),\ \text{colour\_ok\_m\_d}(20,30),\ \text{colour\_ok\_o\_d}(10,30) \} \)
- \( u_3 = \{ \text{colour\_ok\_o\_m}(10,20),\ \text{colour\_ok\_m\_d}(20,30) \} \)
- \( u_4 = \{ \text{micro}(20, \text{philips}, 500, \text{white}),\ \text{micro}(20, \text{bauknecht}, 600, \text{white}) \} \)
\( u_1, u_2 \) and \( u_3 \) are examples of DB-states which represent the "Universe of Discourse" correctly. DB-state \( u_2 \), however, has some redundancy: when it is known that colour\_ok\_o\_m(10,20) and colour\_ok\_m\_d(20,30), then one also "knows" that colour\_ok\_o\_d(10,30) is correct. These dependencies between facts can be given in the form of rules. The general form of such a rule is:
\[
\text{IF} \ \ (a \ \text{number of facts are known}) \ \ \text{THEN} \ \ (a \ \text{new fact may be concluded})
\]
In the case of \( u_2 \) the rule is as follows:
\[
\text{IF colour\_ok\_o\_m}(X,Y) \text{ AND colour\_ok\_m\_d}(Y,Z) \text{ THEN colour\_ok\_o\_d}(X,Z) \quad (1)
\]
A finite set of rules is called a rulebase.
DB-state \( u_4 \) shows that constraints must be given to ensure that a DB-state really gives a correct representation of the Universe of Discourse. In the case of \( u_4 \) this constraint will state that id-numbers must be unique within a DB-state. Such a constraint is like a filter for a DB-state. In the sequel we are only concerned with so-called feasible DB-states, which fulfill all formulated constraints. The set of all feasible DB-states will be called the feasible Database-universe (DB-universe).
Rules from the rulebase can be used on a DB-state to extend it. New facts are added such that a new feasible DB-state is obtained (so another element of the DB-universe is created). In the example, rule (1) can be used with DB-state \( u_3 \) to create a new DB-state (in this case the DB-state \( u_2 \)). This can be done repeatedly until no new facts can be deduced. In this way a rulebase gives structure to a feasible DB-state: it can be split up into a set of "basic" facts (always present in the database) and a set of deducible facts. Furthermore there is a set of "forbidden information": due to the demand of consistency of a DB-state, all literals that are the opposite of those in the feasible DB-state are FALSE. One could call the union of the set of basic facts, the set of deducible facts, and the set of forbidden information the range of knowledge of \( u \) and \( R \). The remaining literals are all those about which no assertions can be made (inaccessible information). This structure can be described with a function \( k \). For each feasible DB-state \( u \), \( k(u) \) is the union of \( u \) and the set of deducible facts. \( k(u) \) itself is also a feasible DB-state. Because knowledge from the rulebase is used, we will call the function \( k \) a knowledge function.
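As a concrete illustration, the following Python sketch (our own rendering, not part of the paper) computes \( k(u) \) for rule (1) by applying the rules to a DB-state until a fixpoint is reached:

```python
# Hypothetical sketch: facts are tuples, a rule maps a DB-state to the set
# of facts it derives, and k(u) is the fixpoint of repeated rule application.

def rule_1(state):
    """IF colour_ok_o_m(X,Y) AND colour_ok_m_d(Y,Z) THEN colour_ok_o_d(X,Z)."""
    derived = set()
    for (p, x, y) in state:
        if p != "colour_ok_o_m":
            continue
        for (q, y2, z) in state:
            if q == "colour_ok_m_d" and y == y2:
                derived.add(("colour_ok_o_d", x, z))
    return derived

def k(u, rules):
    state = set(u)
    changed = True
    while changed:                       # iterate until no new facts appear
        changed = False
        for rule in rules:
            new_facts = rule(state) - state
            if new_facts:
                state |= new_facts
                changed = True
    return state

u3 = {("colour_ok_o_m", 10, 20), ("colour_ok_m_d", 20, 30)}
print(k(u3, [rule_1]))   # adds ('colour_ok_o_d', 10, 30), i.e. yields u2
```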
When \( L \) denotes the set of all possible literals that can be made with the relation symbols and the constants, then, given \( u \in U \), \( R \) and \( k \), the different subsets of \( L \) are:
- \( u \) which contains the basic facts.
- \( k(u) \setminus u \) the deducible facts.
- \( \{ \neg A \mid A \in k(u) \} \) the set of forbidden information
- \( L - (k(u) \cup \{ \neg A \mid A \in k(u) \}) \) the set of inaccessible information
This leads to the following definition: Suppose a set of constraints is given. A knowledge model is a tuple \( \langle U, R, k \rangle \) such that:
- \( U \) is a feasible DB-universe,
- \( R \) is a set of rules (the rulebase),
- \( k \) is the knowledge function, determined by \( U \) and \( R \).
An element \( u \) of \( U \) and the rulebase \( R \) together form a knowledge base. A knowledge base thus is a set of facts and rules. This agrees with most definitions found in the literature (e.g. [Waterman, 1986]).
A knowledge model is the foundation of a knowledge base system (KBS). We first define a KBS as a program to compute the range of knowledge of a knowledge base. This process starts with a query from the user. A query is a set of literals (not necessarily ground) whose elements will be called hypotheses. For each hypothesis the KBS has to select the set which contains it or to find facts whose constants can be substituted into the hypothesis to get the literal. The ultimate goal is to let the KBS do this for the whole query in order to give an answer (to prove it). This process can be described step by step. After each step, the state of the KBS can be described by the state of the database (it can have grown by adding proven facts) and the state of the query (it can have grown by adding sub-hypotheses or it can have diminished by deleting hypotheses, that have been proven). For this purpose a KBS consists of an inference procedure able to do this reasoning process. The process can be characterized by two sets of metarules:
- In the query, a hypothesis will be chosen. This is determined by one or more metarules, the goal-selection rules.
- When the hypothesis cannot be answered with the existing DB-state, the system will deduce facts until the hypothesis can be answered, or \( k(u) \) is reached. For this process of deduction one or more rules are selected and used with the DB-state \( u \). For this the system will use another set of metarules, the rule-selectionrules.
In practice the knowledge base and the inference procedure are not enough to answer the query. One often has to go across the border of the range of knowledge of the knowledge base and has to make an assertion about a piece of inaccessible information. For this purpose the KBS uses a set of extended metarules to answer the query. These will be called E-rules. A well-known example is the Closed World Assumption: assume the (ground) hypothesis FALSE when all attempts to prove the hypothesis fail. This rule assumes that all knowledge is available in the system.
Strategies for executing a reasoning process can be described by the way an inference procedure uses goal-selectionrules, rule-selectionrules, and E-rules. When the system uses a "backward chaining" strategy, the rule-selectionrule selects a rule whose head matches the hypothesis-to-prove. When a "forward chaining" strategy is used, a rule is selected whose body predicates are all satisfied (that is, they are TRUE in the current DB-state).
This gives the following definition: A knowledge base system is a computer program which consists of:
- \( U \): a feasible DB-universe
- \( u \in U \): a feasible DB-state
- \( R \): a rulebase
- \( G \): a set of goal-selectionrules
- \( S \): a set of rule-selectionrules
- \( E \): a set of E-rules
- \( IP(G,S,E) \): an inference procedure
This is illustrated in the appendix. The model can be used to identify knowledge base systems. It can therefore be used to evaluate AI-tools (languages or shells). This gives an indication of problems that can be expected when using a tool in a specific situation.
Although a more or less precise description of a knowledge base system has now been presented, it is not yet obvious what added value is provided by the separation of knowledge and deduction rules.
3. The added value of a knowledge base system
In the terminology of Bemelmans [1987], in evaluating the added value of an information system a distinction is made between:
- functional requirements: those requirements that indicate which data have to be processed and supplied (WHAT must the system do),
- non-functional requirements (also called performance or quality indicators): the conditions under which data processing and supply must take place (HOW will the system do it).
The added value of a knowledge base system can also be considered in this way.
The specific architecture of a knowledge base system will not add to its functionality. In principle, all functionality that can be provided by a knowledge base system can also be provided by a traditional information system. From this point of view it comes as no surprise that a system originally designed as a knowledge base system is often implemented in a traditional way.
Of course in the end all programs can be compiled into machine language. At that point, it makes no difference how the statements were derived. Therefore there is no reason to assume a difference in functionality. Thus the added value of a knowledge base system is found in its ability to fulfill certain non-functional requirements with less effort.
In Boehm et al. [1978] a classification of these non-functional requirements is presented. This so-called "quality tree" is represented hierarchically in Figure 2 (the software quality characteristics tree). Looking at the lowest level, the knowledge base system will have an advantage in the following ways:
**Consistent**
Explicit definition of the knowledge provides for better checks of the knowledge.
**Accessible**
The fact that the different components of the system are explicitly defined makes it possible to access them separately.
**Structure**
From the definition it follows that a knowledge-base system is well structured.
**Self-description**
Since no information is provided of the knowledge other than the knowledge itself, self-description is assured.
**Legible**
Separation of knowledge makes it easier to acquire information.
**Augmentable**
Since the components are defined separately, it is easier to add to these modules.
If we take these characteristics and look at the software quality characteristics tree, we see that at a higher level the characteristics testable, understandable and modifiable are influenced. Following this through to the next higher level, we see that maintainable is the high-level non-functional requirement that is influenced by the decision to design a system as a knowledge-base system. This means that using a knowledge-base system to implement a particular problem has advantages that increase the effectiveness and the efficiency of managing the knowledge in the system.
When designing a knowledge base system it is necessary to map the knowledge in terms of the representation technique to be used in the eventual implementation. The development of a knowledge model does not necessarily have to remain restricted to knowledge base systems. When the knowledge used is documented on a conceptual level, insight into this knowledge will increase. This will increase insight into the whole system, providing for greater maintainability.
4. Recognizing problem characteristics
Our analysis is aimed at the components of the knowledge base, namely the sets u, an element of the feasible database universe U, and R, the rulebase. We consider properties of these sets that might indicate the advisability of a knowledge base system solution.
We will first consider the set u, in which the facts and the relations between them are represented (data and data structure). Where high demands are placed on a set of data, use is made of a Data Base Management System (DBMS), justified by the following properties [Everest, 1986]:
- data are used by multiple users and multiple applications,
- the size of the set of data is large,
- changes in the data occur regularly.
Looking at the rulebase R, we also have to find properties that would make it advisable to manage it. In general, it is desirable to be able to control a situation when it is complex, changes regularly, or involves several parties. Compare this with the DB-state. We now translate these general properties into properties that have meaning within the setting of a rulebase. This results in the following properties:
- High complexity and/or size of the knowledge. When these increase, there will be a demand for better control of this rulebase, and this will be facilitated within the architecture of a knowledge base system. It is easier to control the knowledge base when it is represented separately than when it is "hidden" within code.
- If changes in the knowledge occur relatively often, the changes are easier to make when the knowledge is stored separately. This way one avoids large parts of the application having to be rewritten each time a change occurs. A specific case occurs when the knowledge base is being developed: it can then be considered incomplete.
The way the set R is handled is also of importance:
- The knowledge is shared by several users or applications. This occurs relatively seldom, but security problems may result; e.g., classified facts may be inferred from others. In such a case admittance control is required.
- The order in which rules can be used is dependent on the specific state of u. It is possible that, in a certain DB-state, rule A has to be used before rule B, while in another DB-state the reverse may be the case; e.g., when little data are available in DB-state u and R is relatively large, it is advisable to choose a forward chaining strategy. However, when u is large and R is small, a backward chaining strategy would be in order. Thus the choice of the rules to be used and the order in which to use them depends on the contents of u. Flexible use of rules is aided by storing them in a knowledge base.
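A minimal sketch of this heuristic (the function name and the decision rule as coded here are our own choice, not from the paper):

```python
def choose_strategy(num_facts: int, num_rules: int) -> str:
    # Little data but a relatively large rulebase: chain forward from the
    # available facts; much data but few rules: chain backward from the goal.
    return "forward" if num_facts < num_rules else "backward"

print(choose_strategy(num_facts=5, num_rules=50))     # forward
print(choose_strategy(num_facts=5000, num_rules=7))   # backward
```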
We have now described properties that argue the advisability of proper management of the sets; the potential for this management is offered by a knowledge base system solution. This leads to the following classification:
**Situation 1:** Both for R and for u, there is no necessity for extra effort that provides better management. In this situation a knowledge base system is not needed.
**Situation 2:** Properties of u indicate that data management is needed, but no such indications exist for R. Then a DBMS is indicated as solution.
**Situation 3:** Management for R is needed, but not for u. This can be tackled using the present generation of knowledge base systems. These systems provide sufficient support for the management of knowledge but are lacking in the management of data.
**Situation 4:** If management of both u and R is required, then both the present generation knowledge base system (insufficient capabilities for data management) and the present DBMS (insufficient capabilities for knowledge management) are incapable of fulfilling the demand. In this situation, the need for a Knowledge Base Management System (KBMS) arises. This is summarized in Table 1.

Table 1. Overview of possible solutions, given the properties of the problem area.

| | u weak | u strong |
|---|---|---|
| **R weak** | no DBMS/KBMS (situation 1) | DBMS (situation 2) |
| **R strong** | Knowledge Base System (situation 3) | KBMS (situation 4) |
In a KBMS, apart from the database, rulebase, sets of metarules, and the inference procedure, functions have to be available for the maintenance of these components.
This gives the following definition of a KBMS: A KBMS is a group of computer programs in which:
- the components of a knowledge base system can be defined,
- the database can be maintained,
- the rulebase can be maintained,
- the sets G, S and E can be maintained,
- the security of these components can be managed.
Note that the functionality of a KBMS encompasses both the functionality of a DBMS and that of a "traditional" knowledge base system. As an example of an implementation of a KBMS, see Van Herwijnen et al. [1990].
5. Conclusions
Knowledge base systems can be considered a logical step in a historical evolution. In order to determine whether a problem can be adequately solved using a knowledge base system, the knowledge must be analyzed into a set of data (\(u\)) and a set of rules (\(R\)). Based on the properties of these sets (size, complexity, completeness, robustness and the order of use of rules), the appropriateness of the implementation mechanism can be determined. The main argument used for this choice is the need for management of the sets \(u\) and \(R\).
Note:
The authors would like to thank Prof. Dr. T.M.A. Bemelmans for his comments on earlier versions of this paper.
References
Mars, N., Onderzoek van niveau: Kennistechnologie in wording (High level research: the growth of knowledge technology), Informatie, Vol. 30, nr. 2, pp. 84–90, 1988 (in Dutch).
Appendix
The domain of the KBS refers to the configuration of a kitchen. For the sake of simplicity, only part of the configuration will be considered. There will be rules on possible combinations of a micro-wave and an oven. Combinations are restricted by the following rules:
- If the oven has a built-in micro-wave, one is not allowed to choose a separate micro-wave as well. Such ovens are offered by Philips and cost over $2000.
- Only certain colour-combinations of an oven and a micro-wave are allowed.
The following literals will be used:
\[
\text{Oven(Id\#, Type, Price, Colour)}
\]
\[
\text{Micro(Id\#, Type, Price, Colour)}
\]
\[
\text{Cook\_ok(Id\# \_ Oven, Id\# \_ Micro)}
\]
\[
\text{Colour\_ok(Id\# \_ Oven, Id\# \_ Micro)}
\]
\[
\text{Combination\_ok(Id\# \_ Oven, Id\# \_ Micro)}
\]
The meaning of "Id\# = 0" will be "Not chosen". A "-" will denote a "don't care" (it doesn't matter which value the corresponding field will have).
Given the following knowledge base \(\langle u, R \rangle\) and metarules:
\[
u = \{ \text{oven(10, philips, 4000, white)}, \\
\text{oven(11, philips, 1500, white)}, \\
\text{oven(12, bauknecht, 1100, white)}, \\
\text{micro(20, philips, 500, white)}, \\
\text{micro(21, philips, 600, grey)} \}
\]
\[
\begin{aligned}
R = \{\; & \text{IF combination\_ok}(X,Y) \text{ AND colour\_ok}(X,Y) \text{ THEN cook\_ok}(X,Y), && (1) \\
& \text{IF oven}(X,B,P,\text{-}) \text{ AND } B = \text{philips} \text{ AND } P > 2000 \text{ AND micro}(Y,\text{-},\text{-},\text{-}) \text{ AND } Y = 0 \\
& \quad \text{THEN combination\_ok}(X,Y), && (2) \\
& \text{IF oven}(X,\text{-},\text{-},\text{-}) \text{ AND } X \neq 0 \text{ AND micro}(Y,\text{-},\text{-},\text{-}) \text{ AND } Y \neq 0 \text{ THEN combination\_ok}(X,Y), && (3) \\
& \text{IF oven}(X,\text{-},\text{-},\text{white}) \text{ AND micro}(Y,\text{-},\text{-},\text{white}) \text{ AND } Y \neq 0 \text{ THEN colour\_ok}(X,Y), && (4) \\
& \text{IF oven}(X,\text{-},\text{-},\text{grey}) \text{ AND micro}(Y,\text{-},\text{-},\text{white}) \text{ AND } Y \neq 0 \text{ THEN colour\_ok}(X,Y), && (5) \\
& \text{IF oven}(X,\text{-},\text{-},\text{white}) \text{ AND micro}(Y,\text{-},\text{-},\text{black}) \text{ AND } Y \neq 0 \text{ THEN colour\_ok}(X,Y), && (6) \\
& \text{IF } \ldots \text{ THEN colour\_ok}(X,Y) \;\} && (7)
\end{aligned}
\]
\[
\begin{aligned}
G &= \{\, \text{"Select the hypothesis according to the textual order"} \,\} \\
S &= \{\, \text{"Select the rules according to the textual order"} \,\} \\
E &= \{\, \text{"IF the hypothesis\_to\_solve is a ground hypothesis THEN answer is FALSE} \\
&\qquad \text{ELSE answer is 'No solution' ENDIF"} \,\}
\end{aligned}
\]
The inference procedure
\[ IP(G,S,E) = \{ \]
- "Select rules from R according to the backward chaining strategy" (1)
- "IF hypothesis\_to\_solve is selected THEN look into the database for unification (2a) IF no success THEN select rule\_to\_use (2b) ENDIF ENDIF"
- "IF no rule\_to\_use can be found THEN backtrack" (3)
\[ \} \]
Note that the E-rule in this situation follows the Closed World Assumption.
The set of hypotheses to be answered is:
\[ H = \{ \text{cook_ok}(11,20), \text{colour_ok}(15,21) \} \]
The questions (the hypotheses formulated in H) are answered in the following way:
1. Goalselection
Result (according to goal-selectionfunction): \text{cook_ok}(11,20).
2. Metarule (2a) from IP(G,S,E).
Result: no success.
3. Metarule (2b) from IP(G,S,E)
(according to metarule 1 from IP(G,S,E), the KBS will look for a rule, where the head matches the hypothesis_to_solve).
Result (according to S): rule (1).
4. (Again, according to metarule 1 from IP(G,S,E))
Add subgoals to H.
Result (taken into account the goal-selection-function):
\[ H = \{ \text{combination\_ok}(11,20), \text{colour\_ok}(11,20), \text{cook\_ok}(11,20), \text{colour\_ok}(15,21) \} \]
5. Goalselection
Result: combination_ok(11,20).
6. Metarule (2a) from IP(G,S,E).
Result: no success.
7. Metarule (2b) from IP(G,S,E).
Result (analogous to steps 3 and 4, rule (2) is chosen from R):
\[ H = \{ \text{oven}(11,B,P,-), B = \text{philips}, P > 2000, \text{micro}(20,-,-,-), 20 = 0, \text{combination_ok}(11,20), \ldots \} \]
8. Goalselection
Result: oven(11,B,P,-).
9. Metarule (2a) from IP(G,S,E).
Result: oven(11,philips,1500,white).
(The subgoal is removed from H and all occurrences of B and P in H are substituted)
\[ H = \{ \text{philips} = \text{philips}, 1500 > 2000, \ldots \} \]
10. Goalselection
Result: philips = philips (TRUE).
(The subgoal is removed from H)
\[ H = \{ 1500 > 2000, \text{micro}(20,-,-,-), \text{combination_ok}(11,20), \ldots \} \]
11. Goalselection
Result: 1500 > 2000 (FALSE).
(This situation is analogous to the situation, that no rule_to_use can be found to unify this hypothesis. Therefore, according to metarule (3) from IP(G,S,E), backtracking takes place. All subgoals, which were added at step 7, will be removed)
\[ H = \{ \text{combination_ok}(11,20), \text{colour_ok}(11,20), \text{cook_ok}(11,20), \text{colour_ok}(15,21) \} \]
12. Metarule (2b) from IP(G,S,E)
Result (according to S): rule (3).
13. (analogous to step 4, subgoals are added to H).
Result:
\[ H = \{ \text{oven}(11,-,-,-), 11 \neq 0, \text{micro}(20,-,-,-), 20 \neq 0, \text{combination_ok}(11,20), \ldots \} \]
14. (Analogous to step 9, the subgoals which were added in the last step can all be removed from H, because they can be unified. The subgoal combination\_ok(11,20) has now been proven and can be removed from H and added to u.)
Result:
\[ H = \{ \text{colour_ok}(11,20), \text{cook_ok}(11,20), \text{colour_ok}(15,21) \} \]
\[ u = \{ \text{combination_ok}(11,20), \text{oven}(10,\text{philips},4000,\text{white}), \ldots \} \]
15. Goalselection
Result: colour_ok(11,20).
16. Metarule (2a) from IP(G,S,E)
Result: no success.
17. Metarule (2b) from IP(G,S,E)
Result (according to S): rule (4).
18. (Analogous to step 4, the subgoals will be added to H)
Result:
\[ H = \{ \text{oven}(11,-,-,\text{white}), \text{micro}(20,-,-,\text{white}), 20 \neq 0, \text{colour}\_\text{ok}(11,20) \ldots \} \]
19. Goal selection
Result: \text{oven}(11,-,-,\text{white}).
20. Metarule (2a) from IP(G,S,E)
Result: success
(The subgoal will be removed from \( H \))
\[ H = \{ \text{micro}(20,-,-,\text{white}), 20 \neq 0, \text{colour}\_\text{ok}(11,20) \ldots \} \]
21. (Analogous to steps 19 and 20 the two remaining subgoals which were added in step 18 can be proven. As a result the subgoal \text{colour}\_\text{ok}(11,20) has also been proven and will be removed from \( H \) and added to \( u \).)
Result:
\[ H = \{ \text{cook}\_\text{ok}(11,20), \text{colour}\_\text{ok}(15,21) \} \]
\[ u = \{ \text{colour}\_\text{ok}(11,20), \text{combination}\_\text{ok}(11,20), \text{oven}(10,\text{philips},4000,\text{white}) \ldots \} \]
22. (The hypothesis \text{cook}\_\text{ok}(11,20) has been proven and will be removed from \( H \) and added to \( u \))
Result:
\[ H = \{ \text{colour}\_\text{ok}(15,21) \} \]
\[ u = \{ \text{cook}\_\text{ok}(11,20), \text{colour}\_\text{ok}(11,20), \text{combination}\_\text{ok}(11,20), \text{oven}(10,\text{philips},4000,\text{white}) \ldots \} \]
23. Goal selection
Result: \text{colour}\_\text{ok}(15,21)
24. Metarule (2a) from IP(G,S,E)
Result: no success.
25. Metarule (2b) from IP(G,S,E)
Result: rule (4) has been selected from \( R \)
\[ H = \{ \text{oven}(15,-,-,\text{white}), \text{micro}(21,-,-,\text{white}), 21 \neq 0, \text{colour}\_\text{ok}(15,21) \} \]
26. Goal selection
Result: \text{oven}(15,-,-,\text{white})
27. Metarule (2a) from IP(G,S,E)
Result: no success.
28. Metarule (2b) from IP(G,S,E)
Result: no success.
29. Metarule (3) from IP(G,S,E)
Result: none
30. (Analogous to steps 25 to 29 also rules 5, 6 and 7 will be selected from \( R \). None of the rules leads to a proof for the hypothesis. The hypothesis must therefore be solved with the E-rule.)
Result (answer is FALSE. The hypothesis can be removed from \( H \)):
\[ H = \{ \} \]
Because \( H \) is now the empty set, the query is solved. The answer given by the KBS is "FALSE".
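For comparison, the reasoning process traced above can also be rendered compactly in executable form. The Python sketch below is our own approximation (rule (2), rules (6) and (7), and the explicit goal stack are omitted for brevity); it backward-chains over the kitchen knowledge base and answers failed ground hypotheses with FALSE, following the Closed World Assumption E-rule.

```python
# Hypothetical Python rendition of the appendix trace: backward chaining
# with a Closed World Assumption E-rule.  "-" plays the don't-care role.

DONT_CARE = "-"

u = {("oven", 10, "philips", 4000, "white"),
     ("oven", 11, "philips", 1500, "white"),
     ("oven", 12, "bauknecht", 1100, "white"),
     ("micro", 20, "philips", 500, "white"),
     ("micro", 21, "philips", 600, "grey")}

def lookup(name, args, facts):
    """Metarule (2a): try to unify a goal with a stored fact."""
    for fact in facts:
        if fact[0] == name and len(fact) - 1 == len(args) and \
           all(a in (DONT_CARE, f) for a, f in zip(args, fact[1:])):
            return True
    return False

def prove(name, x, y, facts):
    """Metarule (2b) plus the E-rule: try the rules, else answer FALSE."""
    if name == "colour_ok":              # rules (4) and (5)
        return any(lookup("oven", (x, DONT_CARE, DONT_CARE, oc), facts) and
                   lookup("micro", (y, DONT_CARE, DONT_CARE, mc), facts) and
                   y != 0
                   for oc, mc in [("white", "white"), ("grey", "white")])
    if name == "combination_ok":         # rule (3)
        return (lookup("oven", (x, DONT_CARE, DONT_CARE, DONT_CARE), facts) and
                lookup("micro", (y, DONT_CARE, DONT_CARE, DONT_CARE), facts) and
                x != 0 and y != 0)
    if name == "cook_ok":                # rule (1)
        return (prove("combination_ok", x, y, facts) and
                prove("colour_ok", x, y, facts))
    return False                         # Closed World Assumption

print(prove("cook_ok", 11, 20, u))       # True, as in steps 1-22 of the trace
print(prove("colour_ok", 15, 21, u))     # False, answered by the E-rule
```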
Rampart: Protecting Web Applications from CPU-Exhaustion Denial-of-Service Attacks
Wei Meng†, Chenxiong Qian‡, Shuang Hao*, Kevin Borgolte§
Giovanni Vigna§, Christopher Kruegel§, Wenke Lee‡
†Chinese University of Hong Kong, ‡Georgia Institute of Technology
*University of Texas at Dallas, §University of California, Santa Barbara
Abstract
Denial-of-Service (DoS) attacks pose a severe threat to the availability of web applications. Traditionally, attackers have employed botnets or amplification techniques to send a significant amount of requests to exhaust a target web server’s resources, and, consequently, prevent it from responding to legitimate requests. However, more recently, highly sophisticated DoS attacks have emerged, in which a single, carefully crafted request results in significant resource consumption and ties up a web application’s back-end components for a non-negligible amount of time. Unfortunately, these attacks require only few requests to overwhelm an application, which makes them difficult to detect by state-of-the-art detection systems.
In this paper, we present Rampart, which is a defense that protects web applications from sophisticated CPU-exhaustion DoS attacks. Rampart detects and stops sophisticated CPU-exhaustion DoS attacks using statistical methods and function-level program profiling. Furthermore, it synthesizes and deploys filters to block subsequent attacks, and it adaptively updates them to minimize any potentially negative impact on legitimate users.
We implemented Rampart as an extension to the PHP Zend engine. Rampart has negligible performance overhead and it can be deployed for any PHP application without having to modify the application’s source code. To evaluate Rampart’s effectiveness and efficiency, we demonstrate that it protects two of the most popular web applications, WordPress and Drupal, from real-world and synthetic CPU-exhaustion DoS attacks, and we also show that Rampart preserves web server performance with low false positive rate and low false negative rate.
1 Introduction
Denial-of-Service (DoS) attacks are a class of attacks that aim to deteriorate the target system’s availability and performance. They prevent the system from handling some or even all requests from legitimate users, by overwhelming its available resources, e.g., network bandwidth, disk space, memory, or CPU time. Consequently, users might experience long delays when interacting with the victim system, or they might be completely unable to access it. Availability and performance are essential to high-profile web servers, such as those operated by banks, news organizations, and governments, however, which are regular targets of DoS attacks [9, 21].
To degrade the performance of web servers, a common practice is to launch Distributed DoS attacks (DDoS) that flood the target system with numerous requests. Specifically, among other attacks, attackers might command thousands of computers (or more) to send attack traffic, or they might spoof the victim’s IP address to launch reflected attacks [29, 34]. Fortunately for defenders, these attacks incur comparatively high cost for the attackers (e.g., acquiring a large-size botnet to mount the attack) and they can often already be detected by state-of-the-art network-level defense mechanisms [23–25, 30, 31].
Unfortunately, sophisticated DoS attacks gained significant traction recently. In sophisticated attacks, attackers use low-bandwidth, highly targeted, and application-specific traffic to overwhelm a target system [8, 12, 14, 22]. Different from traditional DDoS attacks that rely on flooding a victim system with an extensive amount of traffic, sophisticated DoS attacks require less resources and utilize a lower volume of intensive requests to attack the victim system’s availability. Specifically, attackers target expensive or slow execution paths of the victim system. For example, an intensive attack might request the system to calculate computationally-expensive hashes for millions of times by specifying an unusually high iteration count for the bcrypt function. Particularly problematic is that sophisticated DoS attacks are difficult to detect by state-of-the-art defenses, such as source address filtering or traceback mechanisms, because they were designed to mitigate large-scale network-layer DDoS attacks [18, 23–25, 30, 31, 36, 37].
In this paper, we design and implement a defense mechanism, **Rampart**, to protect a web application’s back end from sophisticated DoS attacks. **Rampart** aims to mitigate attacks that overwhelm the available **CPU resources** (CPU time) of a web server through **low-rate application-layer attack traffic**, which we call **CPU-exhaustion DoS attacks**. Therefore, we design **Rampart** to accurately and efficiently detect and stop suspicious intensive attacks that may cause CPU exhaustion, and to be capable to block future attacks, without negatively affecting the application’s availability for legitimate users.
Developing such a defense is challenging. First, attack requests can blend in well with normal requests: Similar to requests sent by legitimate users, they also arrive at a low rate. Moreover, attack requests are generally well-formed, and, thus, do not cause the application to crash or throw an exception except for possibly resource exhaustion exceptions (e.g., a stack overflow exception). In turn, it is difficult to differentiate these two kinds of requests, i.e., it is non-trivial to block only attack requests without also incorrectly blocking legitimate requests. Since a legitimate request can be mistakenly labeled as suspicious, the defense system has to quickly detect and revoke any false positive filter that blocks legitimate requests, to not reduce the application’s availability unnecessarily.
To address these challenges, we leverage statistical methods and fine-grained context-sensitive program profiling, which allows us to accurately detect and attribute CPU-exhaustion DoS attacks. Specifically, **Rampart** actively monitors all requests to precisely model the resource usage of a web application at the function-level. It then dynamically builds and updates statistical execution models of each function by monitoring the runtime of the function called under different contexts. Upon arrival of a new request, the request is then constantly checked against the statistical models to detect suspicious deviation in execution time at runtime. **Rampart** lowers the priority of a request that it labeled as suspicious by aborting or temporarily suspending the application instance that is serving it, depending on the server’s load. To prevent pollution attacks against the statistical models, **Rampart** collects only profiling measurements of normal requests that do not cause a CPU-exhaustion DoS and that do not deviate much from the norm observed in the past. It also enforces a rate limit by network address.
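Rampart itself is implemented inside the PHP Zend engine; the following Python sketch is only our illustration of the statistical core described above: per-function, per-context execution-time models maintained online, with measurements that deviate from the learned norm flagged as suspicious and excluded from the model. The class, the sample-count cutoff, and the threshold value are our assumptions, not Rampart's actual parameters.

```python
# Simplified sketch (ours, not Rampart's code): online mean/variance per
# (function, context) via Welford's algorithm; flag calls whose CPU time
# deviates from the learned norm, and keep them out of the model.

import math

class ExecStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_suspicious(self, x, k=3.0):
        if self.n < 30:                  # too few samples to judge yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return x > self.mean + k * std

models = {}  # (function_name, call_context) -> ExecStats

def observe(func, context, cpu_time):
    """Return True when the call should be treated as a DoS suspect.
    Only normal-looking measurements update the model (pollution resistance)."""
    stats = models.setdefault((func, context), ExecStats())
    if stats.is_suspicious(cpu_time):
        return True
    stats.update(cpu_time)
    return False
```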
**Rampart** can deploy filters to prevent future suspicious requests from over-consuming the server’s CPU time. It employs an **exploratory** algorithm to tackle the problems of false positive requests and false positive filters. Specifically, when a true positive attack request is detected, a filtering rule is deployed to block similar suspicious requests, which might include legitimate requests (false positives). **Rampart** dynamically removes the deployed filter once the attack ends, to recover service for any legitimate users who might have been affected by the filter. Similarly, a false positive filter might be created if a legitimate request was incorrectly identified as suspicious. To not negatively impact an application’s availability for future legitimate requests, **Rampart** periodically evaluates (explores) all generated filter policies and deactivates false positive filters. In turn, this algorithm allows **Rampart** to rapidly and intelligently discover false positive rules, while simultaneously thwarting true attacks.
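A rough sketch of the exploratory idea (our reading of the description above; the names, probabilities, and thresholds are invented for illustration): every deployed filter occasionally lets a matching request through, and filters whose explored requests repeatedly turn out benign are deactivated.

```python
# Hypothetical sketch of exploratory filtering: block matching requests, but
# periodically "explore" some of them; retire filters that look benign.

import random

class Filter:
    def __init__(self, signature):
        self.signature = signature   # e.g. (endpoint, suspicious parameter)
        self.benign_explored = 0     # explored requests that behaved normally

def decide(request_signature, filters, explore_prob=0.01):
    for f in filters:
        if f.signature == request_signature:
            if random.random() < explore_prob:
                return "EXPLORE", f  # serve it, then re-check the verdict
            return "BLOCK", f
    return "SERVE", None

def after_explored_request(f, filters, was_attack, benign_threshold=10):
    if was_attack:
        f.benign_explored = 0        # filter confirmed, keep it deployed
        return
    f.benign_explored += 1
    if f.benign_explored >= benign_threshold:
        filters.remove(f)            # likely a false positive filter
```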
We design **Rampart** as a general defense against CPU-exhaustion DoS attacks. Importantly, to be protected by **Rampart**, it is not necessary to modify a web application or its source code in any way. To emphasize the practicality of **Rampart**, we implemented a prototype of **Rampart** for PHP, which remains the most popular server-side programming language today [5]. Moreover, we thoroughly evaluated our prototype implementation, and we find that it incurs negligible performance overhead of less than an additional 3 ms for processing a request, i.e., roughly 0.1% of the median website load times [33].
Finally, we demonstrate that **Rampart** can effectively preserve the availability and performance of real-world, non-trivial web applications when they are victim of CPU-exhaustion DoS attacks. We focus on two of the most popular open-source content management systems: Drupal and WordPress. For example, when launching known attacks without **Rampart**’s protection, then the average CPU usage increases from 32.21% to 95.05% for attacks on Drupal and from 42.21% to 94.14% for attacks on WordPress. However, if protected by **Rampart**, then the average CPU usage remains comparatively stable at no more than 39.62% for Drupal and 51.40% for WordPress. Last, we demonstrate **Rampart**’s ability to protect the two applications from unknown vulnerabilities.
We make the following technical contributions:
- We present **Rampart**, which is a defense that detects and mitigates sophisticated CPU-exhaustion DoS attacks against web applications by using statistical models and function-level program profiling.
- We implement **Rampart** as an extension for the PHP Zend engine. Our prototype has negligible performance overhead and it can be readily deployed for 83% of websites worldwide without requiring source code modifications.
- We develop algorithms to reduce the false positive rate when detecting attacks and to mitigate any negative impact of a false positive. In turn, **Rampart** has a low false positive rate of less than 1%.
- We thoroughly evaluate **Rampart** with both real-world and synthetic vulnerabilities in two popular web applications, and we demonstrate that it effectively mitigates the impact of low-rate CPU-exhaustion DoS attacks and preserves application availability and server performance.
2 Rampart
In this section, we discuss the design of Rampart, our defense mechanism to detect and mitigate sophisticated application-layer CPU-exhaustion DoS attacks (Section 2.1). Precisely, Rampart performs context-sensitive function-level profiling to learn precise execution models for each endpoint of an application (Section 2.2). Whenever the server is overwhelmed, the system terminates or suspends anomalous prolonged application instances that it suspects to be suffering from an attack (i.e., instances it suspects are attempting to serve an attack request), to reduce the server’s workload (Section 2.3). Rampart employs a probabilistic algorithm to limit the false positive rate when stopping attacks (Section 2.4) and it constructs filtering rules to adaptively block future attacks using an exploratory algorithm (Section 2.5). Finally, we discuss how to optimize the performance of Rampart (Section 2.6) and we detail our prototype implementation (Section 2.7).
2.1 Threat Model and Challenges
Threat Model. We consider a remote attacker that can send arbitrary HTTP(S) requests to a server serving a web application that is vulnerable to CPU-exhaustion DoS attacks. The attacker can exploit the vulnerability by sending carefully crafted requests that will consume a significant amount of the web server’s CPU time. Her goal is to occupy all available CPU resources (cores) by sending multiple requests in parallel at a low rate. Attack requests are well-formed, and, thus, they cannot be easily distinguished from legitimate requests through statistical features, such as the size, or the values of the payload. She can also send legitimate requests to hide her attack among legitimate traffic. She does not, however, send numerous attack requests within a very short time window, i.e., flooding the target server, because volumetric attacks with a high attack rate can be easily detected by complementary network-based defenses, and a low attack rate is already sufficient to overwhelm the web server. Therefore, remote attackers who flood the web server with numerous requests at a time are outside the scope of our threat model.
To detect and stop low-rate CPU-exhaustion DoS attacks efficiently, we have to address five core challenges:

Detection. Different from conventional DDoS attacks, low-rate application-layer DoS attacks are difficult to detect because they do not overwhelm a web server with a large number of concurrent requests. In turn, existing state-of-the-art network-layer defense mechanisms [18, 23–25, 30, 31, 36, 37] cannot detect these sophisticated DoS attacks.
Attribution. It is not straightforward to attribute an attack to its corresponding request(s). In fact, attribution is particularly difficult because attack requests exercise legitimate functionality of the web application and they do not crash it. Indeed, they do not even hijack the application's control flow.
Prevention. Developing a mitigation strategy that effectively stops the attacks while not negatively impacting the application's availability to normal users is not trivial. For example, simplistic URL-based request filtering techniques are ill-suited because attackers send requests to endpoints that normal users may also visit. Relying on hand-crafted features and payload values is similarly problematic because they do not scale across applications or attacks, and because real attack payloads can depend on other parameters and may even vary per user or over time for some (unknown) vulnerabilities [1].
False Positives. Naturally, any defense mechanism relying on statistical properties may have false positives, i.e., legitimate requests that are blocked by a filter, or requests that might incorrectly be identified as attack requests, and, hence, might cause a false positive filter to be deployed. Considering the nature of low-rate application-layer DoS attacks, minimizing the false positive rate and the impact of false positive filters is a major challenge.
Performance. Lastly, our defense mechanism must not introduce significant performance overhead to the protected application. In particular, users must not notice any performance degradation when the application is running at normal load.
2.2 Web Application CPU Usage Modeling
Rampart monitors and learns profiles (models) of a web application to establish the resources it normally requires. We use the models as a reference to detect suspicious requests (Section 2.3). Web applications commonly provide multiple endpoints for interaction. Users can request each of those endpoints under different contexts (e.g., anonymous or authenticated), and each requires different and diverse processing resources. Therefore, a profile at the application level or request level is not suitable to differentiate attack requests from normal requests.
To precisely model the resource usage of a web application in different states, Rampart employs context-sensitive function-level program profiling. Specifically, Rampart records the CPU time spent in a function (including time spent by the operating system's kernel on behalf of the function) instead of its wall-clock time, because an application instance can be interrupted and rescheduled by the operating system before the function returns. Rampart associates the measured execution time with a unique ID, representing the application's current execution state. The ID is obtained from the calling context of the function and its name. In particular, we encode the execution state (ID) by calculating the hash value of the application's past states and the name of the function being invoked. We compute the state when a function \( c \) is invoked by its parent function \( p \) as follows:
\[
\text{state}(c) = \text{hash}(\text{state}(p), c).
\]
As a result, the ID of a function frame depends on all of its parent callers. To keep track of previous application states, \textsc{Rampart} maintains a shadow call stack, where each function frame stores the application state when it is called. We push a covering main function to the bottom of the call stack to measure the total CPU time spent in an endpoint. We employ the name of an endpoint (e.g., /login) as the initial state to differentiate functions with the same name (e.g., main) for different endpoints.
When calculating the ID, we do not consider sibling functions, because a varying number of sibling functions may have returned, and they represent a similar state in the program. In addition, executed sibling functions may not necessarily influence the execution of pending functions. For example, suppose that a parent function \( p \) calls a child function \( s \) a random number of times in a loop at runtime, before calling another child function \( c \). If we considered the previous sibling function \( s \), we might have to maintain hundreds or thousands of records for different instances of it, even though they consume very similar amounts of resources. Moreover, we would obtain different IDs for \( c \) on each run of the program. Similarly, we do not use argument values to encode the state of a function frame because they can also be dynamic.
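To make the state encoding concrete, the following is a minimal Python sketch of a shadow call stack with hashed state IDs. The actual prototype is a C extension inside the Zend engine; the choice of SHA-1, the separator byte, and the names below are illustrative assumptions, not the prototype's implementation.

```python
import hashlib

class ShadowStack:
    """Shadow call stack with hashed, context-sensitive state IDs."""

    def __init__(self, endpoint: str):
        # The endpoint name (e.g., "/login") seeds the initial state, so the
        # covering "main" frame differs across endpoints.
        self.stack = [self._hash(endpoint.encode(), b"main")]

    @staticmethod
    def _hash(parent_state: bytes, func_name: bytes) -> bytes:
        return hashlib.sha1(parent_state + b"\x00" + func_name).digest()

    def enter(self, func_name: str) -> bytes:
        """Push a frame when func_name is invoked and return its state ID:
        hash(state(parent), name). Siblings and arguments are ignored."""
        state = self._hash(self.stack[-1], func_name.encode())
        self.stack.append(state)
        return state

    def leave(self) -> None:
        """Pop the frame when the function returns."""
        self.stack.pop()

# The same function g() gets different IDs under different parents:
s = ShadowStack("/login")
s.enter("f")
id_g_under_f = s.enter("g")
s.leave(); s.leave()
s.enter("h")
id_g_under_h = s.enter("g")
assert id_g_under_f != id_g_under_h
```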
2.3 CPU-Exhaustion DoS Attack Detection
A straw-man approach to detect CPU-exhaustion DoS attacks is to set a global timeout in the web application because a key characteristic of such attacks is that their requests take considerable time and consume numerous CPU cycles of the victim server. However, legitimate requests can also time out and could be mistakenly identified as attack attempts. For example, a user may upload a large file that could take a long time to transfer or process.
Instead of such a straw-man approach, \textsc{Rampart} monitors the CPU usage of a web server to detect CPU-exhaustion DoS attacks, which works because attackers want to occupy as many CPU cores as possible so that the victim server becomes less responsive. Compared with a (global) timeout, abnormally high CPU usage is a more accurate indicator. \textsc{Rampart} continuously monitors the CPU usage of the server at a fixed interval \( T \), and computes the average CPU usage \( r_S \) over the last \( S \) observations, where \( S \) is a parameter that a system administrator configures to control the detection sensitivity. If \( r_S \) is greater than a pre-defined threshold \( R_{CPU} \) (e.g., 90\%), \textsc{Rampart} raises an alarm, indicating that the server is overloaded and likely the victim of a CPU-exhaustion DoS attack.
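A minimal sketch of this overload detector, assuming CPU usage samples arrive as fractions in [0, 1]; the default values for \( S \) and \( R_{CPU} \) here are illustrative:

```python
from collections import deque

class CpuMonitor:
    """Moving-average overload detector (Section 2.3)."""

    def __init__(self, s: int = 8, r_cpu: float = 0.90):
        self.window = deque(maxlen=s)  # last S observations
        self.r_cpu = r_cpu             # alarm threshold R_CPU

    def observe(self, usage: float) -> bool:
        """Record one CPU usage sample in [0, 1], taken every T seconds;
        return True (raise an alarm) if the average r_S exceeds R_CPU."""
        self.window.append(usage)
        r_s = sum(self.window) / len(self.window)
        return r_s > self.r_cpu
```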
Intuitively, the requests that consumed the most CPU time could be identified as the culprits that caused the CPU exhaustion. However, this can quickly lead to misclassifications. Consider an upload example similar to before, i.e., a few users are uploading large files while a real attack is being launched. If the upload requests consumed slightly more CPU time than the attack requests, then these legitimate requests would be incorrectly identified as the responsible requests (false positives) and the real attack requests would evade detection (false negatives), although the upload requests might always take this long to process.
Instead, \textsc{Rampart} leverages the function execution models it learned (Section 2.2) to detect suspicious requests that are statistically different from the historical profile. \textsc{Rampart} periodically (e.g., every 250 ms) checks the CPU time spent in functions that have not returned yet, then it compares the time with the corresponding records in the profiling database, and, finally, it identifies one request as suspicious using the following method:
Let \( T_{min} \) and \( T_{max} \) be the minimum and maximum timeout thresholds. \( T_C \) is the CPU time of a function \( f \) in the stack; \( \mu \) and \( \sigma \) are the mean and standard deviation of \( T_C \) for the ID \( \text{state}(f) \) in the database; \( k \) is a parameter that represents the distance from the mean. We rely on the Chebyshev inequality (Equation 1) to estimate how likely one observation differs from the mean without assuming any underlying distribution. In particular, the probability of a random variable \( X \) being \( k \) standard deviations away from the mean is no more than \( 1/k^2 \).
\[
P(|X - \mu| > k\sigma) \leq \frac{1}{k^2} \quad (1)
\]
\[
T_C > \min(\max(\mu + k \times \sigma, T_{min}), T_{max}) \quad (2)
\]
Thus, \textsc{Rampart} labels a request as suspicious if \( T_C \) of function \( f \) is more than \( k\sigma \) away from the mean (Equation 2). \textsc{Rampart} can then terminate the application instances that serve such prolonged suspicious requests to release the occupied resources only when the web server is overloaded. Otherwise, it repeats the same process until all functions have returned. The minimum threshold \( T_{min} \) prevents \textsc{Rampart} from reporting a request as suspicious if a deeper function with very short execution time (e.g., hundreds of microseconds) times out.
The above method effectively detects suspicious requests whose required CPU time deviates significantly from what \textsc{Rampart} observed previously. When serving attack requests, \( T_C \) will be significantly higher for some frames in the call stack than for legitimate requests. On the contrary, when serving the file-uploading requests, if \( T_C \) is close to the mean for all functions, then these requests will not be marked as suspicious (the requests always take this long to process). If they are not close to the means, however, then RAMPART aborts these requests if the server is overwhelmed, because they are indistinguishable from attack requests.
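The per-frame check of Equation 2 can be sketched as follows; the parameter values below are illustrative, not the ones used in the evaluation:

```python
def is_suspicious(t_c: float, mu: float, sigma: float,
                  k: float = 3.0, t_min: float = 0.05,
                  t_max: float = 10.0) -> bool:
    """Equation 2: flag a pending function frame whose CPU time t_c exceeds
    mu + k*sigma, clamped to [t_min, t_max]. By Chebyshev's inequality
    (Equation 1), at most 1/k^2 of legitimate observations exceed the
    un-clamped threshold. All times are in seconds."""
    threshold = min(max(mu + k * sigma, t_min), t_max)
    return t_c > threshold
```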
A limitation of RAMPART is that it requires at least one observation of a function call before it can rely on the function to determine if a request is suspicious. In practice, this training phase can be completed automatically by using a fuzzer, a crawler that traverses the web application, or an existing test harness. In fact, developers can easily collect training data when testing their applications before deploying them to production. To reduce detection variance, we recommend letting RAMPART make at least N observations (e.g., we use N = 5, Section 4) for each endpoint. Although RAMPART might not have collected execution profiles for all states (function calls) of a web application, it knows the execution profile of each endpoint and can start detecting attack requests.
Another limitation is that an attacker could pollute the profiling records of an application state she selects by gradually increasing the CPU time. We make such pollution harder by randomly sampling the requests whose measurements are written into the profiling database. Additionally, we restrict the number of samples that can be selected from a single network address or network prefix each day. To further increase the difficulty for an attacker to pollute or drift profiling records, one can consider strategies that assign higher importance (weight) to older measurement records.
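A sketch of these anti-pollution measures; the sampling probability and per-address cap below are assumed values, as the concrete parameters are not specified here:

```python
import random
from collections import Counter

class ProfileSampler:
    """Decide whether a finished request's measurements may enter the
    profiling database."""

    def __init__(self, sample_prob: float = 0.1, per_addr_cap: int = 50):
        self.sample_prob = sample_prob      # random sampling rate (assumed)
        self.per_addr_cap = per_addr_cap    # per-address daily cap (assumed)
        self.daily_counts = Counter()       # reset once per day (not shown)

    def should_record(self, client_addr: str, was_suspicious: bool) -> bool:
        if was_suspicious:
            return False   # never learn from requests that deviated or caused a DoS
        if self.daily_counts[client_addr] >= self.per_addr_cap:
            return False   # rate limit per network address/prefix
        if random.random() >= self.sample_prob:
            return False   # random sampling makes gradual pollution harder
        self.daily_counts[client_addr] += 1
        return True
```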
2.4 Probabilistic Request Termination
RAMPART marks a request as suspicious when a function consumes significantly more CPU time than it normally does. It stops serving such suspicious requests when the server is overloaded, due to a real attack or a surge in visitor traffic. While this approach stops real attacks, it can also negatively impact normal users. For example, a user may make requests that RAMPART falsely detects as an attack because they take slightly more time than the threshold that RAMPART calculated (Equation 2). Such requests, together with real attack requests, would then be terminated by RAMPART until the CPU usage is reduced below \(R_{CPU}\).
To reduce the impact of false positives, RAMPART can rely on a probabilistic algorithm to determine if a suspicious request should be dropped. The observation is that suspicious user requests usually do not consume as much CPU time as attack requests. Instead of aborting all suspicious requests immediately, RAMPART can be lenient initially and allow some requests to require slightly more time at a lower priority. Periodically, RAMPART then checks whether these requests have timed out and becomes stricter as the execution time of a timed-out function increases. In other words, a suspicious request that is fast is likely to be completely processed before it would be killed. On the contrary, a slow suspicious request is probably an attack (a true positive) and will be aborted eventually.
We also consider the server workload when determining the probability to abort a suspicious request. Specifically, the probability increases with the average CPU usage so that less CPU time is allocated to slow suspicious requests. RAMPART suspends the allowed suspicious requests temporarily to free CPU time for other requests, i.e., allowed suspicious requests have lower priority.
RAMPART's algorithm to decide whether a request should be aborted or suspended is shown in Algorithm 1. The Init procedure is executed at a function timeout event. \(\hat{R}_{CPU}\) is the (upper) CPU usage threshold. \(\sigma\) is the standard deviation of the CPU time of the function frame. \(T_0\) is the minimum interval at which RAMPART periodically evaluates whether the suspicious request should be suspended or aborted. A CPU timer that expires at every interval \(i\) is set in line 6. The number of timeouts for a timer is \(c\). \(\omega\) and \(\beta\) are the weights of the counter and the CPU usage.
```
Algorithm 1: Probabilistic termination of a suspicious request.

 1: procedure Init
 2:     c ← 0, ω ← 1, β ← 1
 3:     T₀ ← 10 ms, s ← 5 ms, R̂_CPU ← 75%
 4:     σ ← StdDev()
 5:     i ← Max(T₀, σ)
 6:     Timer(Check, i)
 7: procedure Check
 8:     c ← c + 1
 9:     r ← UsageCPU()
10:     if r > R̂_CPU then
11:         p ← c × ω + r × β
12:         if Random(0, 100) ≤ p then
13:             AbortRequest()
14:         else
15:             SuspendRequest(s)
```
The Check procedure is called after Init and whenever the evaluation timer expires. If the web server's average CPU usage \(r\) is greater than \(\hat{R}_{CPU}\), then we calculate the probability \(p\) (in percent) and abort the request probabilistically, i.e., if \(p\) is larger than a random value (line 12). Otherwise, the request is suspended. In either case, the web server can serve other normal requests first.
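For readers who prefer runnable code over pseudocode, a minimal Python rendering of the Check step; the string return values and the percent convention for \(r\) are our own choices:

```python
import random

def check(c: int, r: float, omega: float = 1.0, beta: float = 1.0,
          r_cpu_hat: float = 75.0) -> str:
    """One expiration of the evaluation timer (Algorithm 1, Check).
    c: number of timer expirations so far for this request;
    r: average server CPU usage in percent."""
    if r <= r_cpu_hat:
        return "continue"     # server not overloaded; keep the request running
    p = c * omega + r * beta  # abort probability in percent; grows with c and r
    if random.uniform(0, 100) <= p:
        return "abort"        # kill the application instance serving the request
    return "suspend"          # pause briefly (lower priority), re-check later
```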
2.5 CPU-Exhaustion DoS Attack Blocking
**Rampart** as described so far can detect and stop CPU-exhaustion DoS attacks, but it does not yet prevent such attacks from affecting the victim server: **Rampart** lets an attack request be served until it has consumed a significant amount of CPU time. For example, we demonstrate in Section 4.1.1 that attackers can still occupy the web server's CPU and cause a CPU-exhaustion DoS by continuously sending such requests. Thus, **Rampart** needs to block follow-up attack requests to further mitigate CPU-exhaustion DoS attacks.
We face two challenges in designing a prevention strategy. First, it is difficult to extract features to properly distinguish attack requests from legitimate requests. According to our threat model (Section 2.1), the two kinds of requests can be very similar. The only reliable information **Rampart** has learned about an attacker is the network address (which can be spoofed) and the endpoints that are used to exploit the vulnerability. Therefore, **Rampart** builds filtering policies using the source IP (network) address, the requested URI, and the request parameters (e.g., the query string and post data, i.e., keys and values of PHP’s `GET` and `POST` arrays) of an attack request. **Rampart** then immediately rejects a follow-up request matching any filter without further processing it.
An attacker cannot evade a filter by supplying decoy parameters because each parameter is matched independently. She can, however, try to evade filters using spoofed IP addresses. Yet, IP address spoofing is an orthogonal problem because:
1. **Rampart** is a host-based defense system;
2. IP address spoofing is commonly used in reflected DDoS attacks, which are out of scope of our work;
3. Defenses exist against network-based attacks (e.g., ingress filtering, unicast reverse path forwarding) [17].
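A sketch of how such a filter could be matched against an incoming request; the field names are assumptions, but the key property, that each recorded parameter is matched independently so decoy parameters cannot prevent a match, follows the description above:

```python
def matches(rule: dict, request: dict) -> bool:
    """Return True if an incoming request matches a filter built from a
    detected attack request's source address, URI, and parameters."""
    if request["src_ip"] != rule["src_ip"] or request["uri"] != rule["uri"]:
        return False
    # Each recorded parameter is matched independently: extra (decoy)
    # parameters in the request cannot prevent a match.
    return all(request["params"].get(key) == value
               for key, value in rule["params"].items())
```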
Second, a filter should be deployed neither perpetually nor ephemerally. False positives cannot be completely eliminated due to randomness in web applications. On the one hand, a user could be blocked forever by a persistent filter, unless she switches to a different IP address not used by an attacker. On the other hand, if the lifespan of a filter is too short, then an attacker can simply wait and launch another round of attacks.
To address the above challenge, we design an exploratory algorithm to adaptively adjust the lifespan of a filter, instead of setting a fixed lifespan. Specifically, each filter is assigned with a primary lifespan when it is first created. A matching request is immediately dropped during the filter’s primary lifespan. The filter transitions into an inactive state with a secondary lifespan when its primary lifespan expires. During the secondary lifespan, **Rampart** lets the application serve one matched request at a time to explore the result of removing the filter. **Rampart** aborts this request if a CPU-exhaustion DoS attack attempt is detected, and it renews the filter with a longer primary lifespan to penalize the attacker. Otherwise, the filter is removed because it might have been created as a false positive or the attacks have stopped.
We present the exploratory algorithm in Algorithm 2. The Init-Rule procedure is invoked when a filtering rule is first created. \( T_p \) and \( T_s \) are the rule's default primary and secondary lifespans (in seconds), which are set by the server's administrator. The primary lifespan expires at time \( t_{\text{expiry}} \). \( \hat{R}_{CPU} \) and \( \check{R}_{CPU} \) are the upper and lower CPU usage thresholds. Together with the parameters \( \alpha \) and \( \beta \), they control whether **Rampart** should explore a matched request (lines 13-16). exploring represents **Rampart**'s exploration state and is initialized to false.
**Rampart** calls the Check-Rule procedure when a new request arrives. **Rampart** drops all incoming requests (line 10) that match the rule (line 8) if it is still active (line 9). After the rule transitions into the inactive state (line 11), **Rampart** may start an exploration if none is active (line 12). Other matching requests received during exploration are dropped (line 22). **Rampart** decides whether it should explore a request (lines 12-15) with a probability depending on the current average server CPU usage \( r \) and the parameters \( \hat{R}_{CPU} \), \( \check{R}_{CPU} \), \( \alpha \), and \( \beta \) (lines 5-6). During exploration (lines 16-20), the request is aborted immediately if it is detected as suspicious (line 17). The counter \( c \) is incremented by one to set a larger new primary lifespan (lines 18-19). The rule is deleted if the secondary lifespan has expired (line 24).
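Since Algorithm 2 itself is not reproduced here, the following Python sketch follows its prose description; the doubling of the primary lifespan on renewal and the omission of the CPU-usage-dependent exploration probability (\( \alpha \), \( \beta \)) are simplifying assumptions:

```python
import time

class Filter:
    """Sketch of a filtering rule's exploratory lifecycle."""

    def __init__(self, t_p: float = 10.0, t_s: float = 30.0):
        self.t_p, self.t_s = t_p, t_s      # primary/secondary lifespans (s)
        self.created = time.monotonic()
        self.exploring = False

    def handle(self, serve_and_watch) -> str:
        """Process one matching request; serve_and_watch() serves the
        request and returns True if it was detected as suspicious."""
        now = time.monotonic()
        if now < self.created + self.t_p:
            return "drop"                  # active: primary lifespan
        if now > self.created + self.t_p + self.t_s:
            return "delete"                # secondary lifespan expired
        if self.exploring:
            return "drop"                  # only one exploration at a time
        self.exploring = True              # inactive: explore this request
        suspicious = serve_and_watch()
        self.exploring = False
        if suspicious:
            self.t_p *= 2                  # penalty: longer primary lifespan
            self.created = time.monotonic()
            return "renewed"
        return "delete"                    # false positive, or attack ended
```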
This algorithm bounds the rate at which one attacker can cause a CPU-exhaustion DoS on a web server with a unique combination of the fields in a filter. In particular, in any \( T_p + T_s \) window, an attacker can cause at most two attacks, which **Rampart** immediately detects and stops. She cannot evade detection by sending benign requests to hide attacks, because the rule is not destroyed unless the attacker sends only one attack request in a \( T_p + T_s \) window. She is further penalized with a growing primary lifespan for sending an attack request during the filter's secondary lifespan. Therefore, an optimal attacker can cause only one successful attack in every \( T_p + T_s \) interval (other attacks are quickly stopped).
In turn, our algorithm allows **Rampart** to recover the service's availability for a false positive user as soon as the server has sufficient resources. **Rampart** is unlikely to detect a false positive user request it explores as suspicious again, because the server load is expected to be lower than the upper CPU usage threshold that is used to detect attacks. Otherwise, requests by a user that led to a false positive for one endpoint would temporarily be refused while the server is overloaded and assigns the suspicious requests a lower priority. The user can still access other parts of the application as long as they do not depend on the blocked one.
2.6 Performance Optimizations
Rampart is an in-line dynamic analysis system and, hence, may incur significant performance overhead. Next, we discuss how we optimized its performance.
First, Rampart needs to make two system calls to measure the CPU time of a function call: one before the actual function call and one after it. Here, the system call overhead can be orders of magnitude larger than the raw execution time when profiling some built-in functions, e.g., arithmetic functions. Therefore, we want to avoid unnecessary system calls while profiling applications at a fine granularity. One might consider the unprivileged RDTSC(P) instruction of x86 processors to query the Time Stamp Counter (TSC) efficiently. Unfortunately, the TSC is a global counter shared among all processes running on the same processor, including unrelated processes, which is why we cannot use it as a per-process CPU counter. Instead, we disable profiling for built-in functions, as they take almost constant or negligible time. The execution time of some built-in functions, e.g., string manipulation, however, strictly depends on their input, and we need to take them into account. Fortunately, their execution time is included when Rampart profiles their parent functions, and, thus, we do not need to measure them separately.
We also introduce a parameter Max_Prof_Depth to control the overall profiling granularity. It specifies the maximum number of function frames that Rampart profiles. If Max_Prof_Depth is set to 1, then only the covering main function is profiled. If Max_Prof_Depth is large, more functions are profiled, which may be inefficient as the measured CPU time is inclusive. Practically, Rampart still blocks CPU-exhaustion DoS effectively with low overhead when trading some profiling precision for performance (Section 3 and Section 4).
Second, some overhead may be the result of input and output operations on past measurements. To improve write performance, Rampart writes measurements in batch after each request has been completely processed. To further mitigate contention, Rampart offloads database operations to a dedicated daemon that regularly processes the measurement data.
Rampart also sets a wall clock timer to periodically query for historical profiling records of function frames that have not yet returned. To improve performance here, Rampart can clear the timer after the first query to avoid interrupts because it knows when the request will be marked as suspicious. Thus, Rampart can wait until then or until the request was processed, whichever comes first.
Finally, Rampart can optionally sample one measurement every $X$ requests, and, in turn, avoid the system calls to write out measurements for the other $X - 1$ requests. The first set of system calls remains required to measure the elapsed CPU time in case of an attack. Sampling also helps to defend against pollution attacks (Section 2.3).
2.7 Implementation
We implemented a prototype of Rampart as an extension to the PHP Zend engine in roughly 2,000 lines of C code. The Rampart PHP extension is loaded into each PHP process and thread for function profiling and to monitor CPU usage. We use the getrusage function provided by Linux to measure the CPU time a function spends in both user code and system calls. The daemon for processing the profiling results is implemented in 400 lines of Python code. We implemented Rampart for PHP because it remains the most popular server-side programming language today with a market share of 83% [5]. Rampart is language-agnostic, and it can be implemented for other server-side programming languages as it does not rely on any language-specific features.
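For illustration, the same measurement can be sketched in Python via the resource module, which wraps the getrusage system call; RUSAGE_THREAD is Linux-specific, and the helper names are our own:

```python
import resource

def cpu_time() -> float:
    """User + system CPU seconds of the calling thread via getrusage."""
    ru = resource.getrusage(resource.RUSAGE_THREAD)
    return ru.ru_utime + ru.ru_stime

def profile_call(func, *args):
    """Measure one call the way a function frame is profiled: one
    getrusage call before the call and one after (two system calls)."""
    start = cpu_time()
    result = func(*args)
    return result, cpu_time() - start

_, elapsed = profile_call(sum, range(1_000_000))
```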
3 Performance Evaluation
Rampart is an in-line defense and therefore introduces some performance overhead during normal execution, which we evaluate in this section. We also investigate the performance degradation when a web application is the victim of a CPU-exhaustion DoS attack. For our evaluation, we protect two open-source web applications: Drupal 7.13 and WordPress 3.9.0. We evaluate Rampart on these specific applications and versions because of their popularity and because they contain known real-world CPU-exhaustion DoS vulnerabilities. Following, we first describe our experiment settings and the baseline performance of the two applications (Section 3.1), then we evaluate the performance overhead introduced by Rampart (Section 3.2), and last, we look at the performance degradation caused by sophisticated DoS attacks with and without Rampart (Section 3.3).
3.1 Setup and Baseline Performance
For our experiments, we use two machines, one being the web server and one being the client. Both machines run Debian Stretch (Linux kernel 4.9.0). The web server runs Apache 2.4.25 with PHP 7.0.19-1 on an Intel Xeon X3450 quad-core CPU with 2.67 GHz and 16 GB RAM. The client has an Intel Xeon W3565 quad-core CPU with 3.2 GHz and 16 GB RAM. Both machines are on the same local area network (LAN) to eliminate any randomness that might result from sending requests over the Internet.
We created 256 user accounts after a fresh installation of each application, and we saved the application database to disk so that we can recover the state for reproducibility. Afterward, we used some accounts to interact with the two applications. We used OWASP Zed Attack Proxy (ZAP) as a network proxy to capture the interactions between the clients (users) and the applications. We also crawled all the endpoints of each web application with ZAP's spider program, and we stored the corresponding requests for replay. We then removed requests for static files (e.g., JavaScript, Cascading Style Sheets, etc.) and merged the remaining requests (generated by humans and the spider program) into the user trace for each application. Based on this user trace, we developed a traffic generator that can replay the trace's requests sequentially. It mimics multiple parallel users (replaying multiple interactions in parallel), each of whom is assigned one user account.
To evaluate overall server performance, we measure the performance of each web application under various traffic loads (numbers of users). After each round of experiments, we reset the application to its initial state. We repeated each experiment five times to report average performance metrics ($N = 5$). Importantly, the traffic generator sends two consecutive requests with a 0.1 s pause in-between to simulate a large number of concurrent connections. In practice, however, the interval between consecutive requests sent by a legitimate user is much larger. For each request, we record the timestamps when it was sent ($T_{\text{start}}$) and when the corresponding response was received ($T_{\text{end}}$), and we compute the request processing time ($RPT = T_{\text{end}} - T_{\text{start}}$). Throughout the experiments, we also monitor the server's CPU usage.
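A sketch of the replay loop, using the third-party requests library for illustration only; the trace entry fields are assumptions:

```python
import time
import requests  # third-party HTTP client, for illustration only

def replay(trace: list, pause: float = 0.1) -> float:
    """Replay a recorded trace sequentially with a 0.1 s pause between
    consecutive requests and return the ARPT over the trace."""
    rpts = []
    for entry in trace:  # e.g., {"method": "GET", "url": ..., "data": ...}
        t_start = time.monotonic()
        requests.request(entry["method"], entry["url"],
                         data=entry.get("data"))
        rpts.append(time.monotonic() - t_start)  # RPT = T_end - T_start
        time.sleep(pause)
    return sum(rpts) / len(rpts)
```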
The baseline performance of the server running the two applications is shown in Table 1. Naturally, the average server CPU usage increases as the traffic load increases. With modest loads of no more than 32 user instances, the average RPT (ARPT) of WordPress did not vary much. However, both applications exhibited significant performance degradation in their ARPT once load became heavier (64 user instances and higher). For a fair evaluation, we use 32 user sessions in the remaining experiments.
3.2 Performance Overhead
Based on the same parameters, we measure the overhead that our prototype implementation may incur. We report ARPT and average CPU usage in Table 2 for various values of Max_Prof_Depth, which is Rampart’s parameter to control how many function frames are profiled. Unsurprisingly, if more function frames are profiled (higher Max_Prof_Depth), then performance degrades more. Specifically, for Drupal, the parameter does not negatively affect the ARPT, but its increase correlates with higher CPU usage. For WordPress, the server performance remains close to its baseline performance (Table 1) while Max_Prof_Depth was less than five, but performance degrades when more function frames are profiled.
To investigate how Max_Prof_Depth might influence server performance, we recorded the number of profiled function frames and the time spent processing the measurement results by our analysis daemon. For each analysis iteration, our single-threaded analysis daemon sampled up to 100 measurement files because it could not process all files in real time if Max_Prof_Depth was greater than nine. The time to process 100 measurements, the average number of unique profiled function frames, and the average number of profiled function frames are shown in Table 2. The daemon's performance decreases and it can handle fewer files per second as more functions are profiled, which is the case because Rampart generates more measurement data per received request that the daemon must analyze.
We find that Max_Prof_Depth $= 5$ results in reasonable performance for both applications. For Drupal, Rampart's CPU overhead is 3.31% and we do not observe any overhead in Drupal's request processing time. For WordPress, the CPU overhead is 5.65% and Rampart introduces an additional 0.2 ms (0.83%) to the request processing time on average. Overall, WordPress incurs slightly higher overhead than Drupal because more functions are profiled (Table 2).
Finally, we investigate the RPT of Drupal with 32 concurrent user instances and with Rampart enabled (Figure 1). The bottom of the figure shows the 5th percentile, mean, and 95th percentile of the RPTs for requests sent in each one-second interval. The x-axis is the time elapsed since the start of the experiment and the y-axis is the RPT. The number of in-flight requests (RIF) in each one-second window is shown as a green solid line, and the average server CPU usage is shown as a blue dashed line in the top figure. Evidently, CPU usage remains modest throughout the experiment. Next, we show how only a few attack requests can quickly exhaust the CPU (Section 3.3), and how Rampart preserves server performance (Section 4).
3.3 DoS Attack Performance Degradation
We measure the performance degradation of the server when a CPU-exhaustion DoS attack is launched against a web application. Specifically, we evaluate two kinds of attacks for both web applications: XML-RPC for both Drupal and WordPress (CVE-2014-5266 [4]), and PHPass for Drupal (CVE-2014-9016 [2]) and WordPress (CVE-2014-9034 [3]). The XML-RPC attacks allow remote attackers to cause a CPU-exhaustion DoS by sending a large XML document containing a significant number of elements. The PHPass attacks allow remote attackers to cause a CPU-exhaustion DoS by supplying a long password that is improperly handled by the password hashing functions. We also evaluated several other CVEs (e.g., CVE-2012-1588, CVE-2013-2173, and CVE-2014-5019), which can similarly cause a CPU-exhaustion DoS, but which we omit due to space limitations.
We use our traffic generator to send attack traffic from the client machine to the server. Each generated attack payload takes Drupal and WordPress between 10 and 30 seconds to process. We then launch multiple attackers concurrently via our traffic generator. For each attacker session, the generator sends two consecutive requests with a five-second break in-between. Assuming that the RPT for an attack request is 25 seconds, the attack traffic rate with 30 attacker sessions is one attack request per second. This rate is significantly lower than that of a typical DDoS attack (tens of thousands of requests per second or more). Indeed, such sophisticated application-layer DoS attacks require significantly fewer resources to be successful.
In our experiments, we configure the user traffic generator to run 32 user sessions (Section 3.2), and the attack traffic generator to operate 8 or 16 attacker sessions. We launch the attack traffic generator five seconds after we started the user traffic generator. As in our baseline performance experiments, we repeat each experiment five times to measure the average performance metrics, i.e., the server’s CPU usage, the number of in-flight requests each second (RIF), and the request processing time (RPT) of user sessions and attacker sessions. RAMPART is disabled for all of these experiments.
| Application | Benchmark | 1 | 3 | 5 | 7 | 9 | 11 | 13 |
|---|---|---|---|---|---|---|---|---|
| Drupal | ARPT (ms) | 397.6 | 389.0 | 400.9 | 393.0 | 413.6 | 412.6 | 410.9 |
| | CPU (%) | 34.53 | 34.80 | 35.62 | 36.32 | 38.52 | 40.94 | 44.20 |
| | Number of Unique Functions | 12 | 76 | 567 | 1,421 | 2,473 | 4,019 | 5,405 |
| | Number of Functions | 341 | 2,167 | 12,677 | 31,152 | 53,263 | 80,186 | 110,606 |
| | Processing Time (ms) | 11.3 | 29.5 | 142.5 | 321.8 | 543.7 | 886.7 | 1,147.1 |
| WordPress | ARPT (ms) | 23.7 | 23.7 | 23.5 | 24.6 | 29.1 | 36.4 | 41.6 |
| | CPU (%) | 44.25 | 43.12 | 49.08 | 56.56 | 61.60 | 69.37 | 68.41 |
| | Number of Unique Functions | 17 | 199 | 846 | 3,186 | 7,909 | 13,337 | 17,410 |
| | Number of Functions | 422 | 4,479 | 15,314 | 42,957 | 89,080 | 136,910 | 170,904 |
| | Processing Time (ms) | 11.4 | 46.1 | 169.1 | 572.8 | 1,470.2 | 2,653.7 | 3,529.0 |
Table 2: Web server performance and daemon statistics for Rampart with 32 users for different Max_Prof_Depth values.
Figure 2: CPU usage and RPT over time for 8 PHPass attackers on Drupal without Rampart.
For each figure, the middle and bottom graphs show the 5th percentile, mean, and 95th percentile of the RPT of user requests (middle) and attack requests (bottom) that were sent in each one-second window. The green and red solid lines in the top figure represent the RIF of user sessions and attacker sessions, and the blue dashed line shows the server's CPU usage. A red solid vertical line in each of the three graphs indicates when we started the attack.
When launching 8 PHPass attacker sessions against Drupal (Figure 2), the server spends on average 42 seconds processing one attack request. The CPU remains almost fully occupied once we launch the attack, except for the five-second breaks when the attack pauses. In fact, the results show that an attacker sending only 0.17 requests per second (8 / (42 + 5)) can already exhaust the CPU resources of a vulnerable server. Performance degrades severely with 16 parallel attacker sessions, at which point the CPU usage stays close to 100% throughout the experiment. Corresponding to doubling the number of attacker sessions, the server has to spend almost twice as much time (82 seconds, or 1.95x) to serve each request, likely because of the operating system's process scheduling. For 16 attackers, the required attack rate is 0.18 requests per second (16 / (82 + 5)).
The results for the other three attacks, XML-RPC on Drupal, PHPass on WordPress, and XML-RPC on WordPress, are shown in Figure 3, Figure 4, and Figure 5.
The mean CPU usage and the ARPT for all the experiments are summarized in Table 3. For Drupal, the two attacks consume between 52.4% and 62.84% additional CPU time and they cause a 36% slowdown in processing user requests. The ARPT of WordPress is more sensitive to both attacks, which cause an increase of 40% to 118% in ARPT and consume between 41.65% and 51.93% additional CPU time.
4 Mitigation Evaluation
For Rampart to be an effective defense, it must successfully preserve the availability of a web application from CPU-exhaustion DoS attacks. Therefore, we first investigate whether Rampart can correctly detect and stop attacks exploiting known real-world CPU-exhaustion DoS vulnerabilities (Section 4.1). Next, we look at whether Rampart can effectively protect web applications from unknown CPU-exhaustion DoS attacks (Section 4.2).
We also study if Rampart may mistakenly mark a legitimate request as an attack request, i.e., a false positive, and what the consequences are. For example, a user may initiate slow requests that appear similar to attack requests. Blocking such requests while an active attack is occurring is acceptable because there is no good way to differentiate such requests from the attack requests (Section 2.1). However, it is unnecessary and undesirable to constantly reject such legitimate requests when the application is not under attack.
4.1 Mitigation of Known Attacks
We evaluate how Rampart can mitigate attacks exploiting the real-world vulnerabilities that we studied (Section 3.3). We are particularly interested in understanding:
1. How well does Rampart help preserve server performance and availability when attacks occur?
2. How long does an aborted attack request stay alive before it is terminated by Rampart?
3. How many attack requests are not aborted by Rampart, i.e., what is the false negative rate (FNR)?
4. How many user requests are aborted, i.e., what is the false positive rate (FPR)?
To answer these questions, we perform the following experiments: First, we evaluate Rampart’s ability to detect attack requests in the stop-only experiments (Section 4.1.1). Here, Rampart uses the probabilistic algorithm (Algorithm 1) to lower a suspicious request’s priority by either aborting or suspending it, but it does not deploy any filters to block requests. In turn, Rampart checks all the requests sent by attackers. Next, we evaluate whether Rampart can preserve server performance by stopping and filtering suspicious requests. In the stop-and-filter experiments (Section 4.1.2), Rampart additionally uses the exploratory algorithm (Algorithm 2) to synthesize and deploy filters to block future attack requests. Here, we set the primary lifespan ($T_p$) to 10 seconds and the secondary lifespan ($T_s$) to 30 seconds. We assign a unique local IP address to each user/attacker session, so that Rampart can distinguish the different instances.
We evaluate two values (50% and 75%) for the CPU usage threshold $\hat{R}_{CPU}$, which Rampart uses to determine if a server is under attack. We report the average request processing time (ARPT), average server CPU usage, FPR, and FNR for user requests and attack requests over five runs per configuration. The RPTs of false positive requests that Rampart aborted are not included in the user ARPT.
4.1.1 Stop-Only Experiments
We summarize the results of the stop-only experiments in Table 4. We observed no false negative in our experiments, i.e., all attack requests were detected and eventually aborted, which demonstrates that Rampart accurately detects CPU-exhaustion DoS attacks.
However, some user requests were also aborted by Rampart as false positives in the Drupal PHPass experiment with 8 attacker sessions. Upon closer investigation of Drupal's logs and traffic traces, we found that some requests took the server more than several seconds to process, even when it was not under attack (black spikes in Figure 1). Some of those requests were marked as suspicious because several function frames deviated from their execution models. However, the overall impact was limited:
1. Not all such requests were aborted by Rampart.
2. Requests of only a few users were aborted, although all users sent the same requests.
This is the case because Rampart only terminated application instances serving a suspicious request when the server was overloaded. Nevertheless, the FPR is always less than or equal to 0.33%, i.e., fewer than 18 out of 5,344 user requests were mistakenly aborted by Rampart.
At the same time, Rampart helps to preserve server performance and availability substantially, compared to the attack results without Rampart (Table 3). The ARPTs for user requests (ARPT-U) during the PHPass attacks on Drupal and WordPress are close to their baseline counterparts (Table 1). However, ARPT-U during the XML-RPC attacks on the web applications did not improve significantly. On the other hand, the ARPT for attack requests (ARPT-A) is long, with attack requests being processed for up to 2,294 ms (Drupal) and 787 ms (WordPress) before Rampart aborted them. This explains why the average CPU usage did not drop back to the baseline (Table 1) but remained slightly higher. We also observe that PHPass attack requests consumed more CPU resources with a higher CPU usage threshold $\hat{R}_{CPU}$.
Finally, we look at 8 attacker sessions launching the PHPass attack against Drupal with $\hat{R}_{CPU}$ set to 50% (Figure 6). The magenta dashed lines in the middle and bottom graphs represent the number of aborted user requests (middle) and attack requests (bottom). In the first 20 seconds of the experiment, Rampart quickly aborted all attack requests because the server's CPU usage was above the threshold. Some requests were aborted even when the CPU usage in the top figure appears to be lower than the 50% threshold, which is because Rampart monitors CPU usage at a shorter interval (10 ms), while the CPU data in the top figure was collected each second using the mpstat command. When the server load decreased, the attack requests could occupy the CPU for up to five seconds until the CPU usage crossed the threshold again. In turn, this behavior demonstrates the need for deploying filters that block suspicious requests to prevent CPU usage oscillation. Nevertheless, Rampart detects and blocks attacks much earlier with a CPU threshold close to but above the expected CPU usage during normal operation.
4.1.2 Stop-and-Filter Experiments
We present the results of the stop-and-filter experiments in Table 5. Analogous to the stop-only experiments, we observed no false negatives in the stop-and-filter experiments. However, the FPR increased compared to the stop-only experiments because Rampart drops any request matching a filter created from false positive requests until the filter's primary lifespan has expired. In fact, these events are evident in the Drupal PHPass experiment with 8 attacker sessions and $\hat{R}_{CPU} = 50\%$ (orange dashed line in Figure 7, which represents the number of requests that were dropped because of a filter). Around the 35th and 39th second, two user requests were detected and aborted as false positives and two matching filters were created. As a result, 16 additional requests from these two users were also dropped in the following $T_p$ seconds. The primary lifespan of the last rule then expired at the 49th second. Rampart then explored a matching request (the blue dashed line) at around the 58th second according to the exploratory algorithm (Algorithm 2) and detected that the filtering rule was a false positive. Rampart's FPR in stop-and-filter mode is still negligible at less than 0.69%.
Although RAMPART’s stop-and-filter mode blocked some legitimate requests, it also immediately blocked the majority of attack requests (86.5%) and entirely prevented them from consuming any additional CPU time. The remaining 21 attack requests (13.5%) were also all detected as suspicious and aborted. In fact, 8 of the aborted requests were the initial requests sent by the 8 attackers, i.e., the earliest that any defense could have detected them as suspicious. RAMPART explored the remaining 13 requests and eventually also detected them as suspicious. Since the attackers sent requests at an interval of five seconds, which is shorter than $T_p$, RAMPART incremented the primary lifespan of a filter as penalty each time an exploring request was detected as suspicious.
Because RAMPART blocked most of the attack requests immediately, it preserved the web server’s performance as if no attack had occurred (Table 5). In particular, the average CPU usage and the ARPT of user requests are much closer to their baseline (Table 1) compared to the stop-only experiments (Table 4). The ARPT of attack requests is an order of magnitude smaller. Overall, the results illustrate that RAMPART can effectively protect web applications from known CPU-exhaustion DoS attacks using the exploratory algorithm (Algorithm 2).
The results for the remaining three experiments with \( \hat{R}_{CPU} = 50\% \), namely, XML-RPC on Drupal, PHPass on WordPress, and XML-RPC on WordPress, are shown in Figure 8, Figure 9, and Figure 10.
4.2 Mitigation of Synthetic Attacks
Compared to static vulnerability analysis tools that look for specific features in the source code, Rampart does not require an application's source code, nor does it require any knowledge about specific CPU-exhaustion DoS vulnerabilities. Instead, Rampart is a generic defense that dynamically detects known and unknown application-level CPU-exhaustion DoS attacks at runtime.
We demonstrate RAMPART’s ability to detect and mitigate such attacks in web applications. Beyond the vulnerabilities that we explored, we automatically inserted CPU-exhaustion DoS vulnerabilities into the source code of the two web applications at random locations. We configured RAMPART to record all invoked functions when serving a request for the two web applications, and we then inserted a vulnerability (Listing 1) into a function that was randomly chosen. The vulnerable code calculates the hash value of a variable \( v \) by repeatedly invoking the \texttt{md5} function (line 11). The number of iteration in the loop is controlled by the parameter \( \texttt{exp} \), which an attacker can set through the \texttt{dos-exp} query parameter. In our experiment, attacker requests set \( \texttt{exp} \) to 24 to cause CPU-exhaustion DoS (i.e., \( 2^{24} \texttt{md5} \) invocations).
For each application, we randomly chose 50 vulnerabilities (requests) and launched 16 attacker sessions. We set the average CPU threshold \( \hat{R}_{CPU} \) to 75%. All 50 vulnerabilities in WordPress were successfully exploited, while only 21 vulnerabilities in Drupal could be exploited because the other 29 vulnerable functions were not invoked. They could not be invoked because they require state to be set up by other requests beforehand, which we did not replay.
We report the results with and without RAMPART (Table 6). The average CPU usage threshold to determine if RAMPART successfully mitigated an attack against Drupal is 45\% and for WordPress it is 55\%. RAMPART successfully mitigates all attacks with \( R_{CPU} = 50\% \). However, some attack requests were incorrectly classified as benign. These false negatives occurred for Drupal because the server load was light (less than the 50\% threshold) when those requests arrived. Although RAMPART did not abort those requests, it flagged them as suspicious.
Figure 8: CPU usage and RPT over time for 8 XML-RPC attackers on Drupal with Rampart enabled in the stop-and-filter experiment.
Figure 9: CPU usage and RPT over time for 8 PHPass attackers on WordPress with Rampart in the stop-and-filter experiment.
| Application | Benchmark | Rampart Enabled |
|---|---|---|
| Drupal | Successful Attacks | 0 |
| | ARPT-U (ms) | 436.5 |
| | ARPT-A (ms) | 290.5 |
| | CPU (%) | 39.15 |
| | FPR (%) | 0.03 |
| | FNR (%) | 1.31 |
| WordPress | Successful Attacks | 0 |
| | ARPT-U (ms) | 25.8 |
| | ARPT-A (ms) | 157.5 |
| | CPU (%) | 51.05 |
| | FPR (%) | 0 |
| | FNR (%) | 0 |
Table 6: Web server performance in the synthetic attack experiments with Rampart being enabled and disabled.
Overall, the synthetic attack experiments demonstrate that Rampart can detect and mitigate CPU-exhaustion DoS attacks regardless of the location of the vulnerable code, i.e., not only in front-facing code, but also in (third-party) library functions. Our prototype is implemented as an extension to the PHP engine (and can be similarly implemented for other languages), and, thus, it adapts to any change of an application's source code without requiring manual interaction or reconfiguration. Rampart can automatically detect new vulnerabilities that might be introduced by unintentional source code modifications. On the contrary, a developer using a static vulnerability detection tool would need to re-run it each time she modifies the code. Considering its effectiveness and low overhead, Rampart is a practical defense to protect applications from CPU-exhaustion DoS attacks.
5 Related Work
We compare Rampart to the most relevant work, i.e., sophisticated DoS vulnerability detection, program profiling techniques, and anomaly detection.
DoS Vulnerability Detection. CPU-exhaustion DoS attacks have received significant attention from researchers over the past years. Existing research focused on finding vulnerabilities (bugs) that can be exploited to launch sophisticated DoS attacks. In turn, preventing the attacks is a manual process of fixing the detected bugs before an application is deployed. Safer performs static taint analysis and control-dependency analysis to identify loops and recursive calls whose execution can be controlled by a remote attacker [10]. Similarly, SaferPHP uses static taint analysis
Figure 10: CPU usage and RPT over time for 8 XML-RPC attackers on WordPress with RAMPART in the stop-and-filter experiment.
Listing 1: Snippet of vulnerable PHP code.
```php
<?php
// Seed value: a timestamp 30 days in the future.
$v = time() + 86400 * 30;
// The attacker controls the loop bound via the dos-exp query parameter.
$exp = 0;
if (isset($_GET['dos-exp'])) {
    $exp = $_GET['dos-exp'];
}
// Hash $v repeatedly: 2^exp md5 invocations (exp = 24 in our attacks).
for ($i = 0; $i < pow(2, $exp); $i++) {
    $v = md5($v);
}
?>
```
to find loops whose execution can be influenced by network inputs [32]. It then uses symbolic execution to detect whether the network inputs can trigger the loops to run infinitely. Xiao et al. proposed ∆Infer, an approach to detect workload-dependent performance bottleneck loops by inferring the iteration counts of the loops using complexity models [35]. Torpedo detects second-order DoS vulnerabilities using taint analysis and symbolic execution [26]. SlowFuzz is a dynamic testing tool that generates inputs triggering worst-case algorithmic behavior for several well-known algorithms [27].
Although these systems can detect CPU-exhaustion bugs before the applications are deployed, they commonly rely on additional manual analysis to confirm vulnerabilities or reduce false positives. They also incur additional opportunity cost because developers need to re-run them whenever the application's code or any of its dependencies are updated. Most importantly, they do not prevent attacks after an application has been deployed.
Instead of using static program analysis, Rampart dynamically monitors a web application's state and automatically determines if the current state deviates significantly from the expected state. In turn, Rampart automatically adapts to any change to the application or its libraries without requiring source code. Rampart achieves a low false positive rate by leveraging a probabilistic algorithm and by updating the filtering rules intelligently with an exploratory strategy, and it exhibits false negatives only if an attack is not severe enough to consume significant CPU resources.
**Program Profiling.** The program profiling implementation of RAMPART is inspired by prior work related to flow-sensitive and context-sensitive profiling [6, 7, 13, 15, 16]. Here, a function’s execution time is counted in different contexts based on the calling context tree. That is, they accumulate all functions that are called on the current execution path, to distinguish the same function called under different contexts. For RAMPART, we adopt a similar profiling strategy: We compute a hash value to encode the current execution state. Correspondingly, we can profile the running time of each called function in different contexts, and we can build a statistical execution model for each function. Moreover, during profiling, we compare the profiled functions to their statistical models, which allows us to identify the request that caused the CPU-exhaustion DoS attack, and which enables RAMPART to block similar requests in the future.
**Anomaly Detection.** RAMPART employs anomaly detection techniques to detect suspicious requests. The simplest anomaly detection approach is to set a static threshold for each feature, and to generate alerts when some or all the feature values are below or above their thresholds. Instead of a static threshold, RAMPART learns a dynamic threshold for function execution time because it is impractical to determine a static threshold for each function accurately and a priori, as their execution time can vary greatly in different execution contexts. Prior work employed supervised learning algorithms to build anomaly detection models [11, 19, 20, 28], which stands in contrast to RAMPART: We leverage anomaly detection models using statistical methods, but without requiring any labels during training.
6 Conclusion
Sophisticated Denial-of-Service (DoS) attacks targeting application-layer vulnerabilities can cause significant harm by severely degrading the performance and availability of a victim server over a prolonged period with only a few carefully crafted requests.
In this paper, we present Rampart, which is a system that protects web applications from sophisticated DoS attacks that would otherwise overwhelm the server’s available CPU resources through carefully crafted attack requests. Rampart performs context-sensitive function-level program profiling and learns statistical models from historical observations, which it then employs to detect and stop suspicious requests that could cause CPU-exhaustion DoS. Rampart also adaptively synthesizes and updates filtering rules to block future attack requests. We thoroughly evaluated Rampart’s effectiveness and performance on real-world vulnerabilities as well as synthetic attacks for two popular web applications, Drupal and WordPress. Our evaluation demonstrated that Rampart is robust against a varying number of attackers and that it can effectively and efficiently protect web applications from CPU-exhaustion DoS attacks with negligible performance overhead, low false positive rate, and low false negative rate.
7 Acknowledgments
We thank the anonymous reviewers for their helpful suggestions and feedback to improve the paper. This material is based on research supported by DARPA under agreement FA8750-15-2-0084, NSF under agreement CNS-1704253, ONR under grants N00014-09-1-0104, N00014-15-1-2162 and N00014-17-1-2895, and the DARPA Transparent Computing program under contract DARPA-15-15-TCFP-006. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views, findings, conclusions or recommendations expressed in this material are those of the authors and should not be interpreted as necessarily representing the official views, policies or endorsements, either expressed or implied, of DARPA, NSF, ONR, or the U.S. Government.
{"Source-Url": "https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-meng.pdf", "len_cl100k_base": 15125, "olmocr-version": "0.1.53", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 70131, "total-output-tokens": 18467, "length": "2e13", "weborganizer": {"__label__adult": 0.0006041526794433594, "__label__art_design": 0.0008077621459960938, "__label__crime_law": 0.005329132080078125, "__label__education_jobs": 0.0015020370483398438, "__label__entertainment": 0.0003421306610107422, "__label__fashion_beauty": 0.0002818107604980469, "__label__finance_business": 0.000743865966796875, "__label__food_dining": 0.0004549026489257813, "__label__games": 0.002925872802734375, "__label__hardware": 0.00247955322265625, "__label__health": 0.0009288787841796876, "__label__history": 0.0006222724914550781, "__label__home_hobbies": 0.00014889240264892578, "__label__industrial": 0.0006995201110839844, "__label__literature": 0.0006389617919921875, "__label__politics": 0.0008373260498046875, "__label__religion": 0.0004503726959228515, "__label__science_tech": 0.3134765625, "__label__social_life": 0.0001798868179321289, "__label__software": 0.07769775390625, "__label__software_dev": 0.587890625, "__label__sports_fitness": 0.00035381317138671875, "__label__transportation": 0.0005497932434082031, "__label__travel": 0.00024819374084472656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 76056, 0.04616]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 76056, 0.32059]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 76056, 0.8985]], "google_gemma-3-12b-it_contains_pii": [[0, 669, false], [669, 5054, null], [5054, 11012, null], [11012, 16404, null], [16404, 22406, null], [22406, 27731, null], [27731, 33648, null], [33648, 37515, null], [37515, 42927, null], [42927, 46180, null], [46180, 49636, null], [49636, 52103, null], [52103, 55215, null], [55215, 57769, null], [57769, 60307, null], [60307, 63400, null], [63400, 67125, null], [67125, 71607, null], [71607, 76056, null]], "google_gemma-3-12b-it_is_public_document": [[0, 669, true], [669, 5054, null], [5054, 11012, null], [11012, 16404, null], [16404, 22406, null], [22406, 27731, null], [27731, 33648, null], [33648, 37515, null], [37515, 42927, null], [42927, 46180, null], [46180, 49636, null], [49636, 52103, null], [52103, 55215, null], [55215, 57769, null], [57769, 60307, null], [60307, 63400, null], [63400, 67125, null], [67125, 71607, null], [71607, 76056, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 76056, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 76056, null]], "pdf_page_numbers": [[0, 669, 1], [669, 5054, 2], 
[5054, 11012, 3], [11012, 16404, 4], [16404, 22406, 5], [22406, 27731, 6], [27731, 33648, 7], [33648, 37515, 8], [37515, 42927, 9], [42927, 46180, 10], [46180, 49636, 11], [49636, 52103, 12], [52103, 55215, 13], [55215, 57769, 14], [57769, 60307, 15], [60307, 63400, 16], [63400, 67125, 17], [67125, 71607, 18], [71607, 76056, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 76056, 0.10526]]}
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
2cca5f4b4f81009fcd5faa0d8b948351bcbfc802
|
[REMOVED]
|
{"Source-Url": "http://consystlab.unl.edu/Documents/Papers/Schneider-CP2018.pdf", "len_cl100k_base": 11585, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 56377, "total-output-tokens": 14162, "length": "2e13", "weborganizer": {"__label__adult": 0.0004925727844238281, "__label__art_design": 0.0008063316345214844, "__label__crime_law": 0.0007867813110351562, "__label__education_jobs": 0.000919818878173828, "__label__entertainment": 0.00019443035125732425, "__label__fashion_beauty": 0.0002963542938232422, "__label__finance_business": 0.0005617141723632812, "__label__food_dining": 0.0005154609680175781, "__label__games": 0.0019044876098632812, "__label__hardware": 0.0012989044189453125, "__label__health": 0.001010894775390625, "__label__history": 0.0005555152893066406, "__label__home_hobbies": 0.00016498565673828125, "__label__industrial": 0.0007162094116210938, "__label__literature": 0.0004835128784179687, "__label__politics": 0.0004737377166748047, "__label__religion": 0.0007753372192382812, "__label__science_tech": 0.20703125, "__label__social_life": 0.00013840198516845703, "__label__software": 0.0129547119140625, "__label__software_dev": 0.76611328125, "__label__sports_fitness": 0.0004630088806152344, "__label__transportation": 0.0007739067077636719, "__label__travel": 0.00032901763916015625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 46896, 0.05727]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 46896, 0.34779]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 46896, 0.83546]], "google_gemma-3-12b-it_contains_pii": [[0, 2001, false], [2001, 4736, null], [4736, 8061, null], [8061, 11094, null], [11094, 14262, null], [14262, 16847, null], [16847, 20240, null], [20240, 23129, null], [23129, 25023, null], [25023, 27820, null], [27820, 29585, null], [29585, 32501, null], [32501, 34660, null], [34660, 38352, null], [38352, 40951, null], [40951, 44143, null], [44143, 46896, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2001, true], [2001, 4736, null], [4736, 8061, null], [8061, 11094, null], [11094, 14262, null], [14262, 16847, null], [16847, 20240, null], [20240, 23129, null], [23129, 25023, null], [25023, 27820, null], [27820, 29585, null], [29585, 32501, null], [32501, 34660, null], [34660, 38352, null], [38352, 40951, null], [40951, 44143, null], [44143, 46896, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 46896, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 46896, null]], "pdf_page_numbers": [[0, 2001, 1], [2001, 4736, 2], [4736, 8061, 3], [8061, 11094, 4], [11094, 14262, 5], [14262, 16847, 6], [16847, 20240, 7], 
[20240, 23129, 8], [23129, 25023, 9], [25023, 27820, 10], [27820, 29585, 11], [29585, 32501, 12], [32501, 34660, 13], [34660, 38352, 14], [38352, 40951, 15], [40951, 44143, 16], [44143, 46896, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 46896, 0.09278]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
2d0f7cb45b1cb6004a4a43acb3e80c3f9a7c409d
|
Preference-Wise Testing for Android Applications
Yifei Lu
Minxue Pan
Juan Zhai
lyf@smail.nju.edu.cn
mzp@nju.edu.cn
zhaijuan@nju.edu.cn
State Key Laboratory for Novel Software Technology,
Software Institute, Nanjing University
Nanjing, China
Tian Zhang
Xuandong Li
ztluck@nju.edu.cn
lxz@nju.edu.cn
State Key Laboratory for Novel Software Technology,
Department of Computer Science and Technology,
Nanjing University
Nanjing, China
ABSTRACT
Preferences, the setting options provided by Android, are an essential part of Android apps. Preferences allow users to change app features and behaviors dynamically, and therefore need to be thoroughly tested. Unfortunately, the specific preferences used in test cases are typically not explicitly specified, forcing testers to manually set options or blindly try different option combinations. To effectively test the impacts of different preference options, this paper presents Prefest, a preference-wise enhanced automatic testing approach for Android apps. Given a set of test cases, Prefest can locate the preferences that may affect the test cases with a combined static and dynamic analysis of the app under test, and execute these test cases only under necessary option combinations. The evaluation shows that Prefest improves code coverage by 6.8% and branch coverage by 12.3%, and finds five more real bugs, compared to testing with the original test cases. The test cost is reduced by 99%, in both the number of test cases and the testing time, compared to testing under pairwise combinations of options.
CCS CONCEPTS
• Software and its engineering → Software testing and debugging.
KEYWORDS
Android apps, Android testing, preference-wise testing
1 INTRODUCTION
The last decade has witnessed a rapid growth in Android apps, drawing attention from both academia and industry. To cope with the ever-changing market demands, Android app developers have to work in fast development cycles, causing a growing need for cost-effective testing approaches. Automatic generation of test inputs [3, 7, 8, 15, 19], which aims at fully automatic testing of Android apps, has flourished as a result.
For mobile apps on all platforms, it is often the case that there are setting options designed to allow users to change app features and behaviors; in Android, these are preferences [12]. Using preferences, users can switch among different GUI styles, change the behaviors of certain functions, enable or disable services, etc. While preferences offer users customization, for developers the resulting diverse GUI displays and app behaviors unfortunately require more testing under different preference options. Indeed, an app may work well under one setting of preference options while crashing under another. Properly testing an app’s behavior under different preference options, which we call preference-wise testing, can be challenging. The specific preferences used in a test case are typically not explicitly specified, and existing tools have not considered the impacts of preferences on app behaviors during testing. Black-box testing captures app status from GUIs; since changing preference options usually causes just slight or even no changes in GUIs, preferences are mostly ignored. As for white-box testing, since a key-value mechanism is used for preference access, where the keys are typically dynamically generated, techniques such as symbolic execution are required for the accurate prediction of keys. However, symbolic execution is known to suffer from scalability issues [16], which is even worse for Android apps due to the event-driven nature and the application development framework (ADF) [23]. Therefore, despite recent progress in mobile testing, testers are still forced to manually set preference options or try different option combinations for the same test case if they want to perform preference-wise testing.
In this paper, we propose the problem of preference-wise testing for Android apps and present the Prefest approach. Prefest is built on two key observations. Our first observation is that a test case typically interacts with just a few preferences defined in the app. So, for each test case, Prefest analyzes the preferences that may impact the app behavior, which we call test case relevant preferences, and executes test cases only under relevant preference option combinations. Specifically, given an Android app, Prefest first leverages
a static analysis to identify the preference structure that contains all the preferences defined in the app. Then in a dynamic analysis, it executes the test cases and logs the execution flows to pinpoint the relevant preferences to each test case. Finally, it re-executes test cases only under relevant preference option combinations to reach previously uncovered code.
To further reduce the test cost, we exploit our second observation that Android apps often share app states globally using the key-value mechanism. So, under one preference option combination, a piece of code executed in different test cases often produces the same app behavior, and therefore does not require re-execution. We equip Prefest with a reduction strategy named Target Mode, which splits the app code into blocks and performs another analysis of the relevance between the preferences and the code blocks. For one code block, referred to as a target, Prefest will execute it only if it has not been executed by previous test cases.
Prefest can enhance the performance of existing automated testing tools. In addition, it can be a useful complement to manual testing. In practice, developers and testers are often not the same group of people. Identifying relevant preferences for test cases and testing apps under adequate preference options can be a costly or even tough job for testers. This is where Prefest comes in handy, since it is fully automated and saves this manual effort.
The main contributions can be summarized as:
1. The novel problem of preference-wise testing and a fully automated solution, Prefest, that improves the efficacy of existing testing approaches by considering the effects of preferences;
2. Multiple techniques employed in analyzing the impacts of preferences on Android testing, including the loading patterns for preference identification, the analysis for relevant preference acquisition, and the Target Mode for test cost reduction;
3. A prototype also named Prefest and an empirical study on 30 real-world apps, showing that Prefest achieves 6.8% and 12.3% improvement in code and branch coverages, respectively, and detects five more real bugs.
The paper is organized as follows. Sec. 2 introduces the background and motivation of our work. Sec. 3 provides the overview and the details of the Prefest approach. Sec. 4 presents the experimental evaluation. Related work is discussed in Sec. 5 and conclusion is drawn in Sec. 6.
2 BACKGROUND & MOTIVATION
2.1 Background
In Android, GUI pages containing preferences are called setting screens. To use setting screens in an app, a programmer needs to define: (1) resource files (in XML format) to describe the preferences in each setting screen; (2) invocations of preference-related APIs in source code to specify the loading location of each setting screen; and (3) the accesses of preferences in source code.
Listing 1 shows a simplified resource file for a setting screen. The top-level tag PreferenceScreen defines the container for a setting screen. Each contained element represents a preference of different types, such as ListPreference and CheckBoxPreference in Listing 1.
To perform preference-wise testing, we need to obtain the essential details for each preference, including: (1) key: the unique name used to refer to the preference in source code; (2) title: the text displayed in the setting screen; (3) defaultValue: the initial value of the preference; and (4) entryValues: the possible options that can be set for the preference. As Listing 1 shows, these details are coded in the resource files, which can be retrieved by static analysis.
```xml
<PreferenceScreen>
    <CheckBoxPreference
        key="widget_update_location_pref_key"
        title="Update Location"
        defaultValue="false"/>
    ...
    <ListPreference
        key="widget_theme_pref_key"
        title="Widget theme"
        entryValues='["Dark", "Light"]'/>
    ...
</PreferenceScreen>
```
Listing 1: Sample resource file for preferences
For an app to load a setting screen defined in the resource file, the most common way is to call the API method addPreferencesFromResource with the resource file as its parameter, upon the creation of an Activity or a Fragment, i.e., within their lifecycle methods onCreate. A special setting screen named PreferenceHeaderScreen, which shows a list of navigation texts to switch among different setting screens, is officially recommended to load with another API method loadHeadersFromResource (see Sec. 3.2).
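For concreteness, a minimal Java example of the most common case (an Activity loading a setting screen in its `onCreate`, i.e., loading pattern LPA in Sec. 3.2) might look as follows; `R.xml.preferences` is a placeholder resource name, not taken from the paper.

```java
import android.os.Bundle;
import android.preference.PreferenceActivity;

// Loading pattern LPA: the Activity itself loads the setting screen
// upon creation. R.xml.preferences is a placeholder resource name.
public class SettingsActivity extends PreferenceActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.xml.preferences);
    }
}
```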
Accesses to preference values are particularly complex. Android provides the SharedPreferences mechanism for activities and applications to manage preference data in the form of key-value pairs of primitive data types in the Android file system. The precise values of the keys are critical to analyzing which preferences are relevant to a test case. However, they are difficult to acquire through static analysis, since very often they are generated dynamically. To address this problem, we employ a dynamic approach to analyze which preferences are loaded and used for the given test cases. More details are discussed in Sec. 3.3.
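A hypothetical snippet in the style of the GoodWeather example illustrates why keys resist static analysis: the key only exists as a concrete string value at run time.

```java
import android.content.Context;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

// Illustrative only: the key is assembled at run time, so a static
// analysis cannot, in general, resolve which preference is accessed.
public final class WidgetConfig {
    private WidgetConfig() {}

    public static boolean shouldUpdateLocation(Context ctx) {
        SharedPreferences prefs =
                PreferenceManager.getDefaultSharedPreferences(ctx);
        String key = "widget_" + "update_location_pref_key"; // dynamic key
        return prefs.getBoolean(key, false);
    }
}
```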
2.2 Motivation
In this section, we use a simple app, called GoodWeather, to show how preferences affect app behaviors. GoodWeather is an app that allows users to select a location by GPS or text search and displays the weather conditions for the selected location. It also has a feature called widget that decks out the phone screen with the up-to-date weather conditions. Users are offered customization options in the form of preferences, as shown in Figure 1a.
Some of the preferences can change the widget’s functions; for example, update location determines whether or not to start a service that synchronizes, at run time, the location in the widget with the one set in the app. Others customize the look, such as widget theme. The setting of such preferences can affect either the app behavior or the GUI display, and in some cases cause bugs. For example, by default, update location is disabled, under which users are able to change the location. However, if update location is enabled, a crash occurs when users try to change the location, as shown in Figure 1b. Clearly, to reveal this
bug, testers need to set this specific preference option first, and then change the location in the app. However, there is no explicit connection between a preference setting for the widget and a failure in the main app, and thus this bug is very likely to be untested.
From the GoodWeather example, it is obvious that systematic and thorough preference-wise testing is needed to improve app quality. However, preference-wise testing can be challenging, since the impacts of preferences are tangled with app functions. As illustrated by the example, only enabling preference *update location* or only selecting the current location will not trigger the crash. Very often, testing tools or even human testers have no knowledge of which preferences affect the functions under test. Therefore, to intentionally reveal, instead of randomly trigger, preference-related bugs, they may have to perform exhaustive combinations of test cases and preference settings, which can lead to an explosion of the testing space. Hence, there is an urgent need for cost-effective preference-wise testing approaches.
3 PREFERENCE-WISE TESTING
3.1 Approach Overview
Figure 2 depicts the overview of Prefest. Given an APK file of the App Under Test (AUT) and a set of test cases for the AUT, Prefest identifies the relevant preferences that may affect the app behavior and runs the test cases under relevant preference option combinations to test the AUT more thoroughly. The test cases can be written manually, or generated by automated testing approaches like AndroidRipper [3], A3E [7] or Stoat [33]. Prefest consists of two major analyses: Preference Identification, which identifies and locates all the preferences (denoted as PI) defined in the AUT; and Preference-Guided Test Case Analysis, which reveals the relevance between preferences and test cases through a data-flow analysis, and only tries the combinations of relevant preference options for each test case (denoted as PS). An additional analysis mode, called Target Mode, is also proposed, in which Prefest splits the code into code blocks and identifies the untested blocks and their relevant preferences (denoted as PB). It executes only the test cases that can reach untested code blocks, and is therefore more efficient.
3.2 Preference Identification
To conduct preference-wise testing, it is necessary to first identify the collection of preferences defined in the AUT. Prefest achieves this by reverse-engineering the preference resource files from the AUT with jadx [31], and recording preferences by their key, title, type and entryValues. Currently, it supports four types of preferences: SwitchPreference, CheckBoxPreference, ListPreference and EditTextPreference. This decision is based on an investigation of 115 apps containing preferences from a popular open-source Android app list on GitHub [27]: on average, each app contains 20 preferences, of which 18 (90%) are of the aforementioned four types. The remaining 10% are of other types or are custom preferences written by developers, which we plan to support in the future.
Then, Prefest uses Soot [17] to statically analyze the source code for the Activities and Fragments in which the preferences are located. It first collects all direct method calls in the AUT, denoted as $m_{caller} \to m_{callee}$. Method callbacks are not considered here, since the methods responsible for loading setting screens are mostly called directly during the initialization of Activities or Fragments. Each method $m$ is assigned an attribute $declaring$ representing its declaring class. A call trace $\rho$, defined as $\rho = m_1 \to m_2 \to \ldots \to m_{t-1} \to m_t$, represents that through methods $m_2, \ldots, m_{t-1}$, $m_1$ eventually invokes $m_t$, and $P$ is the set of all call traces. Prefest analyzes the call traces to identify where the setting screens are loaded. By studying the ways of implementing setting screen loading, we summarized three loading patterns from the Android official documents, as shown in Table 1.
Pattern LPA represents that the loading of a setting screen is performed by an Activity, where the loading API `addPreferencesFromResource` is eventually called by the `onCreate` method of an
Table 1: Patterns for loading setting screens
<table>
<thead>
<tr>
<th>Pattern</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>LPA</td>
<td>$\exists \rho = m_{oc} \to \ldots \to m_{add} \in P,\ m_{oc}.declaring \in \text{Activities}$</td>
</tr>
<tr>
<td>LPF</td>
<td>$\exists \rho, \rho' \in P,\ \rho = m_{oc} \to \ldots \to m_{add},\ \rho' = m'_{lifecycle} \to \ldots \to m'_{init},\ m_{oc}.declaring = m'_{init}.declaring \in \text{Fragments} \land m'_{lifecycle}.declaring \in \text{Activities}$</td>
</tr>
<tr>
<td>LPH</td>
<td>$\exists \rho, \rho' \in P,\ \rho = m_{oc} \to \ldots \to m_{add},\ \rho' = m'_{oc} \to \ldots \to m_{load},\ m_{oc}.declaring \in \text{fragments\_referred}(m_{load}) \land m'_{oc}.declaring \in \text{Activities}$</td>
</tr>
</tbody>
</table>
$m_{oc}$: lifecycle method `onCreate`;
$m_{lifecycle}$: any lifecycle method;
$m_{add}$: API method `addPreferencesFromResource`;
$m_{load}$: API method `loadHeadersFromResource`;
$m_{init}$: the constructor of a class.
Activity through $\rho$. The setting screen is shown when the activity is launched. Pattern LPF represents that the loading of a setting screen is performed by a Fragment, which itself is initialized by an Activity. The loading API `addPreferencesFromResource` is eventually called by method `onCreate` declared in a Fragment, and an Activity instantiates this Fragment in one of its lifecycle methods through $\rho'$. For pattern LPF, the setting screen is shown when the activity is launched, initializing the fragment to load the setting screen. Pattern LPH represents that a preference header, responsible for loading multiple setting screens, is loaded by an Activity. Through call trace $\rho$, the loading API `addPreferencesFromResource` is eventually called by method `onCreate` declared in a Fragment. Different from pattern LPF, the Fragment is not initialized explicitly, but instead referred to in a preference header resource file. When an Activity eventually calls method `loadHeadersFromResource` in its `onCreate` method through $\rho'$ and loads the preference header, all fragments referred to in its resource file, represented by $\text{fragments\_referred}(m_{load})$, are initialized by the Android system. For pattern LPH, when the activity is launched, a preference header is shown, containing a list of selections for users to switch among different setting screens.
To analyze which pattern is adopted, Prefest starts from each $m_{add}$ and $m_{load}$, and performs a backwards search for any match of patterns LPA, LPF or LPH. After the analysis, it obtains the necessary information for each preference, denoted as $pi = \langle \text{key, title, type, entryValues, location} \rangle$. We define $PI$ as the set of all $pi$. With $PI$, Prefest is able to automatically set any preference option combination of interest with off-the-shelf Android GUI test frameworks.
### 3.3 Preference-Guided Test Case Analysis
To reduce the number of preference option combinations for test cases, we need to analyze, for each test case, which preferences are relevant. We define the preferences relevant to a test case as those whose values are acquired, passed, and used in branch conditions during the execution of the test case, since preferences used in branch conditions can dynamically modify the function behaviors.
These branches, ignored by existing approaches, are usually blind spots in Android testing.
However, it is difficult to conduct a precise analysis statically, as Android apps are not stand-alone applications but plugins into the Android framework [6]. Even worse, the SharedPreferences mechanism used to acquire preferences means that the same line of code may access different preferences, since the key of the preference can change. Techniques such as symbolic execution would be required; however, they suffer from scalability issues due to the event-driven nature and the application development framework of Android.
We propose a dynamic analysis to address this problem. For the AUT, Prefest instruments loggers with Soot at the beginning and end of each method, and also at each branching point. For efficiency, loggers are not inserted into the Android SDK or external libraries; we simply record the invocations of API methods in these libraries. When running a test case, the logs are automatically collected, from which an execution flow comprised of a linear sequence of statements is generated. Prefest then analyzes the execution flow statement by statement and collects variable manipulations and branch conditions.
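Conceptually, the instrumented app behaves as if every method carried entry/exit probes and every branching point logged its label, as in this illustrative Java sketch (the log format and names are invented; Prefest inserts the probes at the bytecode level with Soot):

```java
// Illustrative sketch of what instrumentation adds, not Prefest's code.
public class InstrumentedExample {
    private static void log(String event) {
        System.out.println("[flow] " + event);
    }

    boolean useDarkTheme(boolean pref) {
        log("enter useDarkTheme");   // method-entry probe
        boolean result;
        log("branch l0: pref");      // branching-point probe, labelled l0
        if (pref) {
            result = true;
        } else {
            result = false;
        }
        log("exit useDarkTheme");    // method-exit probe
        return result;
    }
}
```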
$$(\text{expression})\quad e ::= n \mid v \mid pi \mid op(\bar{e}) \mid mi(\bar{e}), \quad e \in E$$
$$(\text{variable})\quad v \in V = \{v_1, v_2, \ldots, v_m\}$$
$$(\text{condition label})\quad l \in \text{Label}$$
$$(\text{statement})\quad s ::= v = e \mid \text{if } (e)\ s_1\ \text{else}\ s_2 \mid \text{switch } (e)\ \text{case } n_1{:}\ s_1 \ \ldots\ \text{case } n_j{:}\ s_j$$
$$(\text{execution flow})\quad f ::= s_1;\, s_2;\, s_3;\, \ldots;\, s_n$$
The syntax of an execution flow is shown above. Here, $V$ and $E$ represent the sets of variables and expressions, respectively. Each $e \in E$ can be a constant $n$ (a Boolean, Integer, Float or String constant), a variable $v$, a symbolic variable $pi$ representing a preference, or an expression constructed with a Java operator $op$ or a method invocation $mi$. Recall that all necessary information for manipulating a preference is in $pi$ (Sec. 3.2), so it is natural to use $pi$ as the symbolic representation for preferences.
In an execution flow, loops are unfolded during the dynamic execution, and the branch conditions of conditional statements are all labelled. Additionally, for invocations of instrumented methods, parameter passing and method returns are also treated as assignments, and the execution of the method bodies is included in the execution flow. Finally, an execution flow $f$ is represented as a sequence of statements.
$$(\text{symbolic variable state})\quad \Gamma_v ::= \{v_1 : e_1, \ldots, v_m : e_m\}$$
$$(\text{symbolic conditional state})\quad \Gamma_c ::= \{l_1 : e_1, \ldots, l_n : e_n\}$$
$$(\text{execution state})\quad \omega_s ::= \langle \Gamma_v, \Gamma_c \rangle$$
Our data-flow analysis is performed along the execution flow, statement by statement. To deal with aliasing, an Andersen-style analysis is implemented. The execution state $\omega_s$ at statement $s$ is defined above. In $\omega_s$, $\Gamma_v$ maps a variable $v$ to its symbolic expression, and $\Gamma_c$ maps a branch condition labeled $l_i$ to its symbolic expression $e_i$.
By applying $\Gamma_v$ to the variables representing keys, expressions describing the keys can be obtained. In most cases, keys are represented by constants, or by string operations over several constants, and therefore Prefest can calculate the concrete values of such keys. It then retrieves the preferences that have been accessed during testing from SharedPreferences by interpreting, with the calculated concrete key values, the seven preference acquisition methods defined in the Android official documents: getBoolean, getFloat, getString, getInt, getLong, getStringSet, and getAll.
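A minimal Java sketch of this concrete key computation, assuming the symbolic key expression has already been reduced to nested concatenations of constants (the type names are illustrative, not Prefest’s internal API):

```java
// Illustrative model of symbolic key expressions.
interface KeyExpr {
    /** Returns the concrete key, or null if any part is not constant. */
    String eval();
}

final class Const implements KeyExpr {
    private final String value;
    Const(String value) { this.value = value; }
    public String eval() { return value; }
}

final class Concat implements KeyExpr {
    private final KeyExpr left, right;
    Concat(KeyExpr left, KeyExpr right) { this.left = left; this.right = right; }
    public String eval() {
        String l = left.eval(), r = right.eval();
        return (l == null || r == null) ? null : l + r; // constant folding
    }
}

final class Unknown implements KeyExpr {
    public String eval() { return null; } // e.g. user input: not constant
}
```

For the slice in Listing 2 below, `new Concat(new Const("widget_"), new Const("update_location_pref_key")).eval()` yields the concrete key.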
```
...
S1: r1 = "widget_"
S2: r2 = r1 + "update_location_pref_key"
S3: r3 = SharedPreferences.getDefaultSharedPreferences()
S4: r20 = r3.getBoolean(r2, 0)
S5: z1 = !r20
S6: if (z1 == 0)
```
Listing 2: A slice of the execution flow of GoodWeather
Take the preference update location of GoodWeather in Sec. 2.2 as an example. Listing 2 shows a slice of the execution flow that acquires and uses preference update location. Statements S1 and S2 generate the key of preference update location by string concatenation. S3 obtains the SharedPreferences object that stores all preferences. S4 invokes a preference acquisition method (getBoolean) with variable $r2$ as the key, and assigns the acquired preference option value to $r20$. S5 assigns the negation of $r20$ to $z1$, which contributes to the branch condition in S6. Prefest calculates the concrete value of the key variable $r2$ used in S4, which is "widget_update_location_pref_key". It then interprets getBoolean in S4 with the value of $r2$, to obtain the specific preference update location, represented by the symbolic variable $p_{\mathit{update\_location}}$.
With $\Gamma_c$, a relevant preference can be identified by checking whether its $pi$ appears directly in the symbolic value of a branch condition, or affects, through assignments, variables contained in that symbolic value. For instance, in the $\Gamma_c$ of the execution state at S6 of Listing 2, we have $(l_0, p_{\mathit{update\_location}} \neq 0)$ for the branch condition in S6. So, preference update location is relevant to this branch condition, and by setting it to different values (true or false), the execution can reach different branches.
Now Prefest can test different app behaviors by trying different option value combinations of the relevant preferences, instead of all preferences in the app. The valid values for a preference, i.e., its entryValues, are already known, as discussed in Sec. 3.2. Specifically, SwitchPreferences and CheckBoxPreferences can be set to true or false; ListPreferences can be set to a finite set of options in the form of strings, predefined by developers; and for EditTextPreferences, which accept user text input as their values, Prefest uses boundary values as entryValues, such as null or a random string (0, 1 and IntMax are tested when only numeric input is allowed), since it focuses on bug detection.
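The choice of candidate values per preference type can be summarized in a small helper; the exact boundary strings for EditTextPreference here are assumptions in the spirit of the paper, not an exhaustive list.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of candidate option values per preference type (illustrative).
final class OptionValues {
    static List<String> candidates(String type, List<String> entryValues,
                                   boolean numericOnly) {
        switch (type) {
            case "SwitchPreference":
            case "CheckBoxPreference":
                return Arrays.asList("true", "false");
            case "ListPreference":
                return entryValues; // finite, developer-defined options
            case "EditTextPreference":
                return numericOnly
                        ? Arrays.asList("0", "1",
                                        String.valueOf(Integer.MAX_VALUE))
                        : Arrays.asList(null, "some-random-string");
            default:
                throw new IllegalArgumentException("unsupported: " + type);
        }
    }
}
```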
We define a preference option combination to be tried for a test case as $ps = \langle \{(pi, value)\}, \mathit{testcase} \rangle$, where each $(pi, value)$ represents the setting of a single preference. A test case can have multiple $ps$ representing different option combinations. Note that the number of $ps$ for a test case depends on the combinatorial strategy for preferences; for example, a pairwise combinatorial strategy results in fewer $ps$ than a full combinatorial strategy. $PS$ represents all preference option combinations to be tested on all test cases in our preference-wise testing. Given a $ps \in PS$, Prefest generates a script and executes it to set the preference option values before executing the test case $ps.testcase$. In the script, for each $(pi, value)$ of $ps$, $pi.title$ and $pi.location$ help locate the preference on the screen, while $pi.type$ and $value$ are used to generate operations that set the correct option value for the preference. After all relevant preferences are set, the original test case $ps.testcase$ is executed.
3.4 Test Cost Reduction with Target Mode
By focusing on relevant preferences, Prefest only needs to try option combinations of the relevant preferences. However, we empirically found that $PS$ can still be large in some cases. For instance, in the app Suntimes, 12 two-option (true and false) preferences are used in branch conditions upon initialization, yielding far too many option combinations. A further reduction of the test cost is required, for which we propose the Target Mode.
In Target Mode, Prefest splits the app code into blocks, i.e., straight-line code sequences with no branch in except at the entry and no branch out except at the exit. Since Prefest aims at testing the preference-related branches, we select blocks in preference-related branches as our targets. Noticing that third-party libraries can also be affected by preferences through parameter passing and demonstrate different behaviors, blocks containing invocations of third-party methods with preference-related variables as parameters are also considered targets. By splitting the execution flows into blocks, Prefest analyzes the preferences relevant to targets, similar to the analysis in Sec. 3.3. Like the $ps$ for a test case, we define a preference option combination to be tested for a target as $pb = \langle \{(pi, value)\}, \mathit{block} \rangle$.
As discussed earlier, targets only need to be executed once during testing. To accelerate the testing process, Prefest adopts a greedy strategy: the test case that can potentially execute the most targets under a certain preference option combination is selected to be executed first. The key to the strategy is knowing which blocks can be reached by a test case under different option combinations, and which option combinations can help reach previously unreached blocks. By analyzing the execution flow of a test case together with the code, we can locate the branching points that the test case reaches, and all branches belonging to these branching points can potentially be reached. To test the unreached blocks, we need the concrete values of variables, including preferences, to manipulate the values of the branch conditions. Thanks to the symbolic representation of the branch conditions, most concrete values can be calculated. Thus, given a target block, Prefest can produce its $pb$, which is used to set the values of preferences.
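The greedy selection is essentially a set-cover heuristic over the targets. A minimal, self-contained Java sketch follows, assuming the map from candidate runs (test case plus option combination) to their reachable targets has already been computed from the execution flows; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Greedy set-cover sketch: repeatedly pick the run that covers the most
// still-unreached target blocks.
final class GreedyScheduler {
    static <T, B> List<T> schedule(Map<T, Set<B>> reachable) {
        Set<B> uncovered = new HashSet<>();
        reachable.values().forEach(uncovered::addAll);
        List<T> order = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            T best = null;
            int bestGain = 0;
            for (Map.Entry<T, Set<B>> e : reachable.entrySet()) {
                Set<B> gain = new HashSet<>(e.getValue());
                gain.retainAll(uncovered);     // newly covered targets
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = e.getKey();
                }
            }
            if (best == null) break;           // remaining blocks unreachable
            order.add(best);
            uncovered.removeAll(reachable.get(best));
        }
        return order;
    }
}
```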
Algorithm 1 shows the details of the Target Mode. It takes $PS$, the set of test cases with different preference option combinations, as input, and outputs $PB_{total}$, the set of reached blocks with their option combination settings.
4 EVALUATION
We implemented our approach into a tool, also named Prefest. The tool and the experimental data are available online \(^1\).
To evaluate Prefest, we conducted a series of experiments to answer the following questions:
RQ1 How effective is Prefest in terms of the code/branch coverage and the bug detection ability?
RQ2 How efficient is Prefest in terms of the number of test-runs and the test time?
RQ3 How does Prefest compare against alternative approaches for preference option combinations in terms of effectiveness and efficiency?
RQ4 How does Target Mode perform? Specifically, does it strike a good balance between test cost and test effectiveness?
### 4.1 Experiment Setup
We selected Stoat [33], one of the state-of-the-art automated Android testing tools, to generate test cases as inputs for Prefest. The subject apps were chosen from both previous research [29, 32] and a popular open-source Android app list on GitHub [27] with the following criteria:
1. the app should contain at least five preferences in its setting;
2. the app should be able to run standalone instead of as a library, and should be compatible with Android API-19, which is the recommended environment for Stoat;
3. the app should achieve a code coverage of over 20% and not easily crash when tested with Stoat.
Eventually, 7 apps from previous research and 22 apps from the GitHub list satisfied the criteria. Together with our motivating example GoodWeather, 30 apps in total were chosen as our subjects. We also analyzed the apps’ sizes by lines of bytecode (i.e., lines of instructions, calculated by JaCoCo) and numbers of preferences. The results show that these apps are sufficiently diverse in complexity, ranging from 5k instructions with 5 preferences to over 200k instructions with 96 preferences.
To answer the RQs, we first applied Prefest with Target Mode (denoted as Prefest(T)) to all 30 apps. Then, we compared Prefest(T) with two other combination approaches for preference options:
- **NonDefault**—from Sec. 3.2, we know that each preference has a default value, under which the original test case is executed. In this strategy, each preference is set to a value other than its default value (a random value is used if there are multiple valid values).
- **Pairwise**—the most common type of t-way combinatorial testing [26]: for any two preferences among all preferences, all possible pairs of their option values are tested for each test case (a sketch of the required value pairs follows below). For ListPreference and EditTextPreference, which may have multiple values, only two values, the default one and a randomly selected one, are used to restrict the number of combinations in Pairwise.
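To illustrate why Pairwise is so much more expensive, the Java sketch below enumerates the value pairs that pairwise testing must cover across all preferences. Real pairwise tools pack these pairs into far fewer runs using covering arrays; the sketch shows only the covering requirement, which already grows quadratically with the number of preferences.

```java
import java.util.ArrayList;
import java.util.List;

// Enumerates the (preference, value, preference, value) pairs that a
// pairwise strategy must cover; illustrative, not an optimized generator.
final class PairwiseSpace {
    static List<String[]> requiredPairs(List<String> prefs,
                                        List<List<String>> values) {
        List<String[]> pairs = new ArrayList<>();
        for (int i = 0; i < prefs.size(); i++) {
            for (int j = i + 1; j < prefs.size(); j++) {
                for (String vi : values.get(i)) {
                    for (String vj : values.get(j)) {
                        pairs.add(new String[] {
                            prefs.get(i), vi, prefs.get(j), vj });
                    }
                }
            }
        }
        return pairs;
    }
}
```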
We also compared Prefest against an implementation without the Target Mode (denoted as Prefest(N)) in the comparative study. Prefest(N) uses the pairwise technique to construct the set $PS$ to be executed. In other words, compared with Pairwise, Prefest(N) applies pairwise testing not to all preferences but only to the relevant ones.
The comparative study was conducted only on GoodWeather and the seven apps from previous research, since it was extremely time-consuming and virtually impossible to conduct on all 30 apps.
---
\(^1\)https://github.com/Prefest2018/Prefest
The experimental environment was a physical machine with 8GB RAM and a 2.0GHz quad-core processor. The Android emulator used to run the tests was configured with 2GB RAM and the x86 ABI image (SDK 4.4.2, API level 19). Stoat ran on Ubuntu 14.04, configured with 1h for GUI exploration, 1h for MCMC sampling, at most 30 steps per sampled case, and 30 cases generated per sampling iteration. For comparison, we retrieved the test cases from Stoat’s records of MCMC sampling and ran the tests on Windows 10 under the above four strategies. For all experiments, we used JaCoCo [14] to calculate the coverage of instructions and branches.
### 4.2 RQ1: Effectiveness on Coverage and Bugs
Table 2 lists the 30 apps, their sizes measured by number of instructions and preferences, and the instruction and branch coverages achieved by the original test (Default) and Prefest(T), respectively. As we can see, with preference-wise testing, the coverages of all subjects improved, by 0.7% to 15.1% for instruction coverage and by 1.3% to 34.1% for branch coverage.
The average improvement is 6.8% and 12.3% for instruction and branch coverage, respectively. As an enhancement to an already state-of-the-art tool, this improvement is significant.
We can see that Prefest(T) achieved large improvements in some apps: among the 30 apps, an instruction coverage improvement over 10% is seen in 8 apps, a branch coverage improvement over 20% is seen in 7 apps, and a small improvement (less than 3%) is seen in 7 apps. We studied these apps and their original test cases, and found that the apps with larger improvements were better tested by Stoat than the apps with smaller improvements. This is reasonable, since Prefest is a complement to existing testing approaches and relies on the execution flows to analyze the relevant preferences. Therefore, preference-wise testing and other testing approaches can mutually boost each other's performance.
It is worth mentioning that for the apps Signal, AnkiDroid and Wikipedia, although the improvements of 4.02%, 5.39% and 5.84% (5.40%, 9.13% and 7.03%) in instruction (branch) coverages, respectively, may appear modest, they are significant: considering that these apps have more than 100K instructions, the additionally tested instructions and branches can, in absolute terms, exceed 2000 instructions and 100 branches.
We are particularly interested in branch coverage, since branches can cause different app behaviors with the same movements on the GUIs, and are common in complex apps. We conducted experiments to evaluate how well the branches can be tested with Prefest, and whether it is possible to use existing approaches to obtain similar or better results. We chose Default (Stoat), Prefest(N), Prefest(T), and an additional testing tool, Monkey [13], a clear winner among current test input generation tools [9], to conduct experiments on GoodWeather and the seven apps from existing research. We configured Monkey as [9] suggests, and the test time was also set to 1 hour, the time for MCMC sampling in Stoat.
The results are shown in Figure 3. For all the branches in the apps that can be affected by preferences, Prefest(T) and Prefest(N) covered 88% and 90% of the branches on average. Although Prefest(T) tries fewer preference option combinations than Prefest(N), in some apps it can achieve higher branch coverage, since Prefest(T) can select the exact options for ListPreferences to cover preference-related branches via concrete value calculation, whereas the pairwise combination strategy of Prefest(N) uses a random selection of the ListPreference value as the non-default value. Stoat and Monkey achieved 59% and 72% branch coverage on average, respectively. Considering that all preferences have default values, even when forbidding the setting of preferences, Stoat and Monkey should be able to achieve a branch coverage ranging from 30% to 50% from our observation. So, from this point of view, we can say that it is difficult to achieve high coverage for these preference-related branches, even with two of the most effective testing tools. However, with Prefest, the branch coverage can easily be improved to around 90%.
Prefest detected five additional bugs, as shown in Table 3. These bugs are all preference related, can only be found by testing specific functions under specific preference settings, and were not detected by Stoat. The reason is that Stoat usually missed some specific values of specific preferences, or sometimes even missed the setting screens, due to its random nature. The first bug causes data leaks while the others cause app crashes, which were logged as error messages by the Android system. All bugs have been reproduced. Only the bug in vanilla involves two preferences; the rest each involve one preference. Notably, the first four bugs were revealed for the first time, and we posted issues on GitHub; the last bug had already been reported by others. So far, the bugs in KiSS, vanilla and AmazeFileManager have been confirmed and fixed by developers. In particular, the bug revealed in vanilla was an old one introduced over one year ago, and the developers were happy to learn the root cause and be able to fix it. There has been no response to the other two bug issues, and we noticed that these two projects are no longer maintained. Nevertheless, since they cause app crashes or data leaks, we are confident that they are real bugs.
### 4.3 RQ2: Efficiency
To answer RQ2, we recorded the test time and the numbers of test-runs of Default and Prefest(T) on the 30 apps in Table 2. Time consumed by Prefest(T) consists of the preference analysis time and the test execution time, and Table 2 shows the total time, with the analysis time in parentheses.
Compared to the 107 minutes and 232 test-runs taken by Default on average, Prefest(T) took only 24 minutes and 18 test-runs, i.e., 22.4% of the test time and 8.0% of the test-runs of Default. The reason is that Prefest(T) aims only at the unreached blocks, and thus just needs to execute part of the test cases. Meanwhile, as discussed, Prefest(T) performs well on code coverage and bug detection, showing the value of our proposed “enhanced testing”.
The results also show the efficiency of our combined static and dynamic analyses. For 23 of the 30 apps, the analyses took about 1 minute, and another 6 apps took no more than 4 minutes. Only one app, Signal, took 11 minutes, due to its large size and long execution flows. This time cost is still acceptable compared with the original test time, and more time spent on complex apps is, we believe, worthwhile.
### 4.4 RQ3 & RQ4: Comparative Study
To answer RQ3 and RQ4, we ran experiments on GoodWeather and the seven apps from existing research with Default, Prefest(T), Prefest(N), NonDefault and Pairwise, and recorded the results in Table 4 and Table 5.
---
**Table 3: Bugs detected by Prefest**
<table>
<thead>
<tr>
<th>App</th>
<th>GitHub Issue URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>GoodWeather</td>
<td>github.com/qqq3/good-weather/issues/54</td>
</tr>
<tr>
<td>Radiobeacon</td>
<td>github.com/openbmap/radiocells-scanner-android/issues/223</td>
</tr>
<tr>
<td>KiSS</td>
<td>github.com/Neamar/KiSS/issues/1136</td>
</tr>
<tr>
<td>vanilla</td>
<td>github.com/vanilla-music/vanilla/issues/898</td>
</tr>
<tr>
<td>AmazeFileManager</td>
<td>github.com/TeamAmaze/AmazeFileManager/issues/1400</td>
</tr>
</tbody>
</table>
---
Figure 3: Preference-related branch coverage achieved by Stoat, Monkey, Prefest(T) and Prefest(N)
Table 4: Comparison of the instruction and branch coverages of different strategies
<table>
<thead>
<tr>
<th rowspan="2">Subject</th>
<th colspan="2">Default</th>
<th colspan="2">Prefest(T)</th>
<th colspan="2">Prefest(N)</th>
<th colspan="2">NonDefault</th>
<th colspan="2">Pairwise</th>
</tr>
<tr>
<th>Inst.%</th><th>Branch%</th>
<th>Inst.%</th><th>Branch%</th>
<th>Inst.%</th><th>Branch%</th>
<th>Inst.%</th><th>Branch%</th>
<th>Inst.%</th><th>Branch%</th>
</tr>
</thead>
<tbody>
<tr><td>GoodWeather</td><td>60.61</td><td>35.10</td><td>68.33</td><td>47.05</td><td>70.12</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>A2dpvolume</td><td>40.03</td><td>17.39</td><td>41.56</td><td>20.31</td><td>41.70</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Alwayson</td><td>44.55</td><td>30.71</td><td>46.10</td><td>33.63</td><td>47.64</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Suntimes</td><td>39.65</td><td>29.25</td><td>42.69</td><td>32.58</td><td>43.94</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Opensudoku</td><td>44.61</td><td>32.53</td><td>46.76</td><td>36.60</td><td>47.29</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Radio beacon</td><td>37.18</td><td>19.90</td><td>39.60</td><td>21.28</td><td>39.80</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Notepad</td><td>51.97</td><td>39.84</td><td>55.19</td><td>47.81</td><td>55.19</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Wikipedia</td><td>43.14</td><td>27.31</td><td>45.66</td><td>29.23</td><td>49.25</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Average Improvement%</td><td>–</td><td>–</td><td>6.4%</td><td>14.8%</td><td>8.9%</td><td>19.3%</td><td>5.4%</td><td>–</td><td>–</td><td>–</td></tr>
</tbody>
</table>
Table 5: Comparison of test-run numbers and test time of different strategies
<table>
<thead>
<tr>
<th rowspan="2">Subject</th>
<th colspan="2">Default</th>
<th colspan="2">Prefest(T)</th>
<th colspan="2">Prefest(N)</th>
<th colspan="2">NonDefault</th>
<th colspan="2">Pairwise</th>
</tr>
<tr>
<th>#Run</th><th>Time(min)</th>
<th>#Run</th><th>Time(min)</th>
<th>#Run</th><th>Time(min)</th>
<th>#Run</th><th>Time(min)</th>
<th>#Run</th><th>Time(min)</th>
</tr>
</thead>
<tbody>
<tr><td>GoodWeather</td><td>360</td><td>137</td><td>14</td><td>14</td><td>2035</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>A2dpvolume</td><td>180</td><td>73</td><td>6</td><td>8</td><td>1080</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Alwayson</td><td>240</td><td>121</td><td>15</td><td>22</td><td>1049</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Suntimes</td><td>150</td><td>88</td><td>22</td><td>32</td><td>1497</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Opensudoku</td><td>120</td><td>65</td><td>10</td><td>12</td><td>129</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Radio beacon</td><td>180</td><td>118</td><td>16</td><td>17</td><td>294</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Notepad</td><td>321</td><td>104</td><td>56</td><td>48</td><td>1661</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Wikipedia</td><td>180</td><td>123</td><td>17</td><td>32</td><td>1429</td><td>–</td><td>–</td><td>–</td><td>–</td><td>–</td></tr>
<tr><td>Average Percentage</td><td>–</td><td>–</td><td>9.0%</td><td>22.2%</td><td>522%</td><td>1557%</td><td>100%</td><td>–</td><td>–</td><td>–</td></tr>
</tbody>
</table>
From Table 4 we can see that Pairwise and Prefest(N) have the best and similar performance in improving instruction and branch coverages, with 8.9% and over 19% improvement for instruction and branch coverage, respectively. The marginally lower branch coverage of Prefest(N) compared to Pairwise is because Prefest(N) missed some relevant preferences due to short-circuit evaluation in the compilation stage; as Soot works on Java bytecode, these short-circuited preferences were not analyzed. However, such cases are extremely rare, and thus we can consider the effectiveness of Prefest(N) and Pairwise as equivalent. Prefest(T) comes next in effectiveness, with 6.4% and 14.8% improvement for instruction and branch coverages, respectively. A main reason for the higher coverage of Pairwise and Prefest(N) compared with Prefest(T) lies in that, for a few blocks, the behaviors can vary under different preference option combinations. For example, some blocks responsible for displaying GUIs can present different preferences on setting screens depending on the value of a certain preference, e.g., a switch deciding whether to display or hide a sub-menu of preferences. These cases cannot be handled by Prefest(T), but with a more exhaustive trial of different preferences, Prefest(N) is able to process most of them.
Nevertheless, Prefest(T) still retained 72% and 77% of the instruction and branch coverage improvements of Prefest(N) and Pairwise. Considering its time cost, we still consider Prefest(T) the best approach for its balance between effectiveness and cost. As Table 5 shows, the Pairwise approach was extremely time-consuming, taking over 43 times the original time cost. In fact, the comparative study on just these 8 apps took about 35 days, and we estimated that over four months would be needed to scale the study to all 30 apps. The fact that 24 days were spent applying Pairwise to the 8 apps indicates the necessity of our Prefest. By removing irrelevant preferences from the combinations, Prefest(N) cuts about two thirds of the time cost of Pairwise, but still needed more than 15 times the original test time. In contrast, Prefest(T) took only about half an hour to perform the tests, amounting to just one-fifth of the original test time.
Nowadays, fast development cycles are key to the success of Android app development due to the fast-changing mobile markets, and developers typically can only spare a little time for testing. The Target Mode, which tries to keep a good balance between test efficiency and effectiveness, is therefore more likely to be attractive to developers. If app quality is critical and time resources allow, developers can still choose Prefest(N) for its best effectiveness in coverage at a much lower time cost than Pairwise. However, as the experiments show, the effectiveness in bug detection of Prefest(N) and Prefest(T) is the same: all bugs found by Prefest(N) and Pairwise were also found by Prefest(T).
4.5 Threats to Validity
4.5.1 Internal Threats. The major threat comes from the fact that the original test cases may include operations that set preference options, which would change some option values set by Prefest and result in executing different code parts than planned. To mitigate this threat, Prefest takes into account the effects of simple preference setting methods, such as putBoolean() and putString(), when calculating the values of preference options.
Another threat comes from Soot, which we use to perform the analysis. Soot works on Java bytecode, so short-circuit evaluation in the compilation phase can cause the relevance between preferences and test cases to be missed. An alternative analysis framework based on the original source code could solve this problem, which we plan to study in the future.
The third threat comes from the fact that the current implementation of Prefest follows Stoat’s way of focusing on error messages produced by the Android system, and does not consider test assertions. If one needs assertions in the test cases, then, since Prefest generates new tests with different preference settings, new assertions will be needed.
4.5.2 External Threats. The main external threat is that our evaluation results may not generalize to other Android applications. Our experiments were performed on only thirty apps, since the experiments were time-consuming, and it is possible that the effectiveness varies for other apps. This concern is alleviated by the fact that the thirty apps are sufficiently diverse in complexity, ranging from 5k to over 200k instructions, and several of them, such as Wikipedia and Signal, are widely used in the real world.
As Prefest has so far only worked with Soot, another threat concerns whether Prefest can work with other test input generation approaches. We mitigate this threat by implementing Prefest as an independent tool that takes test cases as direct input. Prefest can therefore easily cooperate with other tools, as test cases can be obtained from their log files; manually written test cases are also accepted.
5 RELATED WORK
In this section, we discuss related research on Android testing and combinatorial testing.
5.1 Android Testing
Nowadays, frameworks and tools that automate test execution are widely used in industry, such as Robotium [28], monkeyrunner [25], and Appium [5]. To further improve automation, many research approaches have been proposed for automated test input generation, based on fuzz testing [2, 19], model-based testing [3, 7], and search-based techniques [20, 21]. Several works also apply symbolic or concolic execution to Android testing: Mirzaei et al. [23] present SIG-Droid, which combines model-based testing with symbolic execution to systematically generate test inputs for Android apps; Anand et al. [4] present ACTEve, which treats screen touches as user inputs and automatically and systematically generates event sequences with concolic execution to alleviate the path-explosion problem.
5.2 Combinatorial Testing
Combinatorial testing has been an active field of research for the last twenty years [26]. One major trend in this area has been minimizing the size of test sets for given combinatorial criteria, with greedy and heuristic algorithms [10, 11, 18, 34], genetic algorithms [22, 30], and even artificial intelligence [1].
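To make the cost concrete, the following Python sketch greedily builds a pairwise test suite (an illustrative toy of ours, not one of the cited algorithms): for ten binary preferences, exhaustive testing needs 2^10 = 1024 combinations, while pairwise coverage needs only a handful of tests.

```python
from itertools import combinations

def pairwise_suite(domains):
    """Greedy pairwise covering array: every pair of values of every
    two parameters appears together in at least one test."""
    n = len(domains)
    def canon(p, q):                      # order a pair by parameter index
        return (p, q) if p[0] < q[0] else (q, p)
    uncovered = {canon((i, a), (j, b))
                 for i, j in combinations(range(n), 2)
                 for a in domains[i] for b in domains[j]}
    tests = []
    while uncovered:
        (i, a), (j, b) = next(iter(uncovered))   # seed with one open pair
        test = {i: a, j: b}
        for k in range(n):
            if k not in test:
                # Pick the value covering the most still-open pairs.
                test[k] = max(domains[k],
                              key=lambda v: sum(
                                  canon((k, v), (m, w)) in uncovered
                                  for m, w in test.items()))
        row = tuple(test[k] for k in range(n))
        uncovered -= {canon((p, row[p]), (q, row[q]))
                      for p, q in combinations(range(n), 2)}
        tests.append(row)
    return tests

print(len(pairwise_suite([[False, True]] * 10)))  # far fewer than 1024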
In recent years, these combinatorial optimization techniques have also been adapted to Android testing. Two studies are closely related to this paper: TrimDroid [24], an approach that statically extracts dependencies among widgets to reduce the number of combinations in GUI testing, and PATDroid, which performs a hybrid program analysis that excludes irrelevant permissions to reduce unnecessary permission combinations for test cases. Compared with TrimDroid, which employs static analysis over the AUT to automatically generate test cases, Prefest uses both static and dynamic analyses of the AUT and the existing test cases to perform preference-wise testing under selected preference option combinations. Compared with PATDroid, Prefest targets preferences, which are harder to analyze because their values are passed through execution flows. In addition, PATDroid uses manually written test cases, while Prefest uses test cases generated by automatic testing tools, which are usually huge and therefore harder to reduce. In summary, we propose the Target Mode in Prefest, which reduces the test cost to a practical level.
6 CONCLUSION
We present Prefest, a preference-wise enhanced testing approach for Android applications. With a combined static and dynamic analysis, Prefest provides an automated solution to test apps only under the necessary preference option combinations with existing tests, and its Target Mode offers a further reduction in test cost. Our experimental results show that, at less than 1% of the test cost of pairwise preference combinations, Prefest achieves 6.8% and 12.3% improvements in code and branch coverage. Moreover, we found five additional preference-related bugs in real-world apps using Prefest.
ACKNOWLEDGMENTS
This research is supported by the National Key R&D Program of China (Grant No. 2017YFB1001801) and the National Natural Science Foundation of China (Nos. 6160204, 61632015).
Mapping of Applications to Platforms
Jian-Jia Chen
(slides are based on Peter Marwedel)
TU Dortmund, Informatik 12
Germany
January 23, 2018
These slides use Microsoft clip art. Microsoft copyright restrictions apply.
Structure of this course
2: Specification
3: ES-hardware
4: System software (RTOS, middleware, …)
5: Evaluation & validation (energy, cost, performance, …)
6: Application mapping
7: Optimization
8: Test
Design repository; Application Knowledge
Numbers denote sequence of chapters
Mapping of Applications to Platforms
Distinction between mapping problems
<table>
<thead>
<tr>
<th></th>
<th>Embedded</th>
<th>PC-like</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Architectures</strong></td>
<td>Frequently heterogeneous very compact</td>
<td>Mostly homogeneous not compact (x86 etc)</td>
</tr>
<tr>
<td><strong>x86 compatibility</strong></td>
<td>Less relevant</td>
<td>Very relevant</td>
</tr>
<tr>
<td><strong>Architecture fixed?</strong></td>
<td>Sometimes not</td>
<td>Yes</td>
</tr>
<tr>
<td><strong>Model of computation (MoCs)</strong></td>
<td>C+multiple models (data flow, discrete events, ...)</td>
<td>Mostly von Neumann (C, C++, Java)</td>
</tr>
<tr>
<td><strong>Optim. objectives</strong></td>
<td>Multiple (energy, size, ...)</td>
<td>Average performance dominates</td>
</tr>
<tr>
<td><strong>Real-time relevant</strong></td>
<td>Yes, very!</td>
<td>Hardly</td>
</tr>
<tr>
<td><strong>Applications</strong></td>
<td>Several concurrent apps.</td>
<td>Mostly single application</td>
</tr>
<tr>
<td><strong>Apps. known at design time</strong></td>
<td>Most, if not all</td>
<td>Only some (e.g. WORD)</td>
</tr>
</tbody>
</table>
Problem Description
Given
- A set of applications
- Scenarios on how these applications will be used
- A set of candidate architectures comprising
  - (possibly heterogeneous) processors
  - (possibly heterogeneous) communication architectures
  - possible scheduling policies
Find
- A mapping of applications to processors
- Appropriate scheduling techniques (if not fixed)
- A target architecture (if DSE is included)
Objectives and constraints
- Deadlines, temperatures
- Cost, performance, energy, reliability
Related Work
- Mapping to ECUs in automotive design
- Scheduling theory: provides insight for the mapping task → start times
- Hardware/software partitioning: can be applied if it supports multiple processors
- High performance computing (HPC): automatic parallelization, but only for
  - single applications,
  - fixed architectures,
  - no support for scheduling,
  - memory and communication model usually different
- High-level synthesis: provides useful terms like scheduling, allocation, assignment
- Optimization theory
Scope of mapping algorithms
Useful terms from hardware synthesis:
- **Resource Allocation**
Decision concerning type and number of available resources
- **Resource Assignment**
Mapping: Task $\rightarrow$ (Hardware) Resource
- **xx to yy binding:**
Describes a mapping from behavioral to structural domain, e.g. task to processor binding, variable to memory binding
- **Scheduling**
Mapping: Tasks $\rightarrow$ Task start times
Sometimes, resource assignment is considered being included in scheduling.
Classes of mapping algorithms considered in this course
- **Classical scheduling algorithms**
Mostly for independent tasks & ignoring communication, mostly for mono- and homogeneous multiprocessors (EDF, EDD, RM, DM, etc.)
- **Dependent tasks as considered in architectural synthesis**
Initially designed in different context, but applicable
- **Hardware/software partitioning**
Dependent tasks, heterogeneous systems, focus on resource assignment
- **Design space exploration using genetic algorithms**
Heterogeneous systems, incl. communication modeling
Scheduling with precedence constraints
Task graph and possible schedule:
Simultaneous Arrival Times: The Latest Deadline First (LDF) Algorithm
LDF [Lawler, 1973] reads the task graph and, among the tasks with no successors, inserts the one with the latest deadline into a queue. It then repeats this process, inserting tasks all of whose successors have already been selected.
At run time, the tasks are executed in the reverse of the generated total order.
LDF is non-preemptive and is optimal for mono-processors.
If no local deadlines exist, LDF performs just a topological sort.
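The construction above is easy to prototype. Below is a minimal Python sketch of LDF (ours, not from the slides; the tiny task graph and deadlines are invented for illustration):

```python
def ldf_order(successors, deadline):
    """Latest Deadline First (Lawler): build the schedule back to
    front, always selecting, among tasks whose successors are all
    already selected, the one with the latest deadline."""
    remaining, chosen, picked = set(successors), set(), []
    while remaining:
        ready = [t for t in remaining if successors[t] <= chosen]
        t = max(ready, key=lambda v: deadline[v])
        picked.append(t)
        chosen.add(t)
        remaining.remove(t)
    return picked[::-1]  # execute in the reverse of selection order

# A must precede B and C; C has the latest deadline, so it is
# selected first and therefore executed last.
print(ldf_order({"A": {"B", "C"}, "B": set(), "C": set()},
                {"A": 3, "B": 2, "C": 5}))   # ['A', 'B', 'C']
```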
Asynchronous Arrival Times: Modified EDF Algorithm
This case can be handled with a modified EDF algorithm. The key idea is to transform the given set of dependent tasks into a set of independent tasks with modified timing parameters [Chetto90]. This algorithm is optimal for mono-processor systems.
If preemption is not allowed, the heuristic algorithm developed by Stankovic and Ramamritham can be used.
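The Chetto-style transformation mentioned above can be sketched in a few lines. The sketch below follows the standard formulation (deadlines tightened backwards, d*_i = min(d_i, min over successors j of d*_j − C_j, and release times pushed forwards symmetrically); the helper names and the tiny example are our own, and the graph is assumed acyclic.

```python
def topo(succ):
    """Topological order via DFS (graph assumed acyclic)."""
    order, seen = [], set()
    def visit(t):
        if t not in seen:
            seen.add(t)
            for s in succ[t]:
                visit(s)
            order.append(t)
    for t in succ:
        visit(t)
    return order[::-1]

def chetto_transform(succ, pred, r, d, C):
    """Encode precedences into modified releases r* and deadlines d*,
    so the tasks can be scheduled by plain EDF as if independent."""
    d_star, r_star, order = dict(d), dict(r), topo(succ)
    for t in reversed(order):                # sinks first
        for s in succ[t]:
            d_star[t] = min(d_star[t], d_star[s] - C[s])
    for t in order:                          # sources first
        for p in pred[t]:
            r_star[t] = max(r_star[t], r_star[p] + C[p])
    return r_star, d_star

r_star, d_star = chetto_transform(
    {"A": {"B"}, "B": set()}, {"A": set(), "B": {"A"}},
    r={"A": 0, "B": 0}, d={"A": 10, "B": 10}, C={"A": 3, "B": 2})
# d*(A) = min(10, 10 - 2) = 8; r*(B) = max(0, 0 + 3) = 3
```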
Dependent tasks
The problem of deciding whether or not a schedule exists for a set of dependent tasks and a given deadline is NP-complete in the strong sense [Garey/Johnson].
Strategies:
1. Add resources, so that scheduling becomes easier
2. Split problem into static and dynamic part so that only a minimum of decisions need to be taken at run-time.
3. Use scheduling algorithms from high-level synthesis
Task graph (assumption: execution time = 1 for all tasks)
As soon as possible (ASAP) scheduling
ASAP: All tasks are scheduled as early as possible
Loop over (integer) time steps:
- Compute the set of unscheduled tasks for which all predecessors have finished their computation
- Schedule these tasks to start at the current time step.
As soon as possible (ASAP) scheduling: Example
(For the example task graph, ASAP assigns start times $\tau = 0, 1, \ldots, 5$; figure omitted.)
As late as possible (ALAP) scheduling
ALAP: All tasks are scheduled as late as possible
Start at last time step*:
Schedule tasks with no successors and tasks for which all successors have already been scheduled.
* Generate a list, starting at its end
As-late-as-possible (ALAP) scheduling: Example (figure omitted)
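For unit-time tasks, both schedules are a few lines of Python. The sketch below (ours, not from the slides) also computes mobility as the ALAP minus ASAP difference used as a priority later on; the diamond-shaped example graph is invented, and the graph is assumed acyclic.

```python
def asap(pred):
    """ASAP for unit-time tasks: per time step, schedule every task
    whose predecessors were all scheduled in earlier steps."""
    tau, remaining, t = {}, set(pred), 0
    while remaining:
        ready = {v for v in remaining if all(p in tau for p in pred[v])}
        for v in ready:
            tau[v] = t
        remaining -= ready
        t += 1
    return tau

def alap(succ, last_step):
    """ALAP: the mirror image, filled from the last time step backwards."""
    tau, remaining, t = {}, set(succ), last_step
    while remaining:
        ready = {v for v in remaining if all(s in tau for s in succ[v])}
        for v in ready:
            tau[v] = t
        remaining -= ready
        t -= 1
    return tau

# Diamond graph: a -> b, a -> c, b/c -> d.
pred = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
s, l = asap(pred), alap(succ, last_step=3)
print({v: l[v] - s[v] for v in pred})   # mobility of each task
```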
(Resource constrained) List Scheduling
List scheduling: extension of ALAP/ASAP method
Preparation:
- Topological sort of task graph $G=(V,E)$
- Computation of priority of each task:
- Possible priorities $u$:
- Number of successors
- Longest path
- Mobility = $\tau$ (ALAP schedule) - $\tau$ (ASAP schedule)
Mobility as a priority function
*Mobility* is not very precise
Algorithm
List(G(V,E), B, u) {
  i := 0;
  repeat {
    Compute the set of candidate tasks $A_i$;
    Compute the set of not yet terminated tasks $G_i$;
    Select $S_i \subseteq A_i$ of maximum priority $u$ such that $|S_i| + |G_i| \leq B$;  (*resource constraint*)
    foreach ($v_j \in S_i$): $\tau(v_j) := i$;  (*set start time*)
    i := i + 1;
  } until (all nodes are scheduled);
  return ($\tau$);
}
Example
Assuming $B = 2$, unit execution time and $u : \text{path length}$
$u(a) = u(b) = 4$
$u(c) = u(f) = 3$
$u(d) = u(g) = u(h) = u(j) = 2$
$u(e) = u(i) = u(k) = 1$
$\forall i : G_i = \emptyset$ (unit execution times, so no task is still running at a step boundary)
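The List pseudocode above translates almost directly to Python. This runnable sketch (ours) assumes unit execution times, so the set $G_i$ of still-running tasks is always empty, as in the example; the small graph and priorities are invented.

```python
def list_schedule(pred, u, B):
    """Resource-constrained list scheduling for unit-time tasks on
    B identical resources; u maps each task to its priority."""
    tau, remaining, i = {}, set(pred), 0
    while remaining:
        # Candidate tasks A_i: all predecessors already finished.
        ready = sorted((v for v in remaining
                        if all(p in tau for p in pred[v])),
                       key=lambda v: -u[v])
        for v in ready[:B]:               # at most B starts per step
            tau[v] = i
        remaining -= set(ready[:B])
        i += 1
    return tau

# Two two-task chains plus one extra task, B = 2 resources,
# path length as priority u (ties are broken arbitrarily).
pred = {"a": [], "b": ["a"], "c": [], "d": ["c"], "e": []}
u = {"a": 2, "b": 1, "c": 2, "d": 1, "e": 1}
print(list_schedule(pred, u, B=2))
```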
(Time constrained) Force-directed scheduling
- Goal: balanced utilization of resources
- Based on spring model;
- Originally proposed for high-level synthesis
Evaluation of HLS-Scheduling
- Focus on considering dependencies
- Mostly heuristics, few proofs on optimality
- Not using global knowledge about periods etc.
- Considering discrete time intervals
- Variable execution time available only as an extension
- Includes modeling of heterogeneous systems
Overview
Scheduling of aperiodic tasks with real-time constraints: Table with some known algorithms:
<table>
<thead>
<tr>
<th></th>
<th>Equal arrival times; non-preemptive</th>
<th>Arbitrary arrival times; preemptive</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Independent tasks</strong></td>
<td>EDD (Jackson)</td>
<td>EDF (Horn)</td>
</tr>
<tr>
<td><strong>Dependent tasks</strong></td>
<td>LDF (Lawler), ASAP, ALAP, LS, FDS</td>
<td>EDF* (Chetto)</td>
</tr>
</tbody>
</table>
Conclusion
- HLS-based scheduling
- ASAP
- ALAP
- *List scheduling* (LS)
- *Force-directed scheduling* (FDS)
- Evaluation
Classes of mapping algorithms considered in this course
- **Classical scheduling algorithms**
Mostly for independent tasks & ignoring communication, mostly for mono- and homogeneous multiprocessors (EDF, EDD, RM, DM, etc.)
- **Dependent tasks as considered in architectural synthesis**
Initially designed in different context, but applicable
- **Hardware/software partitioning**
Dependent tasks, heterogeneous systems, focus on resource assignment
- **Design space exploration using genetic algorithms**
Heterogeneous systems, incl. communication modeling
Hardware/software partitioning
No need to consider special hardware in the future?
Correct for fixed functionality, but wrong in general: “By the time MPEG-\(n\) can be implemented in software, MPEG-\(n+1\) has been invented” [de Man]
Functionality to be implemented in software or in hardware?
Decision based on hardware/software partitioning, a special case of hardware/software codesign.
Codesign Tool (COOL) as an example of HW/SW partitioning
Inputs to COOL:
1. Target technology
2. Design constraints
3. Required behavior
Hardware/software codesign: approach
(Target architecture, figure omitted: processors $P_1$ and $P_2$ plus application-specific hardware.)
Steps of the COOL partitioning algorithm (1)
1. Translation of the behavior into an internal graph model
2. Translation of the behavior of each node from VHDL into C
3. Compilation
- All C programs compiled for the target processor,
- Computation of the resulting program size,
- Estimation of the resulting execution time (simulation input data might be required)
4. Synthesis of hardware components:
∀ leaf nodes, application-specific hardware is synthesized.
High-level synthesis sufficiently fast.
Steps of the COOL partitioning algorithm (2)
5. Flattening of the hierarchy:
• Granularity used by the designer is maintained.
• Cost and performance information added to the nodes.
• Precise information required for partitioning is pre-computed
6. Generating and solving a mathematical model of the optimization problem:
• Integer linear programming ILP model for optimization. Optimal with respect to the cost function (approximates communication time)
Steps of the COOL partitioning algorithm (3)
7. **Iterative improvements:**
Adjacent nodes mapped to the same hardware component are now merged.
Steps of the COOL partitioning algorithm (4)
8. **Interface synthesis:**
After partitioning, the glue logic required for interfacing processors, application-specific hardware and memories is created.
Example
Hardware/Software Configurations
- Running task $i$ on the FPGA requires $C_i$ configurable logic blocks (CLBs) and results in execution time $t_{i,h}$ (purely on FPGA)
- Running task $i$ in software (on the uniprocessor) results in execution time $t_{i,s}$ (purely in software)
What is the minimum number of CLBs required for the task graph when the deadline is set to $D$?
An ILP model for HW/SW partitioning
- $X_v$: =1 if node $v$ is mapped to FPGA and 0 otherwise.
- Cost function: minimize $\sum_{v \in V} C_v X_v$
- Constraints:
- Let $F_i = t_{i,h} X_i + t_{i,s} (1-X_i)$
- If $X_2 = X_3 = 0$, then the finishing time is
- $F_1 + F_2 + F_3 + F_4$
- If $X_2 = X_3 = 1$, then the finishing time is
- $F_1 + \max\{F_2, F_3\} + F_4$
- If $X_2 = 1$ and $X_3 = 0$, then the finishing time is
- $F_1 + \max\{F_2, F_3\} + F_4$
- If $X_2 = 0$ and $X_3 = 1$, then the finishing time is
- $F_1 + \max\{F_2, F_3\} + F_4$
An ILP model for HW/SW partitioning
- $X_v$: =1 if node $v$ is mapped to FPGA and 0 otherwise.
- Cost function: minimize $\sum_{v \in V} C_v X_v$
- Constraints:
- Let $F_i = t_{i,h}X_i + t_{i,s}(1-X_i)$
- If $X_2 + X_3 = 0$, then the finishing time is
- $F_1 + F_2 + F_3 + F_4$
- If $X_2 + X_3 \geq 1$, then the finishing time is
- $F_1 + \max\{F_2, F_3\} + F_4$
- Logical Constraints:
- $(X_2 \text{ OR } X_3)$ implies $F_1 + \max\{F_2, F_3\} + F_4 \leq D$
- $\neg(X_2 \text{ OR } X_3)$ implies $F_1 + F_2 + F_3 + F_4 \leq D$
Transforming Nonlinear Operation “max” (only for your reference)
- Method 1: \( G = \max\{F_2, F_3\} \)
- \( F_2 \leq G \)
- \( F_3 \leq G \)
- \( F_1 + G + F_4 \leq D \) when \((X_2 \text{ OR } X_3)\)
- Method 2: \( G \leq \max\{F_2, F_3\} \)
- Let \( f \) be a sufficiently large positive constant (e.g., \( 10^6 \cdot D \))
- Let \( z \) be a binary variable, either 0 or 1
- It can be formulated by using the following four linear constraints:
- \( F_2 \leq F_3 + fz \)
- \( F_3 \leq F_2 + f(1-z) \)
- \( G \leq F_3 + fz \)
- \( G \leq F_2 + f(1-z) \)
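Putting the pieces together, here is a sketch of the whole deadline model in Python using the PuLP library (assumed to be available, with its default CBC solver). The costs, times, and deadline are invented numbers for illustration; the variables X, y, and G correspond to the slides' $X_v$, the OR indicator, and the max bound.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

C   = {1: 5, 2: 8, 3: 6, 4: 4}        # CLB cost if mapped to FPGA
t_h = {1: 2, 2: 3, 3: 4, 4: 1}        # execution time on FPGA
t_s = {1: 9, 2: 12, 3: 10, 4: 5}      # execution time in software
D, M = 25, 10**4                      # deadline and big-M constant

prob = LpProblem("hw_sw_partitioning", LpMinimize)
X = {v: LpVariable(f"X{v}", cat=LpBinary) for v in C}
y = LpVariable("y", cat=LpBinary)     # y = X2 OR X3
G = LpVariable("G", lowBound=0)       # G >= max(F2, F3)

F = {v: t_h[v] * X[v] + t_s[v] * (1 - X[v]) for v in C}

prob += lpSum(C[v] * X[v] for v in C)   # minimize CLB usage
prob += y >= X[2]
prob += y >= X[3]
prob += y <= X[2] + X[3]
prob += G >= F[2]
prob += G >= F[3]
# If X2 OR X3, tasks 2 and 3 overlap; otherwise they run back to back.
prob += F[1] + G + F[4] <= D + M * (1 - y)
prob += F[1] + F[2] + F[3] + F[4] <= D + M * y
prob.solve()
print({v: int(value(X[v])) for v in C}, value(prob.objective))
```

With these made-up numbers, mapping only task 2 to the FPGA (cost 8) already meets the deadline, which the solver confirms.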
Logical Operations
“AND/OR/NOT/Implication” (only for your reference)
- Logical $x_1$ AND $x_2$:
- Use the linear constraints $y_1 \geq x_1 + x_2 - 1$, $y_1 \leq x_1$, $y_1 \leq x_2$, $0 \leq y_1 \leq 1$, where $y_1$ is constrained to be an integer. This enforces the desired relationship.
- Logical $x_1$ OR $x_2$:
- Use the linear constraints $y_2 \leq x_1 + x_2$, $y_2 \geq x_1$, $y_2 \geq x_2$, $0 \leq y_2 \leq 1$, where $y_2$ is constrained to be an integer.
- Logical NOT $x_1$:
- Use $y_3 = 1 - x_1$.
- Logical implication: To express $y_4 = (x_1 \Rightarrow x_2)$ (i.e., $y_4 = \neg x_1 \lor x_2$), we can adapt the construction for logical OR.
- Use the linear constraints $y_4 \leq 1 - x_1 + x_2$, $y_4 \geq 1 - x_1$, $y_4 \geq x_2$, $0 \leq y_4 \leq 1$, where $y_4$ is constrained to be an integer.
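A short exhaustive check (ours) confirms that, e.g., the AND linearization admits exactly the intended value of $y_1$ for every binary assignment:

```python
# For each binary (x1, x2), the constraints leave exactly one
# feasible y, and it equals x1 AND x2.
for x1 in (0, 1):
    for x2 in (0, 1):
        ys = [y for y in (0, 1)
              if y >= x1 + x2 - 1 and y <= x1 and y <= x2]
        assert ys == [x1 & x2]
print("AND linearization is exact on binaries")
```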
Separation of scheduling and partitioning
Combined scheduling/partitioning is very complex, hence the heuristic:
1. Compute an estimated schedule.
2. Perform partitioning for the estimated schedule.
3. Perform final scheduling.
4. If the final schedule does not meet the time constraint, go to 1 using a reduced overall timing constraint.
(Iteration diagram omitted: in the 1st iteration the approximate execution time of the specification is compared with the actual execution time; a new, tightened specification drives the 2nd iteration.)
HW/SW partitioning in the context of mapping applications to processors
- Handling of heterogeneous systems
- Handling of task dependencies
- Considers communication (at least in COOL)
- Considers memory sizes etc (at least in COOL)
- For COOL: just homogeneous processors
- No link to scheduling theory
SPARE Slides for FDS
Phase 1: Generation of ASAP and ALAP Schedule
Next: computation of “forces”
- Direct forces push each task into the direction of lower values of $D(i)$.
- Impact of direct forces on dependent tasks taken into account by indirect forces
- Balanced resource usage $\approx$ smallest forces
- For our simple example and time constraint=6: result = ALAP schedule
Scheduling – An example
Solve the differential equation
\[ y'' + 3zy' + 3y = 0 \]
This can be calculated using this iterative algorithm
\[
\text{while}(z < a) \text{ repeat}
\]
\[
zl := z + dz; \\
u_l := u - (3 \cdot z \cdot u \cdot dz) - (3 \cdot y \cdot dz); \\
y_l := y + (u \cdot dz); \\
z := zl; \\
u := ul; \\
y := yl;
\]
1. Compute time frames $R(j)$;
2. Compute “probability“ $P(j,i)$ of assignment $j \rightarrow i$
$R(j) = \{ \text{ASAP-control step} \ldots \text{ALAP-control step} \}$
$$P(j,i) = \begin{cases} \frac{1}{|R(j)|} & \text{if } i \in R(j) \\ 0 & \text{otherwise} \end{cases}$$
3. Compute “distribution” $D(i)$
(# Operations in control step $i$)
\[ D(i) = \sum_{j, \text{type}(j) \in H} P(j, i) \]
4. Compute direct forces (1)
- $\Delta P_i(j, i')$: $\Delta$ for force on task $j$ in time step $i'$, if $j$ is mapped to time step $i$.
The new probability for executing $j$ in $i$ is 1; the previous one was $P(j, i)$.
The new probability for executing $j$ in $i' \neq i$ is 0; the previous one was $P(j, i')$.
\[
\Delta P_i(j, i') = \begin{cases}
1 - P(j, i) & \text{if } i = i' \\
-P(j, i') & \text{otherwise}
\end{cases}
\]
4. Compute direct forces (2)
- $SF(j, i)$ is the overall change of direct forces resulting from the mapping of $j$ to time step $i$.
$$SF(j, i) = \sum_{i' \in R(j)} D(i') \Delta P_i(j, i')$$
$$\Delta P_i(j, i') = \begin{cases}
1 - P(j, i) & \text{if } i = i' \\
-P(j, i') & \text{otherwise}
\end{cases}$$
Example
$$SF(1, 1) = \frac{17}{6}\left(1 - \frac{1}{2}\right) - \frac{14}{6} \cdot \frac{1}{2} = \frac{1}{2}\left(\frac{17}{6} - \frac{14}{6}\right) = \frac{1}{2} \cdot \frac{3}{6} = \frac{1}{4}$$
4. Compute direct forces (3)
Direct force if task/operation 1 is mapped to time step 2
\[ D(1) = \frac{17}{6}, \quad D(2) = \frac{14}{6}, \quad D(3) = \frac{5}{6}, \quad D(4) = 0 \]
\[ SF(1, 2) = D(1)\,\Delta P_2(1, 1) + D(2)\,\Delta P_2(1, 2) = \frac{17}{6} \times (-0.5) + \frac{14}{6} \times 0.5 = -\frac{17}{12} + \frac{14}{12} = -\frac{3}{12} = -\frac{1}{4} \]
5. Compute indirect forces (1)
Mapping task 1 to time step 2 implies mapping task 2 to time step 3
Consider predecessor and successor forces:
\[ V F(j, i) = \sum_{j' \in \text{predecessor of } j} \sum_{i' \in I} D(i') \Delta P_{j, i}(j', i') \]
\[ N F(j, i) = \sum_{j' \in \text{successor of } j} \sum_{i' \in I} D(i') \Delta P_{j, i}(j', i') \]
\( \Delta P_{j, i}(j', i') \) is the \( \Delta \) in the probability of mapping \( j' \) to \( i' \) resulting from the mapping of \( j \) to \( i \)
5. Compute indirect forces (2)
\[ VF(j, i) = \sum_{j' \in \text{predecessor of } j} \sum_{i' \in I} D(i') \Delta P_{j,i}(j', i') \]
\[ NF(j, i) = \sum_{j' \in \text{successor of } j} \sum_{i' \in I} D(i') \Delta P_{j,i}(j', i') \]
Example: Computation of successor forces for task 1 in time step 2
\[ NF(1, 2) = D(2)\,\Delta P_{1,2}(2, 2) + D(3)\,\Delta P_{1,2}(2, 3) = \frac{14}{6} \times (-0.5) + \frac{5}{6} \times 0.5 = -\frac{14}{12} + \frac{5}{12} = -\frac{9}{12} = -\frac{3}{4} \]
Overall forces
The total force is the sum of direct and indirect forces:
\[ F(j, i) = SF(j, i) + VF(j, i) + NF(j, i) \]
In the example:
\[ F(1, 2) = SF(1, 2) + NF(1, 2) = \frac{-1}{4} + \left( -\frac{3}{4} \right) = -1 \]
The low value suggests mapping task 1 to time step 2
Overall approach
procedure forceDirectedScheduling;
begin
AsapScheduling;
AlapScheduling;
while not all tasks scheduled do
begin
select task $T$ with smallest total force;
schedule task $T$ at time step minimizing forces;
recompute forces;
end;
end;
May be repeated for different task/processor classes
Not sufficient for today's complex, heterogeneous hardware platforms
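The probability, distribution, and direct-force computations above fit in a few lines of Python. This sketch is ours, with an invented three-task example rather than the slides' task graph; `ranges[j]` plays the role of $R(j)$.

```python
def distribution(ranges):
    """D(i): expected number of operations in control step i, with
    P(j, i) uniform over each task's ASAP..ALAP range R(j)."""
    D = {}
    for R in ranges.values():
        for i in R:
            D[i] = D.get(i, 0) + 1 / len(R)
    return D

def direct_force(j, i, ranges, D):
    """SF(j, i): sum over i' in R(j) of D(i') * delta-P, where fixing
    j to step i raises P(j, i) to 1 and drops the other steps to 0."""
    R = ranges[j]
    return sum(D[ip] * ((1 - 1 / len(R)) if ip == i else -1 / len(R))
               for ip in R)

ranges = {1: [1, 2], 2: [2, 3], 3: [1, 2, 3]}   # R(j) per task
D = distribution(ranges)
print(direct_force(1, 1, ranges, D))   # -0.25: a low force favours step 1
```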
SPARE Slides for COOL
An integer linear programming (ILP) model for HW/SW partitioning
Notation:
- Index set $\mathcal{V}$ denotes task graph nodes.
- Index set $\mathcal{L}$ denotes task graph node types e.g. square root, DCT or FFT.
- Index set $\mathcal{M}$ denotes hardware component types. e.g. hardware components for the DCT or the FFT.
- Index set $\mathcal{J}$ of hardware component instances.
- Index set $\mathcal{KP}$ denotes processors. All processors are assumed to be of the same type.
An ILP model for HW/SW partitioning
- $X_{v,m} = 1$ if node $v$ is mapped to hardware component type $m \in M$ and 0 otherwise.
- $Y_{v,k} = 1$ if node $v$ is mapped to processor $k \in KP$ and 0 otherwise.
- $NY_{l,k} = 1$ if at least one node of type $l$ is mapped to processor $k \in KP$ and 0 otherwise.
- $Type$ is a mapping from task graph nodes to their types: $Type : V \rightarrow L$
- The cost function accumulates the cost of hardware units:
$$C = \text{cost(processors)} + \text{cost(memories)} + \text{cost(application specific hardware)}$$
Constraints
Operation assignment constraints
\[ \forall v \in V : \sum_{m \in M} X_{v,m} + \sum_{k \in KP} Y_{v,k} = 1 \]
All task graph nodes have to be mapped either in software or in hardware.
Variables are assumed to be integers.
Additional constraints to guarantee they are either 0 or 1:
\[ \forall v \in V : \forall m \in M : X_{v,m} \leq 1 \]
\[ \forall v \in V : \forall k \in KP : Y_{v,k} \leq 1 \]
Operation assignment constraints (2)
\[ \forall l \in L, \ \forall v : Type(v) = l, \ \forall k \in KP : NY_{l,k} \geq Y_{v,k} \]
For all types $l$ of operations and for all nodes $v$ of this type: if $v$ is mapped to some processor $k$, then that processor must implement the functionality of $l$.
Decision variables must also be 0/1 variables:
\[ \forall l \in L, \ \forall k \in KP : NY_{l,k} \leq 1 \]
Resource & design constraints
- $\forall m \in M$, the cost (area) for components of type $m$ is equal to the sum of the costs of the components of that type. This cost should not exceed its maximum.
- $\forall k \in KP$, the cost for associated data storage area should not exceed its maximum.
- $\forall k \in KP$ the cost for storing instructions should not exceed its maximum.
- The total cost $(\Sigma_{m \in M})$ of HW components should not exceed its maximum.
- The total cost of data memories $(\Sigma_{k \in KP})$ should not exceed its maximum.
- The total cost instruction memories $(\Sigma_{k \in KP})$ should not exceed its maximum.
Scheduling
Processor $p_1$
FIR$_1$
FIR$_2$
ASIC $h_1$
Communication channel $c_1$
$\quad v_1 \quad v_2 \quad v_3 \quad v_4$
$\quad v_5 \quad v_6 \quad v_7 \quad v_8$
$\quad v_9 \quad v_{10}$
$\quad v_{11}$
$\quad \ldots \quad v_3 \quad \ldots \quad v_4 \quad \ldots \quad v_7 \quad \ldots \quad v_8 \quad \ldots \quad e_3 \quad \ldots \quad e_4 \quad \ldots \quad t \quad \ldots \quad t \quad \ldots \quad t$
Scheduling / precedence constraints
- For all nodes $v_{i_1}$ and $v_{i_2}$ that are potentially mapped to the same processor or hardware component instance, introduce a binary decision variable $b_{i_1,i_2}$ with $b_{i_1,i_2}=1$ if $v_{i_1}$ is executed before $v_{i_2}$ and $= 0$ otherwise.
Define constraints of the type
- $(\text{end-time of } v_{i_1}) \leq (\text{start time of } v_{i_2})$ if $b_{i_1,i_2}=1$ and
- $(\text{end-time of } v_{i_2}) \leq (\text{start time of } v_{i_1})$ if $b_{i_1,i_2}=0$
- Ensure that the schedule for executing operations is consistent with the precedence constraints in the task graph.
- Approach fixes the order of execution
Other constraints
- **Timing constraints**
These constraints can be used to guarantee that certain time constraints are met.
- Some less important constraints omitted ..
Example
HW types $H_1$, $H_2$ and $H_3$ with costs of 20, 25, and 30.
Processors of type $P$.
Tasks $T_1$ to $T_5$.
Execution times:
<table>
<thead>
<tr>
<th>$T$</th>
<th>$H_1$</th>
<th>$H_2$</th>
<th>$H_3$</th>
<th>$P$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>20</td>
<td></td>
<td></td>
<td>100</td>
</tr>
<tr>
<td>2</td>
<td></td>
<td>20</td>
<td></td>
<td>100</td>
</tr>
<tr>
<td>3</td>
<td></td>
<td></td>
<td>12</td>
<td>10</td>
</tr>
<tr>
<td>4</td>
<td></td>
<td>12</td>
<td></td>
<td>10</td>
</tr>
<tr>
<td>5</td>
<td>20</td>
<td></td>
<td></td>
<td>100</td>
</tr>
</tbody>
</table>
Operation assignment constraints (1)
\[ \forall v \in V : \sum_{m \in M} X_{v,m} + \sum_{k \in KP} Y_{v,k} = 1 \]
\[ X_{1,1} + Y_{1,1} = 1 \] (task 1 mapped to \( H1 \) or to \( P \))
\[ X_{2,2} + Y_{2,1} = 1 \]
\[ X_{3,3} + Y_{3,1} = 1 \]
\[ X_{4,3} + Y_{4,1} = 1 \]
\[ X_{5,1} + Y_{5,1} = 1 \]
Assume types of tasks are $l = 1, 2, 3, 3, \text{ and } 1$.
$\forall l \in L, \ \forall v : Type(v) = l, \ \forall k \in KP : NY_{l,k} \geq Y_{v,k}$
Functionality 3 to be implemented on processor if node 4 is mapped to it.
Other equations
Time constraints lead to: application-specific hardware is required to meet the time constraint of \( \leq 100 \) time units.
<table>
<thead>
<tr>
<th>( T )</th>
<th>( H1 )</th>
<th>( H2 )</th>
<th>( H3 )</th>
<th>( P )</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>20</td>
<td>-</td>
<td>-</td>
<td>100</td>
</tr>
<tr>
<td>2</td>
<td>-</td>
<td>20</td>
<td>-</td>
<td>100</td>
</tr>
<tr>
<td>3</td>
<td>-</td>
<td>-</td>
<td>12</td>
<td>10</td>
</tr>
<tr>
<td>4</td>
<td>-</td>
<td>-</td>
<td>12</td>
<td>10</td>
</tr>
<tr>
<td>5</td>
<td>20</td>
<td>-</td>
<td>-</td>
<td>100</td>
</tr>
</tbody>
</table>
Cost function:
\[
C = 20 \#(H1) + 25 \#(H2) + 30 \#(H3) + \text{cost(processor)} + \text{cost(memory)}
\]
Result
For a time constraint of 100 time units and \( \text{cost}(P) < \text{cost}(H3) \):
Solution (educated guess):
\( T1 \rightarrow H1 \)
\( T2 \rightarrow H2 \)
\( T3 \rightarrow P \)
\( T4 \rightarrow P \)
\( T5 \rightarrow H1 \)
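Because the instance is tiny, the educated guess can be checked by brute force. The sketch below is ours and makes two assumptions (the task-graph figure is omitted): the five tasks execute sequentially, and tasks mapped to the same hardware type share one instance; the constant processor and memory costs are left out of the comparison.

```python
from itertools import product

options = {                     # task: {resource: execution time}
    1: {"H1": 20, "P": 100},
    2: {"H2": 20, "P": 100},
    3: {"H3": 12, "P": 10},
    4: {"H3": 12, "P": 10},
    5: {"H1": 20, "P": 100},
}
hw_cost = {"H1": 20, "H2": 25, "H3": 30}
D = 100                         # time constraint

best = None
tasks = sorted(options)
for combo in product(*(options[t].items() for t in tasks)):
    if sum(time for _, time in combo) > D:          # sequential tasks
        continue
    cost = sum(hw_cost[r]
               for r in {res for res, _ in combo if res != "P"})
    if best is None or cost < best[0]:
        best = (cost, dict(zip(tasks, (res for res, _ in combo))))
print(best)   # (45, {1: 'H1', 2: 'H2', 3: 'P', 4: 'P', 5: 'H1'})
```

Under these assumptions the search confirms the guessed mapping, at a hardware cost of 45 (one H1 instance shared by T1 and T5, plus one H2).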
Application example
Audio lab (mixer, fader, echo, equalizer, balance units); slow SPARC processor
1µ ASIC library
Allowable delay of 22.675 µs (~ 44.1 kHz)
Outdated technology; just a proof of concept.
Running time for COOL optimization: only simple models can be solved optimally.
Deviation from optimal design: hardly any loss in design quality.
Running time for heuristic: (figure omitted)
Design space for audio lab
Everything in software: $72.9 \mu s$, $0 \lambda^2$
Everything in hardware: $3.06 \mu s$, $457.9 \times 10^6 \lambda^2$
Lowest cost for given sample rate: $18.6 \mu s$, $78.4 \times 10^6 \lambda^2$
Positioning of COOL
COOL approach:
- shows that a formal model of hardware/software codesign is beneficial; ILP modeling can lead to useful implementations even if optimal results are available only for small designs.
Other approaches for HW/SW partitioning:
- starting with everything mapped to hardware; gradually moving to software as long as timing constraint is met.
- starting with everything mapped to software; gradually moving to hardware until timing constraint is met.
- Binary search.
Roadmap for Enhanced Languages and Methods to Aid Verification
Gary T. Leavens, Jean-Raymond Abrial, Don Batory, Michael Butler, Alessandro Coglio, Kathi Fisler, Eric Hehner, Cliff Jones, Dale Miller, Simon Peyton-Jones, Murali Sitaraman, Douglas R. Smith, and Aaron Stump
TR #06-21
July 2006
Keywords: Verification, verified software grand challenge, specification languages, program generation, correctness by construction, programming languages, tools, annotations.
2006 CR Categories:
Submitted for publication.
Copyright © 2006 by the authors.
Department of Computer Science
226 Atanasoff Hall
Iowa State University
Ames, Iowa 50011-1041, USA
Abstract
This roadmap describes ways that researchers in four areas — specification languages, program generation, correctness by construction, and programming languages — might help further the goal of verified software. It also describes what advances the “verified software” grand challenge might anticipate or demand from work in these areas. That is, the roadmap is intended to help foster collaboration between the grand challenge and these research areas.
A common goal for research in these areas is to establish language designs and tool architectures that would allow multiple annotations and tools to be used on a single program. In the long term, researchers could try to unify these annotations and integrate such tools.
1 Introduction
Hoare has proposed a grand challenge project, formerly called the “verifying compiler” grand challenge [64], and now called the “verified software” grand challenge by Hoare, Misra, and Shankar [69]. The original idea was to automatically check correctness of programs that are “specified by types, assertions, and other redundant annotations.” However, the current version of the grand challenge recognizes the possibility of many tools, some of which may require human intervention or assistance. In any case, verification would be based on the text of the program and the annotations contained within it.
1.1 Audience
This report is addressed to two audiences. The first is researchers interested in program verification, especially related to the "verified software" grand challenge.
The second is researchers in the following areas:
**specification languages** that describe behavior or properties to be verified,
**program generation** that automatically synthesizes code,
**correctness by construction** that concerns development and documentation of implementations especially to facilitate verification, and
**programming languages** that describe algorithms and data.
The report is addressed to researchers in these four areas who are interested in verification, specifically how their work might aid the verifying software grand challenge. This report explains what these four areas might do to help the overall grand challenge project and thus foster the goal of verified software within the scope of the grand challenge project. It is not intended to suggest an overall research agenda for any of these areas.
### 1.2 Motivation
There are many approaches to verification, all of which are embraced by the grand challenge effort. One can write or find code and verify it using a variety of tools and approaches.
While recognizing the value of many approaches to producing verified software, researchers in the four areas mentioned above are often motivated by the idea of gaining benefits (in ease, productivity, or power of verification) by providing the verifier with more information than just a bare program in some standard programming language. Verifying a bare program after-the-fact has the following fundamental problems.
- Without a specification or some annotations in the code, the properties that one can verify must be implicit and thus very weak, such as that the program will not crash or throw exceptions.
- Even with a specification, a program can be arbitrarily difficult to verify (due to lack of modularity or other artificial complexities).
With regard to the first point, even adding some partial specifications makes the verification problem more interesting and the results more useful. This is a potentially valuable technique for legacy code. For example, one might specify that a function returns a list of length equal to its input, which is only a partial specification of what the function does. Indeed, there is an entire spectrum of properties that one might consider, as shown in Figure 1. So there is not necessarily a unique best specification for a function, since some kinds of properties, such as resource consumption, its behavior in a transactional setting, its real-time behavior, and so on, may best be thought of as outside of the traditional specification of functional behavior.

With regard to the second point, researchers believe that information about design decisions made in the program’s development can be of great use to the
verification process. Well-known examples are annotations for loops and object
invariants, but information can also be obtained from the process of generating a
program (up to and including a complete proof), and the process of constructing
a program and its proof hand in hand. Intermediate modeling and refinement
steps are also believed to greatly aid verification and may in the limit constitute a
proof. Types in programming languages can also be augmented with additional
information related to correctness proofs, and other program annotations, such
as those describing ownership in the heap, can be of great value. To summarize,
the motivation for all these areas is to make such information available to a
verifier.
1.3 Limitations
The “research roadmap” that follows is limited in several ways.
First, the roadmap focuses on the four research areas named above and
their relation to verification. Other techniques and research areas related to
verified software are largely ignored. Furthermore, although there are many
ways in which these four research areas might aid the general goal of more
reliable software, this roadmap only focuses on the specific ways that these
areas might produce verified or more easily verifiable software in the context
of the grand challenge project. Much research is already going on in all of
these areas to promote more reliable software, and such research would also
contribute, indirectly, to the goal of making software easier to verify. However,
discussing all such research would lead to a very broad survey which would be
of less use to the verified software grand challenge.
The second way in which our roadmap is limited is that it has only (thus
far) drawn on the expertise of a very small sample of researchers in each of
the research areas\(^1\). The authors of this report were selected in the following
way. A conference on the verified software grand challenge was held in Zürich
Switzerland in October 2005 [68]. At that conference, the organizers — Hoare,
Shankar, and Misra — picked leaders for three committees to write research
roadmaps. Leavens was picked to lead the committee writing this report. Leav-
ens in turn picked the committee members, intentionally aiming for a small
committee, using a selection that was biased toward people who had attended
the conference in Zürich.
Finally, the preceding limitations result in limitations on the applicability of
our roadmap. First it is biased toward research directly related to the verified
software grand challenge. Second, since the committee is small compared to the
number of researchers in the four research areas, this report does not necessarily
represent a consensus of the researchers in any of the four research areas.
1.4 Outline
The next section gives some background about verification problems and challenge problems. Following that, Section 3 describes the common goal of the
four areas with respect to the grand challenge, that is, what they might, over-
all, provide to it. Sections 4-7 describe the more specific needs and potential
research directions in each of the four areas. Section 8 concludes this report.
\(^1\) However, it also reflects feedback from the members of IFIP working group 2.3, the mini-
conference on verified software April 1–2, 2006 held at SRI, and the Dagstuhl workshop on
2 Background
This section gives some background on verification problems and lays out some needs that researchers in the four areas have for challenge problems.
2.1 Verification Problems
An enhanced language or tool is intended to work on some class of verification problems. A precise way to state such a class of verification problems is to describe:
- a specification language, in which to state the assumptions and guarantees of a correct implementation, and
- a programming language, in which to implement the specifications, and whose code is to be verified.
A specification in the specification language together with a program in the programming language constitute a problem for a verification system. A pair of a specification and programming language describe the set of possible such problem instances that such a system should be able to handle.
The specification language and programming language might be integrated; there is no need to have two separate languages. Some examples of integrated languages are Gypsy [7], Alphard [63, 92, 123], Euclid [83, 91], Eiffel [97, 98], Resolve [126], and SPARK [18].
For various reasons the grand challenge project has not articulated, and will probably not articulate, constraints on what verification problems are of interest. But verification problems of interest will be described indirectly, through challenge problems.
2.2 Challenge problems
Challenge problems can help stimulate research, especially in the short term. The following are some suggestions for such challenge problems.
To reward research that can handle problems of significant size, the challenge problems should be big enough to require reusable modules and structuring (at multiple levels).
Challenge problems at a minimum need to have explicitly stated (informal) requirements. It will also be helpful to have formal requirement models.
A formal specification of the properties of interest for each challenge problem is also needed by each of the four areas. Those working in specification languages could use the formal specification as a baseline for case studies that compare their work against the notation used to state the properties of the challenge problem. The other areas need a formal specification as a starting point for certain kinds of research.
As a practical matter, and as an aid to those working in all four areas, challenge problems should also come with test cases.
To aid work on programming languages and some researchers in the correctness by construction approach, it would also be helpful to provide well-tested candidate implementations with each challenge problem. Such implementations would be useful to researchers in programming languages, who could try to devise alternative implementations or languages that would allow easier verification of implementations.
3 Common Goal: Verifiable Artifacts
To set out goals for the four areas, we make some assumptions. The main assumption is that the grand challenge is interested in at least the following:
- specification of safety properties (e.g., the relation between inputs and outputs, lack of deadlock), and
- imperative programming languages (such as Pascal or C), including object-oriented languages (such as Java).
On the one hand, although it is non-trivial and of some economic importance, this is a rather small class of verification problems. For example, most imperative programming languages have only limited support for concurrency (e.g., threads in Java), but different models of concurrency may become increasingly important in the next several years. On the other hand, it is still perhaps too large, because it encompasses the entire spectrum of safety properties, including everything in Figure 1. The reader should keep in mind that the grand challenge project may indeed be interested in other kinds of specifications and programs. In that case this report will most likely be missing some potentially interesting research directions.
Assuming the goal of the project is to build tools that will be able to handle at least verifying safety properties for imperative languages, we see the following short-term and long-term goals that are shared across the four areas.
3.1 Short Term: Extensible Languages and Tools
In the short term (i.e., in the next 5-7 years), a common goal is to allow for extension of tools and languages by other researchers (and ultimately, by users).
For specification and programming languages, this means designing languages so that other researchers (and ultimately users) can add new specification notations and new annotations to aid in verification proofs. These languages should allow specifications to be added (and proved) incrementally.
Such extensions should ideally not just describe syntax, but also have access to information from the language processor (e.g., a compiler). User-extensible annotation mechanisms, such as those found in C# and Java, may be a useful technique for achieving parts of this goal.\(^2\)
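As a minimal sketch (the annotation type and contract below are hypothetical, not part of any existing tool), Java's annotation mechanism can carry verification metadata that an annotation processor or reflective tool could consume:

```java
import java.lang.annotation.*;

// Hypothetical verification annotation: a tool could read this
// metadata at compile time (via an annotation processor) or at
// runtime (via reflection), without changing the Java compiler.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Ensures {
    String value();  // the postcondition, as text for the tool to parse
}

class Account {
    private int balance;

    @Ensures("balance == \\old(balance) + amount")
    void deposit(int amount) {
        balance += amount;
    }
}
```

Note that the contract lives in a plain string, which already hints at the weaknesses of such mechanisms discussed in Section 7: the annotation cannot reuse the language's expression syntax, so tools must parse it themselves.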
In all four areas, tool builders should strive to define architectures that will permit other researchers to easily add new specifications and other proof-oriented annotations, and that will enable other tools to cooperate on verification of the same program. XML may be an aid for achieving parts of this goal. Overall, the idea is to recognize that no one tool will have all the necessary features for attacking all parts of a difficult verification problem. Tool (framework) builders should make it easier to build new tools or extend existing tools. This in turn will help other researchers gain much needed experience with their approaches, but at a lower cost.
Since efforts in building extensible tools can have a multiplicative effect in enabling research, such efforts should be highly encouraged by the project.
3.2 Long Term: Unification
In the long term (8-15 years), researchers should attempt some consolidation of various languages and tools in their areas. This is desirable because the software industry does not want to deal with many different languages, notations, methods, and tools. Furthermore, it is also theoretically unsatisfying to have to explain a wide diversity of approaches. Thus, while research will continue to make progress by exploring a wide range of approaches to attacking verification problems, in the second half of the project some researchers should also build on and consolidate the ideas of several tools and languages.
\(^2\) In addition to the utility of such annotations in verification, the more properties one proves, the more confidence one has in a program. This is an additional motivation for the goal of allowing language extension.
4 Research in Specification Languages
This section was mainly written by: Gary T. Leavens, Kathi Fisler, Cliff Jones, Douglas R. Smith, and Murali Sitaraman.
4.1 Need for Specification Languages
Research in (formal) specification languages is central to the grand challenge, because interesting verification problems contain interesting specifications. Thus the grand challenge project needs at least one specification language for stating the properties that are to be verified in the class of verification problems of interest. Even if the class of verification problems only encompasses very weak or partial specifications, such as those on the left side of Figure 1, there will still be the need for a specification language (although in the extreme case, the specification language might be trivial in the sense that it contains just one sentence: “the program should not crash”).
4.2 Assumed Scope
Since it is not clear what properties are of interest to the grand challenge, this section assumes that the set of properties of interest includes at least safety properties for sequential and concurrent programs. That is, the remainder of this section assumes that the grand challenge is interested in specifying at least:
- assertions about states and data values, which allow one to describe the functionality of procedures in imperative programming languages, and
- properties of the history of events in a program’s execution.
4.3 Background: Kinds of Specification Languages
This section defines terms used in the description of short-term and long-term research directions, particularly about different kinds of specification languages.
Specifications can be stated at many different abstraction levels. At the highest level of abstraction are requirements, which describe the behavior of entire programs from the end-user’s perspective, often including non-functional properties, such as cost or time. Requirements are initially informal, but may be (partially) formalized later. What we hereafter refer to as specifications are statements that may describe or refer to a program’s internal states or events, which may not be directly visible to a program’s user. Such statements are usually formal and describe a class of programs or program modules (components) that have a design with features that can be related to the internal states or events mentioned in the specification. Thus what we call specifications are at a level of abstraction that is more relevant to the detailed design of a program. Such detailed-design specifications are capable of documenting interfaces of individual program modules, such as procedures or classes.
One technique for writing such specifications is algebraic, in which one writes axioms that relate operations to other operations. While the
early papers described non-imperative examples, this technique has also been adapted to specification of imperative code [25, 53, 67]. The CLEAR language [28, 29] provides category-theoretic foundations for the structuring and refinement of algebraic specifications. In CLEAR, specification morphisms are used to structure specifications, and colimits serve to compose specifications. Later examples of this approach include Specware [78] and CASL [24].
Another technique for writing such specifications is the pre- and postcondition style originated by Hoare [65]. In this technique, if a purely mathematical language, such as higher-order logic (as in the PVS theorem prover [116] or Isabelle/HOL [110]) or temporal logic [93], is used for specification of a program, then there must be some abstraction function (or relation) that maps the states or events in the program's execution to the abstract states or event models that the specification's formulas mention [66, 81, 146]. Many behavioral specification languages, such as VDM [76], Z [130], Object-Z [119], and OCL [139], have more structuring mechanisms, many of which resemble structures (such as procedures and classes) in programming languages. Besides helping structure larger specifications, such mechanisms constrain what kinds of abstraction functions are considered in proofs.
Carrying these structuring mechanisms farther, by writing specifications as annotations to programs in some particular programming language, yields an interface specification language [142]. In such a language, a correct implementation must have both the specified interface and specified behavior (or properties), and thus the relation between a program’s state (or events) and the abstract state (or events) described by the specification is much more tightly constrained. Examples of behavioral interface specification languages include the Larch family [57, 142], the Resolve family [44, 115], SPARK [18], Eiffel [97, 98], JML [27, 85], and Spec# [19, 20, 86]. Examples of history-based interface specification languages include Bandera [58] and Java Pathfinder [58]. Interface specification languages, with their close relationship to a programming language, seem likely to be important for the grand challenge, especially in the short term.
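To make the role of the abstraction function concrete, here is a small JML-flavored sketch (the class and its model field are illustrative only): the represents clause is the abstraction function mapping concrete fields to the abstract state that the specification mentions.

```java
// JML-flavored sketch of an abstraction function. The model field
// `count` is the abstract state; the represents clause maps the
// concrete fields onto it, so the public contract of tick() can be
// stated purely in terms of the abstraction.
class Counter {
    private int ticks;   // concrete state
    private int missed;  // concrete state

    //@ public model int count;                     // abstract state
    //@ private represents count = ticks - missed;  // abstraction function

    //@ ensures count == \old(count) + 1;
    void tick() {
        ticks++;
    }
}
```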
4.4 Short-Term Research Goals
The following are some short-term (5-7 years) research goals for specification language research.
4.4.1 Open Languages and Tools
Specification languages should be designed to be extensible and open, so that researchers can more easily experiment with variations and extensions. Tools for specification languages, such as type checkers or verification condition generators, should also be designed with an architecture that makes for easy variation and extension. Tools should also allow different analysis and verification systems easy access to and manipulation of specifications, as these will aid the verification efforts of the grand challenge.
4.4.2 Reasoning about Partial Specifications
Tools for specification languages should make it easy to state and prove logical consequences of specifications. These can be used both for debugging specifications and for proving connections with formalizations of requirements, etc. It should not be necessary to have a complete specification in order to do such reasoning; in other words, it should be possible to reason about partial specifications in which many parts are underspecified, to permit early debugging of the specification.
4.4.3 Refinement
Tools for specification languages should make it easy to state refinements between specifications [16, 60, 99, 100, 42, 78]. There should be automated support for both debugging and proving such refinements, using techniques such as model checking for finding problems with proposed refinements. Section 6 discusses both the posit-and-prove and transformational approaches to proving refinements, and how these techniques can aid verification.
4.4.4 Modularity and Reuse
Specification languages should permit modular descriptions of reusable interfaces. While verified software does not have to be reusable, reusable modules can make it easier to develop larger and more interesting verified software.
4.4.5 Specification of Resources
If non-functional properties, such as time and space consumption, are of interest to the grand challenge, then specification and reasoning techniques for such nonfunctional properties [61, 80, 125] should be further developed and integrated with other kinds of specification.
4.4.6 Interface Specifications
The design of interface specification languages poses some special problems.
**Specification and Translation of Assertions** Experience with Eiffel [97, 98] and Larch seems to suggest that programmers may find specification languages like Eiffel, in which assertions are written in the syntax of the programming language, easier to use than Larch-style languages. (See also Finney's study of mathematical notations [50].) However, other efforts in teaching mathematical specifications to undergraduate students appear to be quite successful, suggesting that the exact notations and language might play a significant role in ease of understanding and use [127]. Thus one research problem is to understand the ease of use of different specification notations (both in practice and for use in verification).
Another research problem is to study how to translate assertions in different languages into logical formulas that are useful in reasoning (e.g., in a theorem prover) [6, 85].
**Heap Structuring** Better techniques for heap structuring, using concepts such as ownership, seem to hold promise for aiding verification of pointer-based and object-oriented programs. At the very least, some way to prevent representation exposure [88, 112] seems necessary to do modular reasoning about frame axioms and invariants [77, 103, 104]. Heap structuring also seems helpful for making sense of object invariants in systems built from abstraction layers [19, 86, 105].
It may be that other simplifications in reasoning can be obtained by introducing specifications that further restrict heap structures, for example, to cycle-free pointers, where such restrictions are appropriate (e.g., in the implementation of lists and trees). What are the right techniques for specifying such restrictions, and what kinds of reasoning benefits are obtainable?
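A small hypothetical Java example of representation exposure: the first getter leaks the internal array, so a client can break the class invariant without calling any of its methods, defeating modular reasoning about that invariant.

```java
// Hypothetical illustration of representation exposure. The class
// invariant is that `items` is sorted, but getItems() hands a client
// an alias to the representation, so the invariant can be broken
// from outside the class.
class SortedList {
    private int[] items = new int[0];  // invariant: sorted ascending

    public int[] getItems() {
        return items;           // leaks the representation
    }

    public int[] getItemsSafely() {
        return items.clone();   // ownership-friendly: hand out a copy
    }
}

class Client {
    void breakInvariant(SortedList list) {
        int[] leaked = list.getItems();
        if (leaked.length > 1) {
            leaked[0] = Integer.MAX_VALUE;  // invariant silently violated
        }
    }
}
```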
**Assistance in Writing Specifications** To verify large programs that use many modules and libraries, it is often necessary to specify large libraries or bodies of code. Many such specification tasks are quite labor-intensive and somewhat unrewarding intellectually. Some automation would help. Tools like Daikon [47, 109] and Houdini [52] have demonstrated that it is possible to recover some formal specifications from code using various heuristics. It might be interesting to infer specifications from examples or directly from test cases. A research goal would be to have such tools work with user-specified abstractions, so that they could be used to more quickly write more abstract specifications. Or perhaps some automatic abstraction heuristics could be used. An environment for writing specifications could allow users to edit out some cases in a specification, to achieve more abstraction by underspecification.
**New Language Features** If more advanced programming languages are of interest to the grand challenge project, then how to specify properties of programs that use advanced features, like advice in aspect-oriented languages, will be important.
4.5 Long-Term Research Goals
The following are some longer term (8-15 years) goals for specification languages.
4.5.1 Integration of Data and Control
An important challenge for specification language design is to integrate the two disparate worlds of state-based and history-based (or event-based) specification languages. Typically, specification languages either focus on sequential programs and describe properties of data values, or they focus on concurrent programs and describe properties of event histories. However, complete verification of concurrent programs demands reasoning about both data and control. Some potential approaches are to use atomicity [90, 118] or to use transitions over relations.
4.5.2 Traceability
Links between requirements and detailed design specifications should be able to be explicitly stated and reasoned about. One approach may be to develop techniques for stating and proving refinement relationships between (particular pairs of) requirement and specification languages. Another approach might be to design languages that are good both for formalizing requirements and for specification of the detailed design.
4.5.3 Tool Frameworks that Support Integration
Frameworks that would make it easy to build tools for specification languages and to integrate different tools for reasoning about specifications should be a long-term goal. Integration among reasoning tools, such as model checkers and theorem provers, would also be helpful.
4.5.4 Interface Specification Language Design
A theory of how to design interface specification languages should be developed that allows a new specification language to be quickly designed for a new programming language, at least within a fixed set of programming paradigms. Ultimately such a theory should extend beyond the imperative and object-oriented paradigms to other paradigms of interest to the grand challenge. Along the same lines, it may also be useful to understand how to tailor the design of such a language to a specific architectural style. This would potentially help with verification of programs written in such styles.
5 Research in Program Generation
This section was mainly written by: Gary T. Leavens, Don Batory, Alessandro Coglio, and Douglas R. Smith.
5.1 Background on Program Generation
A program generator [39] is a tool that produces code from some higher-level description of the code. Conventional compilers for languages such as C and Java fit this characterization, because they generate lower-level assembly or bytecode from higher-level programming languages. However, the term “program generator” is typically used for tools that produce code in relatively high-level languages such as C and Java, and where the higher-level description of the code is a specification. Nonetheless, we do not rule out the view of compilers as generators; in fact, the research directions advocated here apply to compilers as well.
A program generator operates on the syntax of the source (specification) and target (code) languages. Roughly speaking, the generator reads the specification and writes the code, i.e. it transforms the specification into the code. Program generators are often written in conventional languages such as C or Java; they manipulate data structures that encode abstract syntax trees of the source and target languages. The pattern matching featured by languages like ML and Haskell provides a convenient way to implement syntactic transformations. Languages like Refine [72] and Stratego [133] provide even more convenient features to implement syntactic transformations in a more declarative way, by means of rewriting rules, strategies, and quotation/anti-quotation pattern matching.
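As a minimal sketch of such a syntactic transformation (using recent Java's sealed interfaces and pattern matching in place of ML-style pattern matching; the AST and rewrite rule are invented for illustration):

```java
// A tiny expression AST and one rewrite rule (e + 0 ==> e), applied
// bottom-up. Real program generators apply many such rules, with
// strategies controlling where and in what order they fire.
sealed interface Expr permits Num, Var, Add {}
record Num(int value) implements Expr {}
record Var(String name) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}

class Simplifier {
    static Expr simplify(Expr e) {
        return switch (e) {
            // rewrite rule: l + 0 becomes simplify(l)
            case Add(Expr l, Num(int v)) when v == 0 -> simplify(l);
            case Add(Expr l, Expr r) -> new Add(simplify(l), simplify(r));
            case Num n -> n;
            case Var v -> v;
        };
    }
}
```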
5.2 Relation to Model-Driven Development
The premise of Model-Driven Development (MDD) [21, 26, 135] is that a program has multiple representations, expressed as models. Transformations update models, map models to other models, and compose models.\(^3\)
Since code is the most important kind of model in MDD, MDD falls within the scope of the program generation area.
5.3 Motivation for Program Generation
Program generation is useful for at least two reasons [39]. One is productivity: instead of writing the code directly, the developer writes and maintains the specification, which is supposedly shorter and easier to read and write than the code. The other reason, which is more relevant to our context, is that the code can be generated in such a way as to be automatically verified; that is, it will be correct with respect to the specification. The research directions advocated here aim at automatic verification.
\(^3\) Thus, roughly speaking, a model is an object and a transformation is a method.
Program generation also fits well with the use of software product lines. A software product line describes a family of programs [22, 39]. Using a product line gives a significant reduction in artificial complexity, more regularity and structure in a program's modules, and modules that are more likely to encapsulate increments in program functionality. All three are key requirements for module reusability, large-scale synthesis, and verification. Showing how to verify software product lines would illustrate the connection between scale, design, and verification.
5.4 Problem: Verified Program Generation
The problem is that even when using the most declarative syntax transformation languages available, the semantics of the source specification and of target code are not directly “represented” in the program generator. Thus, it is very possible to generate code that is incorrect with respect to the specification, by doing “wrong” syntactic transformations. Achieving correctness is thus the overriding research problem for program generation with respect to the grand challenge.
5.5 Problem: Scalability
There has been significant progress in algorithm synthesis and automatic design optimization [128], especially in restricted domains; examples include Planware [23], Amphion [132], and AutoBayes [51]. While continued progress in the generation of moderate size programs can be expected, a scalable approach to program generation must also focus on how to generate verified compositions of reusable modules. A vast majority of practitioners and researchers who are automating parts of program development are building tools that are compositional in nature. COM, JavaServer Pages, and Enterprise JavaBeans are examples. These tools stitch code modules together to synthesize larger modules. Most code modules are written by hand, but some (e.g., parsers or boiler-plate interfaces) are generated by simple tools. In effect, the specification languages for these code synthesizers are akin to module interconnection languages.
A module is more than just code; it encapsulates several different kinds of information: specifications, code, formal models from which properties can be inferred, documentation, performance models, etc. Specifications and performance models are especially important for verification. It is thus important to synthesize such information for generated compositions of modules [22].
A well-known example of the above is the work on query optimization in relational databases [122]. An optimizer maps a declarative specification (e.g., a SQL SELECT statement) to an efficient implementation. A SELECT statement is first mapped to a relational algebra expression, the expression is optimized, and then code is generated from the optimized expression. Each relational algebra operation is a module, and a relational algebra expression is a composition of modules that represents a query evaluation program. Each module (operation) encapsulates two different representations: a performance model (which evaluates the efficiency of the operation) and code (to implement the operation). The query optimizer uses only the performance model of an operation to deduce the most efficient composition. The program synthesizer uses only the code representation to generate the implementation. A similar organization (i.e., modules containing multiple formal models) will be needed for program verification.
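A minimal sketch of this organization (all names hypothetical): each module bundles a performance model, which the optimizer consults to choose a composition, with a code representation, which the synthesizer uses to generate the implementation.

```java
// Hypothetical sketch: a module carries the two representations the
// text describes. An optimizer compares estimatedCost() over candidate
// compositions; the synthesizer then calls generateCode() on the winner.
interface RelationalOp {
    double estimatedCost();  // performance model
    String generateCode();   // code representation
}

record TableScan(String table, double rows) implements RelationalOp {
    public double estimatedCost() { return rows; }
    public String generateCode()  { return "scan(\"" + table + "\")"; }
}

record Filter(RelationalOp input, String predicate) implements RelationalOp {
    public double estimatedCost() { return input.estimatedCost() + 1.0; }
    public String generateCode()  {
        return "filter(" + input.generateCode() + ", \"" + predicate + "\")";
    }
}
```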
5.6 Short-Term Research Goals
The following are some short-term (5-7 year) research goals in the area of program generation.
5.6.1 Formalizing Language Semantics
The first step in establishing the correctness of generated code is to formalize the semantics of the source and target languages, along with a notion of what it means for an artifact in the target language (the code) to be correct with respect to an artifact in the source language (the specification). For example, the correctness notion could be that the two artifacts have the same observable behavior (where the notion of observable behavior must also be formalized). These formalizations should be developed in a suitably expressive logical language with a formal proof theory, such as the languages found in modern theorem provers. Examples include Project Bali [111] and the LOOP Project [74, 136], both of which formalize Java.
5.6.2 Tool Development
Current (meta-)languages and tools [72, 133] do not deal with the semantics and proof aspects of transformations, but only with their syntax. Thus, an important research direction is to design languages and tools by which one can more directly represent semantics and generate proofs and code in an integrated fashion.
5.6.3 Certified Code Generation
Instead of directly verifying the generator, a promising approach is to have the generator produce, along with the code, a machine-checkable proof of the correctness of the output code with respect to the input specification [35, 36, 106]. The proof should use the inference rules of the logical language in which the semantics of the source and target languages, as well as the notion of correctness, are formalized.
Then, as in the well-known proof-carrying code technique [107], the proof is checked by a simple proof checker, so that trust is shifted from a large and complex generator to a small and simple checker.
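A minimal sketch of this division of labor (all types are hypothetical): the large, untrusted generator returns code paired with a proof object, and only the small checker needs to be trusted.

```java
// Hypothetical sketch of certified code generation: the generator
// emits code plus a machine-checkable proof; a small, trusted checker
// validates the proof before the code is accepted.
record Proof(byte[] derivation) {}
record GeneratedProgram(String code, Proof proof) {}

interface Generator {
    GeneratedProgram generate(String specification);
}

interface ProofChecker {
    // Should return true only if `proof` establishes that `code` is
    // correct with respect to `specification` in the fixed semantics.
    boolean check(String specification, String code, Proof proof);
}

class Pipeline {
    static String certifiedGenerate(Generator g, ProofChecker c, String spec) {
        GeneratedProgram p = g.generate(spec);
        if (!c.check(spec, p.code(), p.proof())) {
            throw new IllegalStateException("certification failed");
        }
        return p.code();  // accepted only because the proof checked
    }
}
```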
5.6.4 Transformation Patterns
Proof-generating transformation patterns, which will emerge from applying program generation in practice, should be cataloged, e.g., as taxonomies of algorithm theories and datatype refinements [129]. These catalogs will help others apply the ideas and build tools more quickly.
5.6.5 Better Algorithms to Aid in Program Generation
To apply general design principles and transformations to a concrete specification requires some analysis (to verify applicability conditions) and constructive inference (to extract expressions to fill in design templates).
More practical program generation requires low-order polynomial time algorithms for analysis and constraint solving. A promising approach is to compose constraint-solvers and decision-procedures for various specialized theories. Static analysis can also sometimes provide a fast alternative to search-based theorem provers.
5.7 Long-Term Research Goals in Program Generation
The following are some long-term (8-15 year) goals for research in program generation.
5.7.1 Scalability
To allow scalability of program generation, techniques for generating compositional, well-structured designs are needed in each application domain. A complementary need is for techniques for composing properties, specifications, and other non-code information in modules. It must be clear how such compositions preserve (or affect) properties of interest.
5.7.2 Taxonomy of Proof-Generating Transformations
A collection of proof-generating patterns (or templates) should be made into a library, categorized by various dimensions, such as application domain, source and target language, etc. This knowledge would make it easier to develop future program generators.
5.7.3 Better Tools and Frameworks
Researchers could design better languages, tools, and frameworks, to ease the task of building future program generators. Such tools could both more directly support proof generation and could also ease the proof of correctness for the program generator itself.
Such tools and languages could also more directly support proof-generating patterns.
5.7.4 Factoring the Certification Process
Sound techniques should be established for incorporating formal proofs into the certification process for program generators, in order to eliminate some kinds of testing and reduce the need for others. (Current practice is to perform extensive and expensive testing, both to validate the generated code's functionality and performance, and to test for vulnerabilities and flaws along various code paths.) Given a complete specification from which the code is generated, together with a proof of consistency between code and specification, there should be little need to perform path testing to reveal flaws. There will still be a need to test that the specification meets intentions, but that can be a more specialized activity. Requirements that are not treated during generation or refinement (e.g., performance concerns) would also still need to be tested.
5.7.5 Allow Update of Running Systems
For embedded systems, it is often necessary to update (fix) the code while the system is running. Supporting such updates in a system where code is generated may be a matter of generating the code to allow for eventual update.
5.7.6 More Manual Control
To allow users to operate outside a limited domain to some extent, program generators could be designed to allow more manual input, making them a blend of a program generator and a correctness by construction system, as described in the next section.
6 Research in Correctness by Construction
This section was mainly written by: Michael Butler, Gary T. Leavens, Eric Hehner, Murali Sitaraman, Jean-Raymond Abrial, and Cliff Jones.
6.1 Motivation
Much discussion on the need for a powerful program verifier seems to contain the following underlying assumptions:
- That a program verifier will be used mostly to verify completed programs.
- That when verification fails it is because the program contains errors.
While a powerful program verifier is a very valuable tool for programmers, it does not help them construct a correct program in the first place, nor does it help document and explain decisions (e.g., those motivated by efficiency considerations) made in existing code.
Equally important, the correctness of any verification is dependent on the validity of the formal properties against which a program is checked. Since we cannot, in general, guarantee that such properties are what users really want, we will, in the remainder of this section, use the phrase "verification by construction" instead of the more common phrase "correctness by construction," to emphasize the potential problems with the initial specification.
The verification by construction approach helps developers who want to construct verified software systems by addressing the following questions:
Q1 How do we construct models and properties against which to verify our software?
Q2 How do we ensure that our models and properties properly reflect the requirements on the system?
Q3 How do we take account of the environment in which our software is intended to operate?
Q4 How do we construct our software so that the verification will succeed?
In the following, we will largely ignore question Q2, since it is too large and important to be included in our grand challenge; it would constitute a grand challenge on its own.
As can be seen from the other questions, the verification by construction approach broadens the focus away from just verifying a finished product to analysis of models at all stages of the development process. It encourages verification of designs and not just verification of programs. Verification of designs may lead to a greater payoff than just verifying programs. Introducing formal modeling early in the development cycle helps to identify problems earlier, long before any code is developed, thus helping to avoid expensive later rework.
As well as supporting verification of designs and implementations, the formal modeling languages used in verification by construction encourage a rational design process. We contend that the use of good abstractions and simple mathematical structures in modeling, and reuse of modules with specifications can lead to cleaner, more rational system architectures that are easier to verify (and maintain) than architectures developed using less disciplined approaches.
6.2 How is Verification by Construction Achieved?
Verification by construction can be achieved by having a formal framework in which models are constructed at multiple levels of abstraction; each level of abstraction is refined by the one below, and this refinement relationship is documented by an abstraction relation (typically in the form of a gluing invariant) [1, 3, 16, 22, 30, 76, 82, 99, 100, 101]. The highest levels of abstraction are used to express the required behavior in terms of the problem domain. The closer a specification is to the problem domain, the easier it is to validate against the informal requirements, i.e., to ensure that it is the right specification. The lowest level of abstraction corresponds either to an implementation, to a specification from which an efficient implementation can be derived automatically, or to a specification realized in hardware.
Also critical in this framework are mechanisms for composing and decomposing models. Composition can be useful for building up specifications by combining models incorporating different requirements. Decomposition is important for relating system models to architectures of subsystem models and also for subsequent separate refinement of subsystems [2, 5, 14, 15, 30, 41].
Ensuring that a model \(M_2\) refines or implements \(M_1\) requires bridging the abstraction gap between them. Typically there is a large abstraction gap between a good formal specification, i.e., one that is easy to validate against the requirements, and an efficient implementation.
Verification by construction does not require that such abstraction gaps be bridged by a series of (small) transformations, done at the time that \(M_2\) is derived from \(M_1\), each step of which guarantees refinement. While this kind of transformational approach is valuable [60, 99, 100, 101], verification by construction also includes a posit-and-prove approach, in which the developer provides both \(M_1\) and \(M_2\) and uses tools to verify that \(M_1\) is refined by \(M_2\) [1, 3, 76, 82]. The difference is not great, especially since in the transformational approach the transformation applied might generate side conditions that need to be verified. Conversely, if the abstraction gap between \(M_1\) and \(M_2\) is small enough, or if the properties involved are limited, a tool can generate proof obligations that can be verified, perhaps automatically, using model checkers or powerful theorem provers. Tools are important for the transformational approach, but they are also useful in the posit-and-prove approach, for example, to help one discover ancillary properties, such as invariants.
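As a small illustrative example (not drawn from any particular development), let the abstract model \(M_1\) have as its state a finite set \(s\) of integers, and let the concrete model \(M_2\) store a duplicate-free array \(a[0..n-1]\). A gluing invariant \(J\) relating the two state spaces is
\[ J \;\equiv\; s = \{\, a[i] \mid 0 \le i < n \,\}. \]
In the posit-and-prove style, showing that \(M_2\) refines \(M_1\) then generates obligations such as: assuming \(J\) holds, if the concrete add operation appends \(x\) to \(a\) when it is absent, then \(J\) holds again with the new abstract state \(s \cup \{x\}\).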
Through refinement it is often possible to model and reason about how a strategy solves a problem in an abstract way using abstract specifications that encapsulate algorithms and data structures. At higher levels of abstraction one can focus reasoning on design choices closely related to the problem domain and less on coding details. These abstract specifications can then be optimized through refinements that select implementation modules, or that introduce more concrete algorithms and data structures. Reasoning about these optimizing refinements no longer requires reasoning about the original problem as this will have been dealt with by the earlier refinement.
In this way, by keeping the models as abstract as possible at each level, or by reusing modules, one will often have simpler proof obligations to discharge. This contrasts with the situation that obtains when one verifies a program (without annotations) and without intermediate refinement steps. In doing such a proof, one must reason about a number of issues simultaneously: the problem to be solved, the data structures, and the algorithms used in the solution. Using a series of refinement steps helps factor out and modularize such decisions, allowing them to be dealt with separately. This often simplifies proof obligations and helps make reasoning more manageable.
When using refinement, one does not necessarily distinguish between properties and models. Essentially we are working with models in a modeling language, and the important property to be proved of some model \(M_2\) is that it is a refinement of some other model \(M_1\). So the answer to the question "what properties should we prove of a model?" is "those properties that help show that it is a refinement of its abstraction." For the most abstract models, the important property is that they satisfy the requirements of the problem domain. This is an informal check which can sometimes be aided by checking required ancillary properties. With a refinement approach, the "creative" input in a development is a collection of explicit models at different levels of abstraction. The invention of ancillary properties is dictated by the need to prove refinement between these explicit models. Creating models at different levels of abstraction, or reusing previously available modules with specifications, fits well with an engineering approach.
6.3 The Goal of Verification by Construction
Existing theories, languages, proof techniques and tools for verification by construction need to be evolved to address more fully questions Q1, Q3, and Q4 above. This will lead to powerful tools that will:
- Support the construction of models (specifications, designs, programs) at multiple levels of abstraction,
- Support the verification of refinement between models,
- Support the verification of modules built from other modules, and
- Support verified construction of complex systems consisting of software and environments in which software operates.
The feasibility of these results will be demonstrated through their application to the development of complex software systems. The long term directions described later are intended to lead toward these goals. We also suggest some short-term directions which can build immediately on existing work in the area and will contribute to elaboration of the longer term problems and their solutions.
6.4 Short-Term Research Directions
The following are some short-term (5-7 year) research goals.
6.4.1 Range of Case Studies
Develop and open for scrutiny several case studies of verification by construction, using existing techniques and tools. These case studies should be selected from the class of verification problems considered for the grand challenge project, and might include some of the overall project’s challenge problems. Some case studies should focus on verification of modules. In all cases, the studies will help identify particular areas for improvement in the approaches.
Researchers should consider developments in which not every part of a design is mapped down to fresh code; rather, some parts are implemented by legacy systems. The specifications of the legacy parts need not appear at the highest level; rather, they could be introduced in later refinement steps. The correctness of the overall system implementation with respect to the abstract specification would be conditional on the assumption that any legacy parts satisfy their specifications, an assumption whose discharge may be tackled by other parts of the grand challenge.
Existing research projects and efforts have made requirements documents and formal specifications available, and these could be used as starting points and built on further [94, 117, 131].
6.4.2 Links between Tools
Build links between existing tools to support verification by construction. In particular, build links between proof obligation generators for refinement checking (as found in B and Z for example) and
- the latest powerful theorem provers, model checkers and SAT solvers, and
- automated invariant generation tools (such as Daikon [47]).
Existing work that could be used as a basis for tool integration work includes the Eclipse-based Rodin platform for refinement [117] and the Community Z tools initiative [40].
These experiments will guide the long term direction of a unified tools framework for verification by construction.
6.4.3 Programming Language Mappings
Models at low levels of abstraction need to be converted to executable software. An effective way of doing this is through tool-supported mappings to existing programming languages such as Ada, Eiffel, Java, and C#. In the medium term these mappings should be pragmatic, with their soundness supported by informal arguments. To increase confidence in the resulting code, the mappings should also generate appropriate formal annotations (e.g., SPARK, Eiffel, JML, or Spec# assertions) from the models and ancillary properties. This allows the generated code and annotations to be analyzed using existing program analysis tools. For some applications or domains it may be appropriate to consider mapping low-level models directly to bytecode, bypassing the compiler. Since the code generation problem is essentially the problem of program generation, the research directions pointed out in Section 5 also apply to this problem.
Examples of automated mapping of models to code are found in AtelierB [34], which supports generation of C and Ada code from low level B models, and the B-Toolkit [13], which supports generation of C code from low level B models.
6.5 Long-Term Research Directions
The following are some long-term (8-15 year) research directions in the verification by construction approach.
6.5.1 Evolution + Refinement
Refinement is never purely top-down from the most to the least abstract model, because it is difficult to get the abstract model precisely right. One usually starts with an idealistic abstract model because that is easy to define. As refinement proceeds and more architectural and environmental details are addressed, it often becomes clearer how the ideal abstract model needs to be modified to better reflect reality. Modifications to some level of abstraction will ripple up and down the refinement chain. This is not a weakness of the refinement approach per se, but rather a reflection of the reality of engineering complex systems. The
theories, languages, proof techniques and tools need to support evolution of designs during and after development with minimal effort.
6.5.2 Complex System Design
Control systems, interactive systems, and distributed systems involve multiple agents (users, environments, new programs, legacy code) all of which contribute to the correctness of a system. Individually the agents may be very complex, so reasoning about compositions of agents in all their gory detail may be infeasible. Instead, there is evidence that it will be feasible to reason about complex systems through good use of abstraction, refinement and module composition [31, 32, 59].
The extent to which one must consider the operating environment when developing software depends on where one draws the boundaries of the system. To reason about the validity of any fault tolerance mechanisms, it is useful to include some abstraction of the environment in the formal models in order to verify the effectiveness of these mechanisms. For example, when reasoning about the effectiveness of a security protocol, it is usual to include some abstraction of an attacker. The goal is not to implement the attacker, rather it is to show that the protocol achieves its security goal even in the presence of an attacker, under some assumptions about attacker behavior. These assumptions about attacker behavior can be encoded in the formal abstraction of the attacker.
6.5.3 Richer Refinement Theories
Within a particular framework there may be differing strengths of refinement. A weaker notion might capture the preservation of safety behavior, while stronger notions might capture preservation of liveness and/or fairness.
Another important dimension is resource usage. A theory of refinement should ideally allow one to prove tight bounds on resources, while still permitting abstract reasoning. Specifications of resource usage should also not require reverification when the computing platform is changed.
The refinement relation should enjoy some form of transitivity. Refinement is based on comparing models according to some notion of what can be observed about them, and it is useful to be able to modify what can be observed at different levels of abstraction. In particular, the interface to a system is usually described abstractly and may need to be made much more concrete at decomposition or implementation levels. In such cases, the observable behavior is not directly comparable, but needs to be compared via some mapping and transitivity of refinement is via composition of mappings.
6.5.4 Refinement Patterns
A halfway house between the transformational and posit-and-prove approaches can be envisaged, where certain patterns of model and refinement are captured and used in the construction of refinements. This is a more pragmatic idea than transformational refinement, in that the pattern might not guarantee the correctness of the refinement. Instead, \(M_2\) would be constructed from \(M_1\) by application of a pattern, and the correctness of the refinement would be proved in the usual posit-and-prove way. Ideally the pattern should provide many of the ancillary properties (e.g., invariants, tactics) required to complete the proof, or at least an indication of what kinds of properties might be needed.
The aim of using such patterns is to minimize verification effort when applying refinement. A research goal is to identify such patterns through a range of case studies and to support the application of the patterns with tools.
6.5.5 Integrated Tools Framework
To a large extent the theory needed to support verification by construction already exists. The challenge is to provide a powerful set of tools to support abstraction, refinement, decomposition and proof. Tools should strive to achieve as much integration as possible and avoid isolation. Such tools should also exploit as much of the existing work in theorem proving and model checking as possible and should be designed in anticipation of future advances in these areas. The same can be said for using state-of-the-art methods in programming language design, program verification, and automated program generation. As they evolve, the support tools should be applied to the development of interesting software-based systems.
7 Research in Programming Languages
This section was mainly written by Gary T. Leavens, Simon Peyton-Jones, Dale Miller, and Aaron Stump.
7.1 Assumptions and Scope
In this section we assume that imperative languages are of interest. This is not meant to exclude research on other paradigms. For example, functional languages and domain-specific languages each have their own advantages for verification.
Also, this roadmap assumes that verifying a compiler (or other programming language tools) is not a goal of the grand challenge. This is not to say that researchers in programming languages are not concerned about correctness of the tools they produce. On the contrary, it is standard, for example, for all type systems in programming language research papers to come with a formal proof of correctness. (The recent POPLmark challenge calls for such proofs to be written in machine-checkable form [12].) However, it seems likely that such verification problems will be outside the emphasized areas of the grand challenge.
7.2 Programming Language Approaches to Verification
Aside from using refinement to derive programs that are "correct by construction," program generation (including certifying compilers [102]), and direct use of semantics,\(^4\) we know of the following main approaches that directly aid the verification of software.
7.2.1 Type Systems
Types are weak specifications [71] that are automatically checked by compilers.
Type systems are a long-standing topic of interest in programming language research. Early work in type theory [37, 113] showed how dependent types allow a type system to express complete functional specifications as well as constructive proofs of program correctness, at many levels of detail. Examples of dependently typed programming languages where this idea is explored include ATS, RSP1, Omega, Epigram, Cayenne, and Martin-Löf type theory [11, 33, 95, 114, 124, 141]. Work by Voda has similar goals [138].
\(^4\) Besides use of Hoare logic, or "axiomatic semantics" [65], one can also specify and verify software using denotational [121] or operational semantics [10]. However, these styles are not typically well-suited for specification purposes, at least for imperative programs.
7.2.2 Program Analysis
Program analysis gathers information that safely approximates what programs will do at runtime. Static type systems are a special case of static analysis, but program analysis is not restricted to obtaining information about types. Like type checking, program analysis can be seen as a way of doing weak verification; for example shape analysis can be seen as a way of “computing a safe approximation to a statement’s strongest postcondition” [120, p. 284].
Many interesting formal methods tools have checked various properties using static analyses of various sorts. Examples include partial correctness (checked by, e.g., TVLA [87]), conformance to API protocols (checked by SLAM [17]), memory safety (checked by Prefix and Prefast [84] and LCLint [18]), and absence of race conditions (checked by Autolocker [96]). (There are also several systems that look for error patterns, including Metal [46] and Findbugs [70].)
7.2.3 Assertions
Assertions are logical properties of a system, usually expressed in some extension of predicate logic or temporal logic. Assertions can specify post-conditions for methods, invariant properties for objects, and protocols that API calls should obey.
There has also been a historical strand of work that directly adds Hoare-style specification and verification to programming languages. Gypsy [7] and Alphard [63, 92, 123] are early examples. The Euclid language [83, 91] was notable along these lines: Euclid omitted or restricted several features of Pascal, as an aid to formal verification. For example, Euclid introduced the notion of heap regions as a way to get some control on aliasing, and also prohibited overlap among the parameters to procedure calls. The SPARK subset of Ada [18] continues this tradition. Perhaps the most successful such language is Eiffel [97, 98], which takes a very pragmatic approach to specification and focuses on run-time assertion checking. The ESC system [43] is an interesting hybrid, since it uses assertions, but in some ways is more like a static analysis system.
7.3 Problems with Current Approaches
We see several overall problems with the above approaches to directly aiding verification.
7.3.1 Effort Needed for Verification
Programmers are less likely to use a technique if it does not allow them to suppress proofs or details.
For example, when using a dependent type system, the need to provide proofs of correctness along with executable code limits the system's appeal, since this demands substantially more work than currently popular programming languages require, and the proofs are not optional. A potential way out of this difficulty is shown by Dependent ML, which, while also based on dependent types, has the goal of checking properties without programmer-supplied proofs [145]. Thus one research direction would be to explore how to gain the advantages of dependent type systems without the need to explicitly supply proofs.
Similarly, when using assertions, one often has to specify many properties in addition to the property of interest. The Bandera system [38] and SLAM [17] both use slicing [134] before model checking to avoid state space explosion.
An interesting research direction would be to use slicing more extensively in other kinds of verification.
7.3.2 Lack of Extensibility
Current programming languages often fix a particular notation and verification technique, and do not allow users to modify or add to it. For example, it is hard to find a single level of specification beyond types that all programmers would agree is worthwhile. Indeed one might criticize most languages where types play a central role for taking an important concept and freezing it. That is, if types are so important, why do languages (like Java, Standard ML, and Haskell) allow for just one type system? It would seem more valuable to first see a programming language as describing an untyped computation and then allow for various ways to infer the various kinds of typings as well as other static properties. Also, types are open-ended: there is no one best type system, and researchers will always be making new proposals for better systems. Similar remarks apply to assertion languages and static analysis frameworks.
Thus a research direction would be to find a more open architecture for programming language definition (and implementation) that allows the use of multiple type systems, multiple static analyses and multiple different kinds of assertions. Ideally, it would be best to allow these different kinds of annotations to interact with each other. For example, it would be great if specifications written using assertions could refer to properties (such as what variables are assigned) that are covered by a static analysis.
7.4 Short-Term Research Directions
In this section we describe some ideas for research directions in the short term (5-7 years), with two goals: directly supporting specification and verification, and reducing its drudgery by eliminating common problems.
7.4.1 Supporting Specification and Verification Annotations
Basic language features for supporting specification and verification have been discussed above, in the section on specification languages. These should be investigated for their interactions with programming languages and systems. For example, to what extent can optimizing compilers and other kinds of static analysis make use of such information?
There is one important aspect of programming language designs that could greatly ease specification and verification, which is to design languages so that expressions (or at least some identifiable subset of expressions) have no side effects. Side effects in expressions make it difficult to follow Eiffel’s lead in using programming language expressions in assertions [97, 98]. While some languages in the Pascal family (including Euclid [83] and Ada [73]) already do this, based on Pascal’s separation of functions and procedures [75, 144], it deserves to be more widely followed.
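A small hypothetical example of the problem: because the expression below has a side effect, it cannot be reused in an assertion without changing the program's behavior.

```java
// Hypothetical illustration: next() has a side effect, so using it in
// an assertion would consume a ticket whenever assertions are enabled,
// changing program behavior. A pure query is safe in assertions.
class TicketMachine {
    private int counter = 0;

    int next() { return ++counter; }  // side-effecting expression

    void issue() {
        int t = next();
        // assert t == next();  // WRONG: checking this issues a ticket
        assert t == counter;    // pure condition: safe to check
    }
}
```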
Tools for programming languages could also be designed to better support specification and verification annotations. Ideally, annotations should be provided in an open manner, which would allow users and tool providers to add to the set of annotations. Meta-information facilities, such as the annotations of Java and C#, are useful for this purpose, but are weak in that they do not allow full use of the language's expression syntax and are not hierarchical, and thus do not support rich syntax for specification. Furthermore, to support typing and verification, annotations must be permitted at all levels of syntax; for example, adding annotations to statements is necessary to specify the effect of a loop.
Another way that programming languages could aid working with annotations is if they would allow annotations to substitute for code. That is, a tool should be able to manipulate a program in which some parts are not implemented in the language, but are merely specified with some annotations. Achieving this kind of “specification closure” would help researchers working on compilers and interface specification.
7.4.2 Eliminating Drudgery in Specification and Verification
Programming language design can reduce the cost of specification and verification by keeping the language simple, by automating more of the work (e.g., by propagating type information), and by eliminating common errors. (Eliminating common errors would also help make programs more reliable, even if programmers do not use verification techniques.) Historical examples include elimination of dangling references by the use of garbage collection, encapsulation of iteration idioms (such as map or for loops), type systems that avoid null pointer dereferences (as in Lisp or CLU [89] and the work of Fähndrich and Leino [49]), and SPARK’s elimination of conditional data flow errors (such as reading from uninitialized variables) [18].
It seems like a fruitful research direction to try to eliminate other common errors, such as array indexing errors, perhaps by using dependent types or by using modulo arithmetic to map all integers back to defined array elements.
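For instance, a minimal Java sketch of the modulo-arithmetic idea (the `TotalArray` class is hypothetical): every integer index, including a negative one, is mapped back to a defined element, so indexing is total by construction.

```java
// Assumes size > 0; with that assumption, get/set can never fail at run time.
final class TotalArray {
    private final int[] elems;

    TotalArray(int size) { elems = new int[size]; }

    // Math.floorMod, unlike '%', also maps negative indices into range.
    private int wrap(int i) { return Math.floorMod(i, elems.length); }

    int get(int i)         { return elems[wrap(i)]; }
    void set(int i, int v) { elems[wrap(i)] = v; }
}
```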
It is perhaps also useful to look closely at verification technology and to see what features of programming languages cause the most trouble for verification efforts. Following the lead of Euclid [83, 91], and SPARK [18], it may be interesting to try to design languages (or subsets) without such features. Another way of putting this research question is: what features that are not in languages like SPARK can now be handled without causing difficulty for verification?
Some common errors may not be problems with the language itself, but may instead be problems with the use of libraries, or simply mistakes that programmers commonly make. Can rules for automatically finding such common errors, as is done in Metal [46] and Findbugs [70], be added to a programming language, under the control of tool builders or users? One simple direction for allowing such extensions may be to add features like `declare error` and `declare warning` from AspectJ [9, 79], although such mechanisms may be too simple to handle all the kinds of bugs detected by such tools.
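For reference, AspectJ’s mechanism looks roughly like this (a sketch; the particular pointcut and message are illustrative, not taken from any real rule set):

```aspectj
// Reports a compile-time error at every call site matching the pointcut.
aspect BanDirectConnections {
    declare error
        : call(java.sql.Connection java.sql.DriverManager.getConnection(..))
        : "Obtain connections from the connection pool instead.";
}
```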
7.5 Long-Term Directions
In the longer term (8–15 years), one can contemplate deeper integration, rather than just promoting extensible tools to aid specification and verification.
7.5.1 Integration of Tools and Languages
Make the programming language’s compiler a platform that makes it easier to build and integrate multiple specification and verification tools. Eclipse may be an example of the kind of development platform that is headed in the right direction, but it would need to be substantially enhanced to allow for the addition of multiple tools and to support their integration.
7.5.2 More Integration of Types and Specifications
Another goal is to find potential “sweet spots” that are intermediate between full functional (or control) specifications and type systems. Dependent types might be helpful as a technology for verification of such partial specifications, but they must be made much more accessible to programmers.
7.5.3 Integration of Rich Static Checking
Support the integration of rich static checking (verification of partial specifications) in the programming language. Researchers could explore taking some existing programming languages and providing support for flexible deduction to be allowed on source code and any assertions that are associated with that code (either in the code as type declarations, loop invariants, etc.) or separately.
Allow for possible community-based inference to be performed on a module-by-module level. Provide the elements of a computational logic that could help in performing basic source-level manipulations such as substitutions and unification. An example of such a scheme can be found in the work of the Ciao system [62].
8 Conclusions
This roadmap has described ways that researchers in four areas — specification languages, program generation, correctness by construction, and programming languages — might help the verified software grand challenge project. Researchers in these areas need challenge problems to be described in many different ways, including requirements, source code, and test cases.
In the short term, a common research goal shared by all four areas is building extensible tool frameworks that would allow researchers to more easily implement specification and verification tools. This could lead to the exploration of more research ideas and to more careful evaluation of these ideas.
In the long term, researchers can try to consolidate the best of these ideas into new theories and tools.
Acknowledgments
Thanks to the members of IFIP Working Group 2.3 (Programming Methodology) for discussions and for comments on an earlier draft of this material, presented at the Brugges meeting in March 2006. Special thanks to Michael Jackson (the one involved in IFIP WG 2.3) for his advice on narrowing the scope of this roadmap: “specialize!” (Yes, it was even broader previously.) Thanks to Shriram Krishnamurthi for several discussions and suggestions. Thanks to Rod Chapman for comments, ideas, and corrections relating to SPARK. Thanks also to the participants at the SRI Mini-Conference on Verified Software (April 1–2, 2006) and to the Dagstuhl workshop on “The Challenge of Software Verification” (July 10–13, 2006) for additional comments and suggestions. Thanks to the US National Science Foundation for grants supporting these meetings and for supporting, in part, the work of Leavens (CCF-0428078 and CCF-0429567), Fisler (CCR-0132659 and CCR-0305834), and Stump (CCF-0448275).
References
[56] J. Guttag and J. J. Horning. The algebraic specification of abstract data types. Acta Informatica, 10(1):27–52, 1978.
[57] John V. Guttag, James J. Horning, S. J. Garland, K. D. Jones, A. Modet, and J. M. Wing. Larch: Languages and Tools for Formal Specification. Springer-Verlag, New York, 1993.
[58] Klaus Havelund and Thomas Pressburger. Model checking Java programs using Java PathFinder. International Journal on Software Tools for Technology Transfer (STTT), 2(4), April 2000.
[59] Ian J. Hayes, Michael Jackson, and Cliff B. Jones. Determining the specification of a control system from that of its environment. In Araki et al. [8], pages 154–169.
[60] Eric C. R. Hehner. A Practical Theory of Programming. Texts and Monographs in Computer Science. Springer-Verlag, 1993. Available from http://www.cs.utoronto.ca/~hehner/aPToP.
[61] Eric C. R. Hehner. Formalization of time and space. Formal Aspects of Computing.
[62] M. V. Hermenegildo, G. Puebla, F. Bueno, and P. López-García. Integrated program debugging, verification, and optimization using abstract interpretation (and the Ciao system preprocessor). Science of Computer Programming, 58(1–2):115–140, 2005.
[63] … R. L. London, K. V. S. Prasad, V. R. Prasad, Jonathan Rosenberg, Mary Shaw, and William A. Wulf (editor). An informal definition of Alphard (preliminary). Technical Report CMU-CS-78-105, School of Computer Science, Carnegie Mellon University, 1978.
[65] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 583, October 1969.
[67] C. A. R. Hoare, I. J. Hayes, He Jifeng, C. C. Morgan, A. W. Roscoe, J. W. Sanders, I. H. Sorensen, J. M. Spivey, and B. A. Sufrin. Laws of programming. Communications of the ACM, 30(8):672–686, August 1987. See corrections in the September 1987 CACM.
[68] Charles Anthony Richard Hoare, Natarajan Shankar, and Jay Misra, editors. Proc. IFIP Working Conference on Verified Software: Tools, Techniques, and Experiments, Zürich, Switzerland, October 2005.
[69] Tony Hoare, Jayadev Misra, and N. Shankar. The IFIP working conference on verified software: Theories, tools, experiments. http://tinyurl.com/nrhdl, October 2005.
[70] David Hovemeyer. Simple and Effective Static Analysis to Find Bugs. PhD thesis, University of Maryland, College Park, 2005.
Model Transformations
What is a transformation?
- A **transformation** is the automatic generation of a target model from a source model, according to a transformation definition.
- A **transformation definition** is a set of transformation rules that together describe how a model in the source language can be transformed into a model in the target language.
- A **transformation rule** is a description of how one or more constructs in the source language can be transformed into one or more constructs in the target language.
- Unambiguous specifications of the way that (part of) one model can be used to create (part of) another model
- Preferred characteristics of transformations
- **semantics-preserving**
Model-to-model vs. Model-to-code
- **Model-to-model** transformations
- Transformations may be between different languages. In particular, between different languages defined by MOF
- **Model-to-text** transformations
- A special kind of model-to-model transformation
- From the MDA technical space (TS) to the grammar technical space
Transformations as models
- Treating everything as a model leads not only to conceptual simplicity and regular architecture, but also to implementation efficiency.
- An implementation of a transformation language can be composed of a transformation virtual machine plus a metamodel-driven compiler.
- The transformation VM allows uniform access to model and metamodel elements.
Model transformation
- Each model conforms to a metamodel.
- A transformation builds a target model (Mb) from a source model (Ma).
- A transformation is a model (Mt, here) conforming to a metamodel (MMt).
Characterisation of model transformations (1)
- **Endogenous vs. exogenous**
- **Endogenous** transformations are transformations between models expressed in the same metamodel. Endogenous transformations are also called **rephrasing**
- Optimisation, refactoring, simplification, and normalization of models.
- Transformations between models expressed using different meta-models are referred to as **exogenous** transformations or **translations**
- Synthesis of a higher-level specification into a lower-level one, reverse engineering, and migration from a program written in one language to another
- **Horizontal vs. vertical**
- **Horizontal** transformations are transformations where the source and target models reside at the same abstraction level
- Refactoring (an endogenous transformation) and migration (an exogenous transformation)
- **Vertical** transformations are transformations where the source and target models reside at different abstraction levels
- Refinement, where a specification is gradually refined into a full-fledged implementation
Characterisation of model transformations (2)
- **Level of automation**
- The level of automation is the grade to which a model transformation can be automated.
- **Complexity**
- Simple transformations
- Mappings for identifying relations between source and target model elements
- Complex transformations
- Synthesis, where higher-level models are refined to lower-level models
- **Preservation**
- Each transformation preserves certain aspects of the source model in the transformed target model.
- The properties that are preserved can differ significantly depending on the type of transformation.
- With refactorings the (external) behaviour needs to be preserved, while the structure is modified.
- With refinements, the program correctness needs to be preserved.
Characterisation of model transformations (3)
Transformation = Matching and deriving patterns
[Figure: a transformation definition, defined over Lang. X and Lang. Y, matches patterns in a source model expressed in Lang. X and derives patterns in a target model expressed in Lang. Y]
[Figure: transformation in the same meta-model — matched and derived patterns are both expressed in Lang. X]
[Figure: transformation in the same model — an in-place transformation matches and derives patterns within a single model]
Refinement preserves meaning and derives complex patterns
[Figure: a refinement definition maps a model expressed in Lang. X, at a higher abstraction level, to a model expressed in Lang. Y]
[Figure: refinement in the same meta-model]
[Figure: in-place refinement in the same model]
Characterisation of model transformations (4)
Features of model transformations
- **Specification**
- Some approaches provide a dedicated specification mechanism, such as pre-/post-conditions expressed in OCL.
- **Transformation rules**
- A transformation rule consists of two parts:
- A left-hand side (LHS), which accesses the source model
- A right-hand side (RHS), which expands in the target model
- A **domain** is the rule part used for accessing the models on which the rule operates
- The **body** of a domain can be divided into three subcategories
- Variables: Variables may hold elements from the source and/or target models
- Patterns: Patterns are model fragments with zero or more variables
- Logic: Logic expresses computations and constraints on model elements
- The transformations variables and patterns can be **typed**.
Features of model transformations
- **Rule application control**
- *Location determination* is the strategy for determining the model locations to which transformation rules are applied.
- *Scheduling* determines the order in which transformation rules are executed.
- **Rule organisation**
- Rule organisation is concerned with composing and structuring multiple transformation rules by mechanisms such as modularisation and reuse.
- **Source-target relationship**
- whether source and target are one and the same model or two different models
- Create new models
- Update existing models
- In-place update
Features of model transformations
- **Incrementality**
- Ability to update existing target models based on changes in the source models
- **Directionality**
- Unidirectional transformations can be executed in one direction only, in which case a target model is computed (or updated) based on a source model
- Multidirectional transformations can be executed in multiple directions, which is particularly useful in the context of model synchronisation.
Features of model transformations
- **Tracing**
- Mechanisms for recording different aspects of transformation execution, such as creating and maintaining trace links between source and target model elements.
- Trace information can be useful in
- performing impact analysis (i.e. analyzing how changing one model would affect other related models),
- determining the target of a transformation as in model synchronization
- model-based debugging (i.e. mapping the stepwise execution of an implementation back to its high-level model)
- debugging model transformations themselves
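A minimal Java sketch of such a trace store (all names here are hypothetical): each link records which rule produced which target element from which source element, which is exactly the information that impact analysis and target resolution need.

```java
import java.util.*;

// One trace link: rule 'ruleName' produced 'target' from 'source'.
record TraceLink(Object source, Object target, String ruleName) {}

class Trace {
    private final List<TraceLink> links = new ArrayList<>();

    void record(Object source, Object target, String rule) {
        links.add(new TraceLink(source, target, rule));
    }

    // Find the target created from 'source' by rule 'rule', if any.
    Optional<Object> resolve(Object source, String rule) {
        return links.stream()
                .filter(l -> l.source() == source && l.ruleName().equals(rule))
                .map(TraceLink::target)
                .findFirst();
    }
}
```

The `resolve` operations of QVT shown later in this section can be read as a language-level interface to a store of this kind.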
Model-to-model approaches (1)
- **Direct manipulation approaches**
- Offers an internal model representation and some APIs to manipulate it
- Usually implemented as an object-oriented framework
- Users usually have to implement transformation rules, scheduling, tracing, etc.
- Examples: Java Metadata Interface (JMI), EMF, … (a sketch follows after this list)
- **Structure-driven approaches**
- Two distinct phases:
- The first phase is concerned with creating the hierarchical structure of the target model
- The second phase sets the attributes and references in the target
- The overall framework determines the scheduling and application strategy; users are only concerned with providing the transformation rules
- Example: OptimalJ
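To make the direct-manipulation style above concrete, here is a minimal sketch using EMF’s Ecore API (the transformation is just hand-written Java; rule scheduling, tracing, and error handling would all be the user’s job):

```java
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;

class CopyClasses {
    // Builds a new package containing a copy of every EClass in 'source'.
    EPackage transform(EPackage source) {
        EPackage target = EcoreFactory.eINSTANCE.createEPackage();
        target.setName(source.getName() + "Copy");
        for (EClassifier c : source.getEClassifiers()) {
            if (c instanceof EClass) {
                EClass copy = EcoreFactory.eINSTANCE.createEClass();
                copy.setName(c.getName());
                target.getEClassifiers().add(copy);
            }
        }
        return target;
    }
}
```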
Model-to-model approaches (2)
- **Template-based approaches**
- Model templates are models with embedded meta-code that compute the variable parts of the resulting template instances.
- Model templates are usually expressed in the concrete syntax of the target language, which helps the developer to predict the result of template instantiation.
- Typical annotations are conditions, iterations, and expressions, all being part of the meta-language. An expression language to be used in the meta-language is OCL.
- Examples: Czarnecki, Antkiewicz (2005)
- **Operational approaches**
- Similar to direct manipulation but offer more dedicated support for model transformation.
- Extend the utilized metamodeling formalism with facilities for expressing computations.
- Extend a query language such as OCL with imperative constructs.
- The combination of MOF with such extended executable OCL becomes a fully-fledged object-oriented programming system.
- Examples: QVT Operational mappings, XMF-Mosaic's executable MOF, MTL, C-SAW, Kermeta, etc.
Model-to-model approaches (3)
- **Relational approaches**
- Declarative approaches in which the main concept is mathematical relations
- The basic idea is to specify the relations among source and target element types using constraints
- Since declarative constraints are non-executable, declarative approaches give them an executable semantics, such as in logic programming
- Relational approaches are side-effect-free, support multidirectional rules, can provide backtracking …
- Examples: QVT Relations, MTF, Kent Model Transformation Language, Tefkat, AMW, mappings in XMF-Mosaic, etc.
Model-to-model approaches (4)
- **Graph-transformation-based approaches**
- Based on the theoretical work on graph transformations
- Operates on typed, attributed, labelled graphs
- Graph transformation rules have an LHS and an RHS graph pattern.
- The LHS pattern is matched in the model being transformed and replaced by the RHS pattern in place
- Additional logic, for example, in string and numeric domains, is needed to compute target attribute values such as element names
- Examples: AGG, AToM3, VIATRA, GReAT, UMLX, BOTL, MOLA, Fujaba, etc.
Model-to-model approaches (5)
- **Hybrid approaches**
- Hybrid approaches combine different techniques from the previous categories
- as separate components
- or/and, in a more fine-grained fashion, at the level of individual rules
- In a hybrid rule, the source or target patterns are complemented with a block of imperative logic which is run after the application of the target pattern
- Rules are unidirectional and support rule inheritance.
- Examples:
- Separate components: QVT (Relations, Operational mappings, and Core)
- Fine-grained combination: ATL and YATL
Model-to-model approaches (6)
- **Other approaches**
- Extensible Stylesheet Language Transformation (XSLT)
- Models can be serialized as XML using the XMI
- Model transformations can be implemented with Extensible Stylesheet Language Transformation (XSLT), which is a standard technology for transforming XML
- The use of XMI and XSLT has scalability limitations
- Manual implementation of model transformations in XSLT quickly leads to non-maintainable implementations
- Application of meta-programming to model transformation
- Domain-specific language for model transformations embedded in a meta-programming language.
Model-to-text approaches
• **Visitor-based** approaches
• Use visitor mechanism to traverse the internal representation of a model and write text to a text stream
• Example: Jamda (a sketch follows below)
• **Template-based** approaches
• The majority of currently available MDA tools support template-based model-to-text generation
• structure of a template resembles more closely the code to be generated
• Templates lend themselves to iterative development (they can be derived from examples)
• A template consists of the target text containing slices of meta-code to access information from the source
• Examples: oAW, JET, Codagen Architect, AndroMDA, ArcStyler, MetaEdit, OptimalJ, etc.
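A minimal Java sketch of the visitor-based style mentioned above (not Jamda’s actual API; all names are hypothetical): a visitor walks the model’s internal representation and writes text to a stream.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.List;

interface ModelVisitor {
    void visitClass(String name, List<String> attributeNames);
}

class JavaSourceEmitter implements ModelVisitor {
    private final PrintWriter out;

    JavaSourceEmitter(StringWriter sink) { out = new PrintWriter(sink, true); }

    @Override
    public void visitClass(String name, List<String> attributeNames) {
        out.printf("public class %s {%n", name);
        for (String attr : attributeNames)
            out.printf("    private String %s;%n", attr);
        out.println("}");
    }
}
```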
QVT Operational
MOF QVT: OMG’s model-to-model transformation standard
- **QVT** stands for **Query/Views/Transformations**
- OMG standard language for expressing *queries, views,* and *transformations* on MOF models
- OMG QVT Request for Proposals (QVT RFP, ad/02-04-10) issued in 2002
- Seven initial submissions that converged to a common proposal
- Current status (June, 2011): version 1.1, formal/11-01-01
[http://www.omg.org/spec/QVT/1.0/](http://www.omg.org/spec/QVT/1.0/)
MOF QVT context
- Abstract syntax of the language is defined as MOF 2.0 metamodel
- Transformations (Tab) are defined on the base of MOF 2.0 metamodels (MMa, MMb)
- Transformations are executed on instances of MOF 2.0 metamodels (Ma)
### Requirements for MOF QVT language
- Some requirements formulated in the QVT RFP
<table>
<thead>
<tr>
<th>Mandatory requirements</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Query language</td>
<td>Proposals shall define a language for querying models</td>
</tr>
<tr>
<td>Transformation language</td>
<td>Proposals shall define a language for transformation definitions</td>
</tr>
<tr>
<td>Abstract syntax</td>
<td>The abstract syntax of the QVT languages shall be described as MOF 2.0 metamodel</td>
</tr>
<tr>
<td>Paradigm</td>
<td>The transformation definition language shall be declarative</td>
</tr>
<tr>
<td>Input and output</td>
<td>All the mechanisms defined by proposals shall operate on models instances of MOF 2.0 metamodel</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Optional requirements</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Directionality</td>
<td>Proposals may support transformation definitions that can be executed in two directions</td>
</tr>
<tr>
<td>Traceability</td>
<td>Proposals may support traceability between source and target model elements</td>
</tr>
<tr>
<td>Reusability</td>
<td>Proposals may support mechanisms for reuse of transformation definitions</td>
</tr>
<tr>
<td>Model update</td>
<td>Proposals may support execution of transformations that update an existing model</td>
</tr>
</tbody>
</table>
MOF QVT architecture
- Layered architecture with three transformation languages:
- **Relations** (declarative)
- Core (declarative, simpler than Relations)
- **Operational Mappings** (imperative)
- Black Box is a mechanism for calling external programs during transformation execution
- QVT is a set of three languages that collectively provide a hybrid “language”.
Overview of Operational Mappings (OM)
- Imperative transformation language that extends relations
- OM execution overview:
- **Init**: code to be executed before the instantiation of the declared outputs.
- **Instantiation** (internal): creates all output parameters that have a null value at the end of the initialization section
- **Population**: code to populate the result and the parameters declared as out or inout.
- **End**: code to be executed before exiting the operation.
- Automatic handling of traceability links
- Transformations are unidirectional
- Supported execution scenarios:
- Model transformations
- In-place update
- OM uses explicit internal scheduling, where the sequence of applying the transformation rules is specified within the transformation rules
- Updates have to be implemented in the model transformations
Flattening class hierarchies example
- Flattening UML class hierarchies: given a source UML model transform it to another UML model in which only the leaf classes (classes not extended by other classes) in inheritance hierarchies are kept.
- Rules:
- Transform only the leaf classes in the source model
- Include the inherited attributes and associations
- Attributes with the same name override the inherited attributes
- Copy the primitive types
Sample input model
[Figure: sample UML class model with classes Course (name : String), Person (name : String, ssn : String), Student, EnrolledInSchool (school : String), Address (street : String, city : String), Employee, PhDStudent, Employed (organizationName : String), Professor (name : FullName), FullName (firstName : String, lastName : String), Car, and the «primitive type» String; associations include attends, residesAt, supervisor, and carOwnership.]
Sample output model
- **Course**
- name : String
- **Address**
- street : String
- city : String
- **PhDStudent**
- name : String
- ssn : String
- school : String
- **Professor**
- name : FullName
- ssn : String
- organizationName : String
- **FullName**
- firstName : String
- lastName : String
- **Car**
- **relationships**
- attends
- residesAt
- carOwnership
**OM language: Transformation program structure**
```plaintext
-- Signature: declares the transformation name and the source and target
-- metamodels; the 'in' and 'out' keywords mark the source and target
-- model variables.
transformation flatten(in hierarchical : UML, out flat : UML);

-- Entry point: execution of the transformation starts here, by executing
-- the operations in the body of main().
main() {
  ...
}

-- Transformation elements: the transformation consists of mapping
-- operations and helpers that form the transformation logic.
... helper declarations ...
... mapping operation declarations ...
```
Mapping operations
- A mapping operation maps one or more source elements into one or more target elements
- Always unidirectional
- Selects source elements on the base of a type and a Boolean condition (guard)
- Executes operations in its body to create target elements
- May invoke other mapping operations and may be invoked
- Mapping operations may be related by inheritance, merging, and disjunction
General structure of mapping operations
```
mapping Type::operationName((in|out|inout) pName : pType)* : (rName : rType)+
when  { guardExpression }   -- pre-condition
where { guardExpression }   -- post-condition
{
  init {
    -- code executed before the instantiation of the declared result elements
  }
  -- An implicit instantiation section creates all output parameters that
  -- still have a null value at the end of the init section; the trace
  -- links are created in the instantiation section.
  population {
    -- code that populates the result and the parameters declared as out or
    -- inout; the 'population' keyword may be omitted, as population is the
    -- default section of the operation body.
  }
  end {
    -- code executed before exiting the operation
  }
}
```
Mapping operations: Example
- Rule for transforming leaf classes
- selects only classes without subclasses, collects all the inherited properties and associations, creates new class in the target model
```
mapping Class::copyLeafClass() : Class
when {
  not hierarchical.allInstances(Generalization)->exists(g | g.general = self)
} {
  name := self.name;
  ownedAttribute += self.ownedAttribute.map copyOwnedProperty();
  ownedAttribute += (self.allFeatures()[Property] - self.ownedAttribute)
                      .copyProperty(self);
  self.allFeatures()[Property]
    ->select(p | not p.association.oclIsUndefined())
    .association.copyAssociation(self);
}
```
- Mappings only executed once
- Call of mappings with OCL-syntax (`collection->map` vs. `object.map`)
Helpers: Example
```plaintext
intermediate property Property::mappedTo : Set(Tuple(c : Class, p : Property));
helper Property::copyProperty(in c : Class) : Property {
log('[Property] name = ' + self.name);
var copy := object Property {
name := self.name;
type := self.type.map transformType();
};
self.mappedTo += Tuple{ c = c, p = copy };
return copy;
}
```
Resolving object references
- The transformation engine maintains links among source and target model elements. These links are used for resolving object references from source to target model elements and back.
- `resolveIn` is an operation that looks for model elements of a given type (`Class`) in the target model derived from a source element by applying a given rule (`copyLeafClass`).
```
helper Association::copyAssociation(in c : Class) : Association {
  var theOwnedEnd : Property := self.ownedEnd->any(true); …
  return object Association {
    name := self.name;
    package := self.package.resolveoneIn(Package::transformPackage, Package);
    ownedEnd += new Property(theOwnedEnd.name,        -- call to constructor
                  c.resolveoneIn(Class::copyLeafClass, Class)); …
  }
}
```
- Variants: `resolve(i | exp)`, `resolveone(i | exp)`
- *late resolve* for resolving *after* the transformation (in order of calls)
Mapping operations: Disjunction, inheritance, merging
```
mapping DataType::copyDataType() : DataType {
  name := self.name;
  ownedAttribute += self.ownedAttribute.map copyOwnedProperty();
}

mapping PrimitiveType::copyPrimitiveType() : PrimitiveType {
  init {
    result := self.deepclone().oclAsType(PrimitiveType);
  }
}

mapping Type::transformType() : Type
  disjuncts DataType::copyDataType,
            Class::copyLeafClass,
            PrimitiveType::copyPrimitiveType;
```
• Inherited rules executed after init
• Merged rules executed after end
Imperative OCL constructs
• More sophisticated control flow
• `compute (v : T := exp) body` — like `let ... in`
• `while (cond) body`
• `coll->forEach (i | exp) body`
• `break`, `continue`
• `switch` statement, exceptions
MOFM2T: OMG’s model-to-text transformation standard
- **M2T** stands for **Model-to-Text**
- OMG standard language for *transforming* MOF models into text
- Current status (June, 2011): version 1.0, formal/08-01-16
http://www.omg.org/spec/MOFM2T/1.0/
M2T Transformations: Example (1)
```plaintext
[comment encoding = UTF-8 /]
[** Java Beans-style code from a UML static structure model. /]
[module generate('http://www.eclipse.org/uml2/3.0.0/UML')]

[**
 * Generate a Java file from a UML class.
 * @param aClass
/]
[template public generateClass(aClass : Class)]
[comment @main/]
[file (aClass.name.concat('.java'), false, 'UTF-8')]
public class [aClass.name/] {
[for (p : Property | aClass.attribute) separator('\n')]
  [generateClassAttribute(p)/]
[/for]
}
[/file]
[/template]
```
- **verbatim text**
- **call of another template**
- **top-level rule (several possible)**
- **output in file, not appending**
- **metamodel type**
M2T Transformations: Example (2)
```plaintext
[template public generateClassAttribute(aProperty : Property)]
  private [getTypeName(aProperty.type)/] [aProperty.name/];

  public [getTypeName(aProperty.type)/] get[aProperty.name.toUpperFirst()/]() {
    // [protected(aProperty.name)]
    // TODO implement
    // [/protected]
    return this.[aProperty.name/];
  }
[/template]

[template public generateDataType(aDataType : DataType)]
[comment @main/]
[file (aDataType.name.concat('.java'), false, 'UTF-8')]
public class [aDataType.name/]
[for (p : Property | aDataType.attribute) before('{') separator('\n') after('}')]
  public [getTypeName(p.type)/] [p.name/];
[/for]
[/file]
[/template]

[query public getTypeName(aType : Type) : String = aType.name /]
```
MOFM2T features
- **Tracing**
- `trace(id) ... [/trace]`
- **Change of escape direction**
- `@text-explicit` (default, shown above)
- `@code-explicit`
- **Macros**
- **Module structure**
- Public module elements visible outside a module
- Guards on templates for selecting a template when overriding (overridden template callable with `super/`)
- **No type checking of output**
Model Transformation Languages
Model-to-model approaches: Example
1. Package-to-schema
• Every package in the class model should be mapped to a schema with the same name as the package.
2. Class-to-table
• Every persistent class should be mapped to a table with the same name as the class. Furthermore, the table should have a primary-key column with the type NUMBER and the name being the class name with _tid appended.
3. Attribute-to-column
• The class attributes have to be appropriately mapped to columns, and some columns may need to be related to other tables by foreign key definitions.
UML to RDBMS example: Metamodel
ATLAS Transformation Language (ATL)
- **Hybrid** approach
- declarative rules and imperative blocks
- based on OCL
- Developed by ATLAS Group (INRIA & LINA)
- Integrated into Eclipse platform
http://www.eclipse.org/m2m/atl/
- **Modules** composed of
- Rules
- matched rules (top-level)
- called rules
- Helpers
- **Normal execution mode**: target model generated by explicit rules
- **Refinement execution mode**: target model generated by explicit rules + all model elements that are not changed by rules
ATL: Matched rules
- Pattern-based generation of target elements from source elements
```plaintext
rule rule_name {
from in_var : in_type [(condition)]?
[using {
var1 : var_type1 = init_exp1;
...
varn : var_typen = init_expn;
}]
to
out_var1 : out_type1 (bindings1),
out_var2 : distinct out_type2 foreach (e in collection) (bindings2),
...
out_varn : out_typen (bindingsn)
[do {
statements
}]
}
```
- Source pattern
- Local variables
- Target patterns
- Iterated target pattern
- Imperative block for changing target elements
ATL: Example (1)
```plaintext
module SimpleClass2SimpleRDBMS;
create OUT : SimpleRDBMS from IN : SimpleClass;

rule PersistentClass2Table {
  from c : SimpleClass!Class
    (c.is_persistent and c.parent->oclIsUndefined())
  using {
    primary_attributes :
      Sequence(TupleType(name : String,
                         type : SimpleClass!Classifier,
                         isPrimary : Boolean)) =
        c.flattenedFeatures->select(f | f.isPrimary);
    persistent_features : Sequence(TupleType(...)) = ...;
    foreign_key_attributes : Sequence(TupleType(...)) = ...;
    rest_of_attributes :
      Sequence(TupleType(name : String,
                         type : SimpleClass!Classifier)) =
        c.flattenedFeatures->
          select(tup | not tup.isPrimary and
                       not tup.type->oclIsKindOf(SimpleClass!Class));
  }
```
ATL: Example (2)
```plaintext
to t : SimpleRDBMS!Table
(name<-c.name,
cols<-primary_key_columns->union(foreign_key_columns)->union(rest),
pkey<-primary_key_columns,
fkeys<-foreign_keys),
primary_key_columns : distinct SimpleRDBMS!Column
foreach (primAttr in primary_attributes)
(name<-primAttr.name,
type<-primAttr.type.name),
...
}
helper context SimpleClass!Class def :
allAttributes : Sequence(SimpleClass!Attribute) =
self.attrs->
union(if not self.parent.oclIsUndefined()
then self.parent.allAttributes->select(attr |
not self.attrs->exists(at | at.name = attr.name))
else
Sequence {}
endif)->flatten();
...
```
QVT Relations: Language Overview
- Declarative language based on relations defined on model elements in meta-models
- Object patterns that may be matched and instantiated
- Automatic handling of traceability links
- Transformations are potentially multidirectional
- Supported execution scenarios:
- Check-only: verifies if given models are related in a certain way
- Unidirectional transformations
- Multidirectional transformations
- Incremental update of existing models
- Relations uses implicit rule scheduling which is based on the dependencies among the relations.
- The Relations semantics is divided into two steps:
- First, a checking step determines whether there exists a valid match in the target model that satisfies the relationship with the source model.
- On the basis of the checking results, the enforcement semantics then modifies the target model so that it satisfies the relationship to the source model.
Relations transformations
• Relations **transformations** are specified between candidate models as a set of relations that must hold for the transformation to be successful. A **candidate model** is any model that conforms to a model type.
• In a **relation**, **domains** are declared that match elements in the candidate models.
• Relations can be further constrained by two sets of **predicates**, a when clause and a where clause.
• The **when** clause specifies the conditions under which the relationship needs to hold
• The **where** clause specifies the condition that must be satisfied by all model elements participating in the relation.
• Each of the domains is also associated with several **object template expressions** used to match **patterns** in the candidate models
• Pattern matching is the process to determine correspondences between the candidate models
• **Checkonly** and **enforce** determine in which direction the transformation is executed.
• Existing objects are updated; for this purpose, **keys** uniquely identify object instances.
QVT Relations: Graphical syntax
Figure from [QVTP]
Relational approach: QVT Relations (1)
```plaintext
top relation ClassToTable {
  cn : String; prefix : String;
  checkonly domain uml c : SimpleUML::UmlClass {
    umlNamespace = p : SimpleUML::UmlPackage { },
    umlKind = 'Persistent', umlName = cn
  };
  enforce domain rdbms t : SimpleRDBMS::RdbmsTable {
    rdbmsSchema = s : SimpleRDBMS::RdbmsSchema { },
    rdbmsName = cn,
    rdbmsColumn = cl : SimpleRDBMS::RdbmsColumn {
      rdbmsName = cn + '_tid', rdbmsType = 'NUMBER'
    },
    rdbmsKey = k : SimpleRDBMS::RdbmsKey {
      rdbmsColumn = cl : SimpleRDBMS::RdbmsColumn { }
    }
  };
  when  { PackageToSchema(p, s); }
  where { ClassToPkey(c, k); prefix = cn;
          AttributeToColumn(c, t, prefix); }
}
```
Relational approach: QVT Relations (2)
```plaintext
relation AttributeToColumn {
  checkonly domain uml c : SimpleUML::UmlClass { };
  enforce domain rdbms t : SimpleRDBMS::RdbmsTable { };
  primitive domain prefix : String;
  where {
    ComplexAttributeToColumn(c, t, prefix);
    PrimitiveAttributeToColumn(c, t, prefix);
    SuperAttributeToColumn(c, t, prefix);
  }
}
```
Graph-transformation approach: AGG
Graph-transformation approach: MOFLON
Graph-transformation approach: Mola
# Model-to-model approaches: Comparison (1)
<table>
<thead>
<tr>
<th>Transformation scenarios</th>
<th>ATL</th>
<th>QVT Rel.</th>
<th>QVT Op.</th>
<th>MOFLON</th>
<th>AGG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model synchronisation</td>
<td>×</td>
<td>✓</td>
<td>×</td>
<td>×</td>
<td>×</td>
</tr>
<tr>
<td>Conformance checking</td>
<td>×</td>
<td>✓</td>
<td>×</td>
<td>×</td>
<td>×</td>
</tr>
<tr>
<td>Model transformation</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>In-place update</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Interactive transformation</td>
<td>×</td>
<td>×</td>
<td>×</td>
<td>×</td>
<td>✓</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Paradigm</th>
<th>ATL</th>
<th>QVT Rel.</th>
<th>QVT Op.</th>
<th>MOFLON</th>
<th>AGG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Declarative</td>
<td>✓</td>
<td>✓</td>
<td>×</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Hybrid</td>
<td>✓</td>
<td>×</td>
<td>×</td>
<td>×</td>
<td>×</td>
</tr>
<tr>
<td>Imperative</td>
<td>✓</td>
<td>×</td>
<td>✓</td>
<td>×</td>
<td>×</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Directionality</th>
<th>ATL</th>
<th>QVT Rel.</th>
<th>QVT Op.</th>
<th>MOFLON</th>
<th>AGG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Unidirectional</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Multidirectional</td>
<td>×</td>
<td>✓</td>
<td>×</td>
<td>✓</td>
<td>×</td>
</tr>
</tbody>
</table>
## Model-to-model approaches: Comparison (2)
<table>
<thead>
<tr>
<th></th>
<th>ATL</th>
<th>QVT Rel.</th>
<th>QVT Op.</th>
<th>MOFLON</th>
<th>AGG</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Cardinality</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>M-to-N</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>1-to-1</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td><strong>Traceability</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Automatic</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>User-specified</td>
<td>×</td>
<td>×</td>
<td>×</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td><strong>Query language</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>OCL-based</td>
<td></td>
<td>Object patterns</td>
<td>OCL-based</td>
<td>Graph patterns</td>
<td>Graph patterns</td>
</tr>
<tr>
<td><strong>Rule scheduling</strong></td>
<td></td>
<td>implicit, explicit</td>
<td>implicit</td>
<td>explicit</td>
<td>implicit, explicit</td>
</tr>
<tr>
<td><strong>Rule organisation</strong></td>
<td></td>
<td>inherit., libraries</td>
<td>inherit.</td>
<td>inherit.</td>
<td>layering</td>
</tr>
<tr>
<td><strong>Reflection</strong></td>
<td></td>
<td>runtime access to transf.</td>
<td>×</td>
<td>×</td>
<td>×</td>
</tr>
</tbody>
</table>
Java Emitter Templates (JET)
• Template-based model-to-text transformation approach
• avoids writing repetitive glue code
• code generation from Java objects
• transformation of XML, XMI
• integrated with EMF
• Like Java Server Pages (JSPs)
• expressions (`<%= ... %>`)
• scriptlets for inserting arbitrary Java statements (`<% ... %>`)
• JET translated into Java class behind the scenes
• JET1
• generate(Object argument)
• JET2
• generate(JET2Context context, JET2Writer out)
JET2: Example — Template
```html
<%@jet package="purchase"
class="PurchaseOrderTest"
imports="java.util.*" %>
<% PurchaseOrder order = (PurchaseOrder)context.getSource(); %>
<HTML>
<HEAD>Purchases</HEAD>
<BODY>
<P>Order to: <%=order.getShipTo()%> (bill to: <%=order.getBillTo()%>)
</P>
<UL>
<% for (Item item : order.getItems()) { %>
<LI>Item <%=item.getProductName()%>
<% } %>
</UL>
</P>
</BODY>
</HTML>
```
Behind the scenes, the template is translated into a Java class:

```java
public void generate(final JET2Context context, final JET2Writer __out) {
  JET2Writer out = __out;
  out.write("<?xml version=\"1.0\" encoding=\"utf-8\"?>"); //$NON-NLS-1$
  out.write(NL); out.write(NL);
  PurchaseOrder order = (PurchaseOrder)context.getSource();
  out.write(NL); out.write("<HTML>"); //$NON-NLS-1$
  out.write(NL); out.write("<HEAD>Purchases</HEAD>"); //$NON-NLS-1$
  out.write(NL); out.write(NL); out.write("<BODY>"); //$NON-NLS-1$
  out.write(NL); out.write("<P>Order to: "); //$NON-NLS-1$
  out.write(order.getShipTo()); out.write(" (bill to: "); //$NON-NLS-1$
  out.write(order.getBillTo()); out.write(")"); //$NON-NLS-1$
  out.write(NL); out.write(NL); out.write("<UL>"); //$NON-NLS-1$
  out.write(NL);
  for (Item item : order.getItems()) {
    out.write("<LI>Item "); //$NON-NLS-1$
    out.write(item.getProductName());
    out.write(NL);
  }
  out.write("</UL>"); //$NON-NLS-1$
  out.write(NL); out.write("</P>"); //$NON-NLS-1$
  out.write(NL); out.write("</BODY>"); //$NON-NLS-1$
  out.write(NL); out.write("</HTML> "); //$NON-NLS-1$
}
```
Using the generated template class:

```java
PurchaseFactory purchaseFactory = PurchaseFactory.eINSTANCE;
PurchaseOrder order1 = purchaseFactory.createPurchaseOrder();
order1.setBillTo("A");
order1.setShipTo("B");

Item item1 = purchaseFactory.createItem();
item1.setProductName("X"); item1.setPrice(100.0f); item1.setQuantity(3);
item1.setOrder(order1);

Item item2 = purchaseFactory.createItem();
item2.setProductName("Y"); item2.setPrice(200.0f); item2.setQuantity(2);
item2.setOrder(order1);

JET2Writer writer = new BodyContentWriter();
new PurchaseOrderTest().generate(new JET2Context(order1), writer);
System.out.println(writer.toString());
```
Domain-Specific Languages
UML – one size fits all?
- While the OMG MDA promotes UML as the visual “universal” glue suitable for modelling everything, there is also a trend towards the development and co-existence of several domain-specific modelling languages (DSLs).
- UML is seen as a “general-purpose” language while DSLs may be more expressive for most purposes.
- A model-driven framework needs to acknowledge the existence of different models and views expressed in different modelling languages.
- The MDA technologies (MOF, UML) can help to align these models through a common (meta-)meta-modelling language (MOF) on which model transformations and model mappings can be defined.
[Figure (© MetaCase): comparing abstraction paths — with UML, the developer solves the problem in domain terms, maps it to a UML model, maps that to code, and implements/generates bodies down to assembler; with a domain-specific language there is no need to map: the domain model, via a domain framework, generates the code of the finished product.]
Advantages of using UML profiles
- UML is open standard language: many available books and training courses.
- UML is a recognized and transferable skill for software developers
- UML profiles provide a lightweight approach that is easily implemented using readily available UML tooling.
- Models with UML profiles applied can be read by all UML tools, even if they don’t have any knowledge of the profile.
- Basing all DSLs on UML creates a set of related languages that share common concepts.
- makes new profiles more readily understandable
- enables models expressed by different DSLs to be integrated easily
Disadvantages of using UML profiles
• New meta-models can be adjusted to specific user groups, application domains, and usage contexts, whereas UML profiles only permit a limited amount of customisation:
• new modelling concepts can only be expressed by extending existing UML elements.
• In a DSL, the semantics of the modelling language is more understandable to users from the application domain: the scope of a DSL is customised to its application domain and use,
• and users are guided by the modelling language towards certain types of solutions.
• The use of UML does require familiarity with modelling concepts.
• It is necessary to restrict the usage of UML with UML profiles, since most uses of UML rely on only a small subset of the entire meta-model.
• In general, it is much more difficult to work by restriction than by extension (i.e., by developing new meta-models).
• Working by extension fosters the automation of code generation, since code generation then has to take fewer modelling and interpretation possibilities into account.
Rationale for Using Profiles vs. MOF (benefits)
- Profiles
- are used for extending the UML language (the “reference meta-model”)
- are supported by UML Case tools
- guarantee the UML conformance of the extensions
- provide a dynamic extension capacity (i.e. extending an existing model)
- Typical example: *UML for a certain purpose*
- MOF extensions
- are used to create new meta-models
- apply to any meta-model
- New models are created from MOF extensions (no existing model updates)
- are supported by meta-CASE tools or infrastructure
- Typical example: *New meta-model* (e.g. DSLs for workflows, services etc.)
Meta-model characteristics
• Suited for target roles
• Support domain concepts and scenarios of target roles
• Ease-of-use and understandable for modeler (use terms)
• Support precise details and correctness for solution architect
• Avoid unnecessary complexity
• Keep it simple, stupid (KISS)
• Number of elements and associations
• Type and navigation of associations
• Make it modular
• Provide core with extensions
• Define and illustrate possible subsets (“dialects”) that support scenarios
• Consider integration and extension points
• Suited for implementation
• EMF representation
• Transformation from/to UML profile
• Transformation to PSM/Code
Jinn: Hijacking Safe Programs with Trojans
Komail Dharsee
University of Rochester
John Criswell
University of Rochester
Abstract
Untrusted hardware supply chains enable malicious, powerful, and permanent alterations to processors known as hardware trojans. Such hardware trojans can undermine any software-enforced security policies deployed on top of the hardware. Existing defenses target a select set of hardware components, specifically those that implement hardware-enforced security mechanisms such as cryptographic cores, user/kernel privilege isolation, and memory protections.
We observe that computing systems exercise general-purpose processor logic to implement software-enforced security policies. This makes general-purpose logic security-critical, since tampering with it can violate software-based security policies. Leveraging this insight, we develop a novel class of hardware trojans, which we dub Jinn trojans, that corrupt general-purpose hardware and can hide in many places within a processor to enable flexible and powerful high-level attacks. Jinn trojans deactivate compiler-based security-enforcement mechanisms, making type-safe software vulnerable to memory-safety attacks by compromising a single bit of architectural state. We show that Jinn trojans are effective even when planted in general-purpose hardware disjoint from any hardware-enforced security components; protecting hardware-enforced security logic is therefore insufficient to keep a system secure from hardware trojans.
1 Introduction
The increasing complexity of modern systems-on-chip (SoCs) incentivizes companies to outsource the design of hardware blocks [21, 35]. Similar to code reuse via software libraries, sourcing third-party intellectual property (3PIP) allows system integrators to benefit from highly optimized or specialized designs. The increased dependence on 3PIP exposes the hardware supply chain to the danger of malicious logic planted at design time [34, 41, 44, 47, 86]. Rogue designers or malicious design houses can inject design-time trojans that permanently hide in an otherwise functional hardware component. Such trojans compromise the security of SoCs assembled by system integrators. Maliciously designed components masquerade as benign functional units but alter critical signals under a stealthy set of run-time conditions, allowing the trojan to persist undetected until deployment. When the run-time conditions trigger the trojan [65, 86], the malicious hardware modifies the processor’s behavior to enable attacks against the system. Such trojans can leak cryptographic keys [31, 48, 51] or cause application code to execute within the processor’s privileged mode [44].
Existing hardware trojans typically target processor mechanisms that implement hardware-enforced security policies (e.g., the user/kernel mode bit [82]). Such trojans, limited to attacking hardware-enforced security logic, invite increased scrutiny of the hardware components implementing those security features; such components may be subjected to verification or simply implemented in-house [80]. Further, prior work on defense mechanisms emphasizes dependence on annotated hardware designs or on security-critical hardware invariants that are generally decided at design time and are agnostic to any particular software workload [36, 72, 84, 87]. From a malicious hardware designer’s perspective, it is challenging to place design-time trojans in components that implement hardware-enforced security policies, as those components are heavily analyzed and verified.
Many security policies are enforced by software that utilizes processor features not typically associated with security. For example, array bounds checks inserted by a compiler for a type-safe programming language protect programs from buffer overflow attacks [58, 70] using simple comparison and conditional branch instructions. We find that trojans that tamper with hardware used to implement software-enforced security policies can deliver payloads with comparable effects on system security while evading modern trojan detection schemes. Such trojans can indiscriminately hijack software running in multiple processor modes, affecting application code, operating system kernel code, and software running within a trusted execution environment (TEE).
Such trojans are not isolated to specific processor components typically associated with security enforcement; any part of the processor that implements instructions used by software to enforce security policies can be tampered with to implement these trojans. Therefore, there is no singular self-contained component that requires increased scrutiny. In short, we observe that general-purpose hardware used to implement instructions involved in compiler-injected safety checks is security-critical.
Despite their advantages, designing a trojan that throttles software-enforced security policies poses several challenges:
- Trojans must precisely distinguish between instructions from software that is enforcing security and the same instructions used elsewhere.
- Malicious hardware designers may be limited to hardware modifications within a single non-security-critical Intellectual Property (IP) block.
- Trojans cannot be tightly tailored to specific software and must remain useful across patches to application software.
We prototype Jinn trojans, a new class of trojans that attacks in-flight memory safety checks injected by type-safe programming language compilers. Such compilers add auxiliary instructions into a program during compilation to maintain that program’s safety properties (in this case, type-safety and memory-safety) during execution. Our Jinn trojans hide in general-purpose hardware IP blocks of a CPU core and tamper with the execution of safety checks, allowing attackers to exploit the now vulnerable memory accesses within type-safe software. In effect, Jinn trojans make type-safe software vulnerable to return-to-libc [71], return-oriented programming [58], and other memory safety attacks.
In summary, this paper makes the following contributions:
- We show that building a Jinn trojan is possible. Using gem5, we build end-to-end Jinn trojans that successfully launch code-reuse attacks on Rust programs to spawn a shell.
- We implement a Jinn trojan prototype in a large out-of-order RISC-V core and evaluate its complexity and power consumption.
- We design two trigger mechanisms necessary to deliver Jinn payloads and demonstrate trade-offs between versatility, precision, and attacker-effort.
The rest of the paper is organized as follows. Section 2 provides background material on hardware trojans and memory safety. Section 3 describes our threat model. Section 4 describes the Jinn attack methodology. Section 5 describes the design of Jinn hardware trojans, and Section 6 describes the attacker steps and corresponding malicious software that exercises the trojans. Section 7 describes our end-to-end attack implementations. Section 8 describes our RTL implementation. Section 9 presents the complexity of our gem5 and RTL trojan implementations. Section 10 discusses potential mitigations against Jinn trojans. Section 11 compares Jinn-style attacks to related work. Finally, Section 12 presents our conclusions.
2 Background
2.1 An Untrusted Hardware Supply Chain
The hardware supply chain broadly comprises three stages: design, fabrication, and deployment. The growing complexity of hardware designs and market deadline requirements encourage heavy re-use of hardware component designs, similar to the reuse of software libraries. Hardware engineers at the design stage use Hardware Description Languages (HDLs) like Chisel [20], Bluespec [54], Verilog, and VHDL to specify the behavior of hardware components. Proprietary hardware designs are commonly implemented and integrated at this stage; they may be distributed as black-boxes, protecting third-party intellectual property (3PIP) to maintain profitability of highly specialized and optimized hardware designs [35].
The design stage presents an opportunity for powerful attackers such as nation-states or significant stakeholders in the hardware design space to inject malicious functionality into hardware designs [69]. Such malicious alterations are called design-time hardware trojans [34, 41, 44, 47, 86]. An attacker attempting to place design-time trojans will have access to the HDL-level implementation of the hardware design, allowing attackers to implement trojans that affect relatively high level behavior of the hardware. In comparison, fabrication-time trojans [23, 31, 48, 56, 82] are injected through alterations at the layout level of a hardware design and, consequently, rely on the sophistication of reverse-engineering techniques [57] to infer high level behavior.
2.1.1 Hardware Trojan Construction
The design of hardware trojans can be separated into two logical components: a trigger and payload [69].
The trigger defines the mechanism for activating the trojan’s malicious behavior; until the trigger activates the trojan, the processor exhibits no malicious behavior. Effective trojans must evade verification testing, when hardware blocks are tested for their functionality: trojans that erroneously activate on benign workloads will be caught during verification testing and fail to reach deployment, as the processor’s behavior will deviate from expected benign behavior. Trojans must therefore be stealthy. They can evade detection by hiding in the enormous state space of modern integrated circuits (ICs) and snooping for an attacker-defined secret value, or by waiting on a set of conditions that have a low probability of occurring during benign workloads.

The trojan’s payload defines the malicious function delivered when the trigger conditions are met, commonly by overriding a benign signal. Payloads typically target security-critical control signals whose corruption breaches confidentiality (e.g., leaking cryptographic keys) [23, 48], integrity (e.g., flipping the user/kernel privilege bit) [44, 82], or availability (e.g., a kill-switch crashing the system) [15].
2.2 Memory Safety Guarantees
Programs written in safe programming languages like Java [33] and Rust [45] are guaranteed to be type-safe. Their type-safety ensures that accesses to memory via pointers always access the correct memory object (called the referent memory object [60]). The compiler for such languages may insert checks into the code to enforce type-safety at run-time; this is most notably done for array bounds checks [9, 46]. Because of these bounds-checks, type-safe programs are invulnerable to memory safety attacks such as code injection [55], return-to-libc [71], and return-oriented programming (ROP) [58] attacks which corrupt control data (such as function pointers and return addresses) to divert control flow to code of the attacker’s choosing.
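To make the mechanism concrete, the following minimal sketch (our own illustrative example, not code from any cited system) shows the implicit check that guards slice indexing in Rust:

```rust
// Minimal sketch: the Rust compiler guards the indexing below with an
// implicit bounds check, conceptually `if i >= buf.len() { panic!() }`.
fn store(buf: &mut [u8], i: usize, b: u8) {
    buf[i] = b; // an out-of-bounds `i` panics instead of corrupting memory
}

fn main() {
    let mut buf = [0u8; 4];
    store(&mut buf, 2, 0x41); // in bounds: succeeds
    // store(&mut buf, 9, 0x41); // would panic: index out of bounds
}
```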
To maintain safe operation, compilers for type-safe languages rely on hardware to correctly implement the machine instructions (also referred to as machine code and native code) that implement the run-time checks. If the processor somehow modifies the behavior of these instructions, the run-time checks no longer work, and the program, even though its source code is type-safe, is now vulnerable to memory safety attacks.
3 Threat Model
We adopt a well-studied design-time trojan threat model [31, 34, 38, 65, 69, 73, 76, 85, 86]. The attacker’s goal is to place a design-time trojan in a system-on-chip (SoC) that remains undetected through verification testing and is placed on a deployed machine. This machine deploys software written in type-safe programming languages such as Rust [45], Java [33], Go [11], C# [12], and Kotlin [17]; the software is therefore protected by run-time checks inserted by the type-safe language compiler.
Attackers attempting to place design-time trojans know (or can reverse-engineer) the high-level behavior of hardware components such as branch predictors, decoders, computational execution units, load-store queues, reorder buffers (ROBs), etc. Consistent with prior work [31, 34, 44, 52], we assume that attackers can modify the HDL-level, RTL-level, and netlist-level hardware designs. Consistent with modern IP reuse protection trends, third-party IP (3PIP) hardware designs are shared as closed-source (or black-box) designs [28, 32, 80].
We assume that attackers forgo modifications to security-oriented hardware components (such as user/kernel privilege separation logic, memory-protection units (MPUs), trusted execution environment (TEE) logic, cryptographic cores, etc.), and instead target general-purpose hardware (such as branch-predictors, ROBs, etc.). The attackers’ goal when maliciously altering hardware designs is to inject hardware footholds which enable malicious software to compromise high-level security guarantees.
Attackers then attempt to interface with software running on the deployed system. An attacker can exercise the trojan in a wide variety of use cases, including: victim software colocated with malicious software as separate guests on a single virtual-machine server host; interfacing with a victim web server over the internet; or malicious device-driver software running within a sandbox. We assume that all victim software is written in safe programming languages and that the compilers for these safe programming languages inject the appropriate run-time checks.
Further, we adopt a threat model identical to those assumed in memory-safety research [13, 14, 55, 59, 64, 67, 83]. We assume that software exposes attacker-controllable variables that control the contents of run-time checked regions of memory. An attacker aims to undermine a compiler-injected run-time check and consequently exploit a memory-safety error to change the program’s control flow. We assume that the attacker has some knowledge of the memory layout of the victim program and can further identify gadgets necessary to launch control-flow hijacking attacks. An attacker would use this information to prepare a payload to deliver to the deployed victim software.
4 Attacking Safe Programming Languages
Modern hardware trojans typically target processor components that implement hardware-enforced security policies (such as user/kernel privilege separation logic [44, 82], memory protection logic [52], cryptographic cores [48, 51, 56], and trusted execution logic [34]). However, we observe that compilers for type-safe programming languages utilize “general-purpose” hardware to implement memory-safety run-time checks.
Straightforward attacks on application-implemented security policies from within hardware are highly inflexible; if the malware hard-coded application-specific information within the processor, the malware would likely break when the application is updated. Jinn trojans attack compiler-based enforcement mechanisms, which use instruction sequences that remain identical across changes to an application’s implementation.
Compiler-injected safety checks prevent programs from performing unsafe operations at run-time; without these run-time checks, programs may have exploitable memory-safety bugs. We recognize these injected run-time checks as repetitive instruction sequences that implement security-enforcement mechanisms. Instruction sequences that implement these checks recur with a consistent, recognizable structure across programs.

We show in Section 7 that such bounds-checks are sufficiently consistent to be encoded in hardware as a trigger that reliably attacks critical bounds-checks in diverse software contexts. By tampering with run-time checks, trojans enable software to violate safety properties such as memory-safety. The vulnerabilities induced by trojans that tamper with run-time checks expose programs written in type-safe languages to a wide scope of memory-safety attacks [67], such as sophisticated control-flow hijacking attacks [27, 29, 59, 63, 70, 79] and data leaks [50, 64].
A trojan that attempts to tamper with these safety checks must recognize which instructions in the dynamic instruction stream flowing through the processor’s pipeline correspond to the compiler-injected bounds check. The trojan must then usefully tamper with those instructions, delivering a hardware payload that forces a failing safety check to pass while allowing stable execution to resume. After the trojan disables the safety check, an attacker must exploit the resulting memory-safety error. Figure 1 illustrates this process. First, malicious hardware recognizes the operation of an imminent bounds check and delivers a payload that causes the bounds check to pass (when, on benign hardware, it would fail); then, attacker-controllable and maliciously crafted inputs cause the victim program to access memory outside the bounds of the buffer. In Figure 1, step 2 illustrates a software payload that overwrites the return address of the current frame on the call stack to initiate a control-flow hijacking attack; the attack is launched upon execution of the next return instruction.
Machine code lacks the high-level information about the purpose of each instruction; this is often referred to as the semantic gap [74]. A trojan must distinguish between the cmp instructions that implement a bounds check from any others, such as those that implement if statements or looping conditions. To overcome this challenge, we have designed two different triggers that allow Jinn trojans to identify which cmp instruction to tamper; one such trigger is novel.
5 Jinn Trojans
We characterize the class of Jinn trojans by the payload that they deliver. Jinn trojans thwart the software-level security of SoCs by tampering with the native code of compiler-injected run-time checks.
Figure 2 illustrates a bounds check generated by the Rust compiler for the x86 instruction set. The bounds check comprises several instructions that compute the index into the buffer being accessed (line 2), compare the index against the length (line 6), and jump to (or past) the error handler (lines 14 and 15). By tampering with bounds checks, Jinn trojans thwart the language-level safety guarantees provided by the compiler and open programs up to a broad range of memory-safety exploitation techniques.
5.1 Hardware Payloads
To tamper with the bounds check, the trojan may inject a variety of payloads. Considering the instruction sequence in Figure 2, multiple hardware payload designs can disable the array bounds check. For example, a trojan may tamper with the immediate operand of the compare instruction (cmp) on line 8 so that it represents a much larger number; this causes the bounds check to operate as if the buffer were much larger than it really is. Another payload may tamper with the status flags set by the cmp instruction that are subsequently read by the setb instruction (line 8); this causes the conditional jump (jne) instruction to behave as if the index (stored in %rax) were within the bounds of the buffer. Further, the jne (jump-not-equal) opcode could be tampered with to behave like a je (jump-equal) instruction, incorrectly transferring control to the buffer-access code. Each of these payload designs allows an out-of-bounds index to proceed to the buffer access, equivalently exposing a memory-safety vulnerability. As a consequence, the completed indexing operation will read or write memory outside the buffer, allowing input to drive the program into a state in which it reads and leaks sensitive information, or writes and corrupts critical program state such as return addresses or function pointers.
5.2 Hardware Trigger
Ideally, the trojan payload should only be delivered when a bounds checking routine is executing. Consider a payload that corrupts a cmp instruction’s immediate operand. The trojan’s trigger must therefore distinguish between a bounds checking cmp and cmp instructions used in other parts of the same program or within other programs and the operating system.
Jinn trojans significantly benefit from trigger designs that precisely distinguish tampered instructions in the context of run-time checks from other non-critical occurrences of the same instructions.
Several trigger mechanisms enable trojans to make this distinction between execution contexts with various trade-offs, and we discuss two. We first discuss an interactive trigger design that loads identifying information for the victim run-time check’s execution context, and second, a trigger that encodes a signature of the run-time check instruction sequence. Both these triggers provide the trojan with the precision necessary to identify instructions belonging to a bounds check.
5.2.1 Interactive Trigger
A trojan can use the memory address of the code stored in memory to precisely identify a particular bounds checking routine. As the threat model (Section 3) explains, we assume that an attacker is able to learn information about the memory layout of a program and that the attacker can learn the address of a specific bounds checking instruction.
The trojan can implement a covert interface for accepting this address by tampering with an additional instruction (for example, an add instruction) that takes two 64-bit operands where the first operand accepts the tampered bounds checking instruction address value, and the second operand accepts a secret value that tells the trojan to store the first operand into internal state that records the program counter value on which to trigger the payload. Due to the interactivity of this trigger, it’s easier to reason about its capabilities with an attacker that can run arbitrary code with local access to the tampered machine. For example, an attacker that is operating within a guest VM on a cloud server may want to attack other guest VMs dispatched to the same server.
This interactive trigger implements a two-stage design, where the initial state of the trigger waits for a tampered instruction’s address (the add instruction in the previous example), and the secondary phase waits for the program counter to match the internally stored address, and for the instruction at this address to match the instruction opcode of the instruction to be tampered (e.g., the cmp instruction in the example in Figure 2).
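A minimal software model of this two-stage trigger follows (illustrative only; our actual implementations are in gem5’s C++ and in Chisel, and the secret constant and names here are hypothetical placeholders):

```rust
// Illustrative model of the two-stage interactive trigger; the secret
// constant is a hypothetical stand-in for a design-time value.
const SECRET: u64 = 0x5EC2_E7C0_DE04_2A11;

#[derive(Default)]
struct InteractiveTrigger {
    target_pc: Option<u64>, // stage 1: armed once the handshake is seen
}

impl InteractiveTrigger {
    // Stage 1: snoop every in-flight `add`; if the second operand is the
    // secret, record the first operand as the victim instruction address.
    fn observe_add(&mut self, op1: u64, op2: u64) {
        if op2 == SECRET {
            self.target_pc = Some(op1);
        }
    }

    // Stage 2: fire when the program counter matches the stored address
    // and the instruction there has the expected opcode (e.g., `cmp`).
    fn should_fire(&self, pc: u64, opcode_matches: bool) -> bool {
        self.target_pc == Some(pc) && opcode_matches
    }
}

fn main() {
    let mut t = InteractiveTrigger::default();
    t.observe_add(0x40_1234, SECRET);        // attacker handshake
    assert!(t.should_fire(0x40_1234, true)); // payload delivered here
}
```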
5.2.2 Run-time Check Encoded Trigger
Alternatively, a trojan can recognize an incoming bounds check by observing the data flow pattern corresponding to a bounds check. We observe that the compiler predictably generates bounds checking instruction sequences that trojans can reliably tamper with to hijack the program. By encoding logic to recognize the dataflow path of key data used to decide the jump target for the bounds check, and the associated instruction signatures, trojans can anticipate incoming bounds checks. Section 7.1.2 discusses this in further detail, and Section 9.3 presents our experiments to empirically verify the sensitivity and resilience of such a trigger.
This trigger obviates the additional reconnaissance step of identifying a target instruction address. Rather, it identifies and tampers with all executions of the encoded bounds checks. We observe that, under benign workloads for safely written code, bounds checks are expected to pass; consequently, delivering the payload on a bounds check that passes induces no malicious behavior, leaving the trojan undetected. However, code that intentionally causes a bounds check to fail, such as compiler test suites, may detect this trojan; this happens only in the unlikely scenario where both the victim software and the compiler’s test suite run on the same deployed system. If an attacker anticipates this risk, the trigger design can be augmented with additional conditions, such as a counter of rare workload events like floating-point exceptions [76, 85, 86], or another metric-based characterization of the victim program.
6 Launching the Software Attack
To successfully exercise a Jinn trojan, an attacker must first prime the victim program’s state so that the vulnerable state produced by the hardware trojan can be exploited immediately and productively; this comprises reconnaissance of the gadgets used in the software payload and delivery of the corresponding malicious inputs to the victim program. Next, as the victim program executes, the trojan’s trigger recognizes a targeted safety check (as Section 5.2 discussed) and delivers the hardware payload. Finally, the previously injected input takes effect: the memory corruption permitted by the disabled safety check launches the attack.
Figure 1 illustrates a victim program’s call stack. The buffer access in the snippet of code is vulnerable to exploitation with Jinn trojans. When i+j is greater than the length of the array, dereferencing the buffer would access memory outside its bounds; bounds checks prevent these invalid run-time accesses (denoted with the red lines). An attacker attempting to hijack this victim program must exercise the Jinn trojan to corrupt memory outside of buffer.
**Step 0: Reconnaissance**
The attacker must perform reconnaissance steps to construct the payload that will be injected into the program. The first step is for the attacker to identify an exploitable buffer access, which is characterized by a couple of properties. The access must depend on a compiler-injected bounds check.¹ Further, the buffer access must operate on attacker-controllable data. Depending on the intended software payloads, either the index, or both the index and the data, must be attacker-controllable to launch memory-corruption attacks such as buffer overflows and overreads; this step is reminiscent of the reconnaissance performed when exploiting traditional memory-safety vulnerabilities [67].

¹This needs special consideration since compilers can optimize away redundant bounds checks.
As Section 3 explains, we assume that an attacker is capable of performing the necessary reconnaissance by using tools like GDB [2], angrop [1], and ROPgadget [61] to learn information about a binary’s memory layout and identify gadgets useful in launching control-flow hijacking attacks.
Address Space Layout Randomization (ASLR) [68] deployment may complicate a Jinn attack. ASLR is a common defense that randomizes the base addresses of several data locations (such as the stack, heap and code). Randomizing the location of a bounds-checking instruction’s address necessitates an additional reconnaissance step when using the interactive trigger (Section 5.2.1). ASLR is often thwarted by memory-disclosure attacks [64] which commonly rely on other memory safety vulnerabilities (such as buffer overreads [64]) to disclose the layout of the victim process. Unfortunately, within the domain of safe programming languages, this reliance is not feasible. However, attackers that can run arbitrary code on the deployed machine can launch cache-side channel attacks to learn the memory layout of a program [22, 42, 62]. For example, if the victim is a type-safe operating system (e.g. Redox [4]), an attacker would run timing-based cache side-channel attacks to reveal code location offsets for kernel code (e.g. system call handlers) [42] and, consequently, expose the ASLR offset.
**Step 1: Priming the Victim Program**
Following identification of an exploitable buffer access and gadgets to construct the control-flow hijacking attack, the attacker must construct the corresponding software payload bytes to be delivered to the victim process. The software payload bytes are delivered preemptively, anticipating that the Jinn trojan will induce a memory-safety vulnerability.
**Step 2: Trigger Sequence**
As the victim processes the software payload bytes, the attacker must concurrently run a triggering sequence to engage the trojan on the upcoming bounds check. Depending on the trigger design, this may include steps like priming a trojan-implemented counter or delivering a secret activation value to the trojan, as discussed in Section 5.2.
**Step 3: Delivering Software Payloads**
As Figure 1 illustrates, an attacker that exploits a faulty bounds check can corrupt memory locations that are reachable through the buffer[i+j]-dereferencing expression. An attacker with influence over the indexed location and data (as discussed in Section 3) can launch an attack that hijacks the control flow of the victim program. A simple example would corrupt the return address with a chain of gadget addresses to launch a ROP attack that spawns a shell program [59].
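A hedged sketch of how such a payload might be assembled (the padding length and gadget addresses are hypothetical stand-ins for reconnaissance results):

```rust
// Hypothetical sketch of Step 3's software payload: filler bytes up to
// the saved return address, then a little-endian gadget chain.
fn build_payload(pad_len: usize, gadget_chain: &[u64]) -> Vec<u8> {
    let mut payload = vec![b'A'; pad_len]; // reaches the return address
    for gadget in gadget_chain {
        payload.extend_from_slice(&gadget.to_le_bytes());
    }
    payload
}

fn main() {
    // Addresses below are placeholders recovered by tools like angrop.
    let chain = [0x40_1016u64, 0x40_2230, 0x40_10a2];
    let p = build_payload(72, &chain);
    assert_eq!(p.len(), 72 + 3 * 8);
}
```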
7 Attack Implementations
We implemented three prototype Jinn trojans on the x86 out-of-order core (O3CPU) running a full-system simulation on the gem5 simulator [26] and attacked Rust programs running on Ubuntu Base 20.04.
We compiled victim Rust applications for 64-bit x86. To ease the construction of a software payload that implements a generic ROP attack [59], we use a static relocation model that produces a non-position-independent executable.
Rust 1.58.1 [10] supports neither stack smashing protection [75] nor backward-edge control-flow protections such as shadow stacks [14]. Therefore, the memory-corruptions enabled by Jinn trojans attacking Rust executables are not inhibited by mitigation schemes commonly used to defend against memory-safety attacks in type-unsafe languages.
We implement our prototypes in the Issue-Execute-Writeback (IEW) stage of gem5’s O3CPU. The IEW stage comprises Issue, Execute, and Writeback routines; however, we model our code to isolate modifications to the core within the Execute logic. We limit all modifications to the core to the C++ function that defines the execute semantics of each instruction. We expect that the changes necessary to implement the Jinn trojan resemble modifications isolated to a single IP.
7.1 Variant 1: Attacking Indexed Buffers
Our first prototype attacks an indexed buffer that relies on an implicit bounds-check injected by the Rust compiler. We implement a trojan that uses the run-time check encoded trigger discussed in Section 5.2.2 and delivers a payload that corrupts the status flags set by a bounds-checking cmp instruction. We then exploit the memory safety error induced by our Jinn trojan to overwrite a return address on the call stack, hijacking the control flow of the victim Rust program and spawning a shell.
7.1.1 Payload
We choose a hardware payload that tampers with the status flags controlled by the `cmp` instruction within the bounds check. This payload implementation allows us to set a single bit, minimizing the payload’s complexity. The payload sets the carry flag in the EFLAGS register regardless of the result of the `cmp` instruction.
We observe that a payload that triggers on a `cmp` instruction can also thwart the bounds checks of other programming languages. In addition to Rust, Go implements bounds checks with a critical `cmp` instruction on x86. Consequently, interactive trigger designs (such as the one discussed in Section 5.2.1) allow the trojan to flexibly undermine both Rust and Go bounds checks, since the two languages emit identically structured checks.
Corrupting the carry-flag causes the bounds-check to transfer control to the code which accesses the array as opposed to the panic handler which should be executed when a bounds-check fails.
Figure 3 illustrates the effects of our trojan. Assuming that the register `%rax` holds an index value greater than the length of the accessed buffer (0x400), an untampered `cmp` instruction (line 2) would clear the carry-flag (i.e., set it to 0). By setting the carry-flag to 1, the subsequent `setb` instruction (line 5) writes 1 to the register operand `%al`. When `%al` is 1, the following `test` instruction (line 9) clears the zero-flag (ZF), and the subsequent `jnz` transfers control flow to the buffer-access code, passing the bounds check.
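The net effect of this payload can be modeled compactly (an illustrative model, not our gem5 code):

```rust
// Illustrative model of the Variant 1 payload: the outcome of the
// bounds check as a function of the (possibly tampered) carry flag.
fn bounds_check_passes(index: u64, len: u64, trojan_fires: bool) -> bool {
    // Benign hardware: cmp sets CF iff index < len (unsigned below).
    let cf = index < len;
    // Jinn payload: force CF to 1 regardless of the comparison.
    let cf = cf || trojan_fires;
    // setb materializes CF; test/jnz then branch to the buffer access.
    cf
}

fn main() {
    assert!(!bounds_check_passes(0x500, 0x400, false)); // benign: panics
    assert!(bounds_check_passes(0x500, 0x400, true));   // tampered: passes
}
```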
The Gem5 O3CPU divides the `cmp` instruction into two micro-ops:
- Load-immediate (li), which loads the statically identified buffer-length into a physical register
- Sub-flags (sub), which subtracts the loaded buffer length from the value in the index register and sets the status flags accordingly
Figure 3: Tampering with a Bounds Check
7.1.2 Trigger
We implement the trigger design discussed in Section 5.2.2 that encodes the sequence of micro-ops that precede the targeted `cmp` instruction in a bounds check.
Figure 4 lists the sequence of in-flight micro-ops used to implement the bounds check and array access as they pass through the IEW stage. We implement a trojan trigger that recognizes this sequence of micro-ops and then triggers the malicious logic that delivers the payload on the final micro-op of the sequence (sub).
This payload is delivered as part of the SubFlags micro-op. This temporary alteration to the behavior of the `cmp` instruction corrupts no other architectural state, posing no risk to stable execution after the payload is delivered.
Figure 4: In-flight micro-ops implementing the bounds check and array access

Out-of-order execution complicates the design of such a trigger; however, we recognize that the data dependences between the values calculated to perform the bounds check must be maintained. The arrows in Figure 4 point from a physical or architectural register operand to the preceding operation’s output on which it depends.
Figure 5 shows a partial illustration of the finite state machine that our trigger implements. The edges depicting state transitions correspond to the in-flight instructions (formatted as `macro-op: micro-op`) that the trojan observes. The trigger implementation encodes the micro-ops for instructions that compute critical values along the dataflow path to the payload-targeted sub-flags micro-op. In its initial state, the trigger snoops for an `add` instruction’s micro-op, `ld`, that attempts to load a value from the stack into a physical register. Subsequently, the trigger snoops for two micro-ops that may arrive out of order:
- The first micro-op of the `cmp` instruction that we wish to tamper with, which loads an immediate value (the source-level buffer’s length) into a physical register
- The second micro-op of the `add` instruction, which adds the source-level index and offset.

At its final pre-triggered state, the trigger snoops for the second micro-op of the `cmp` instruction, which implements the subtraction that computes the status flags.
The trigger progressively advances its state as it observes instructions that match the bounds-checking sequence. Additionally, the trigger implements a decay counter that increments when it observes irrelevant instructions and resets the trigger to its initial state upon reaching a threshold. These resets correspond to backward edges in Figure 5 from each node back to the `Init/Reset` state; we omit these edges from Figure 5 for clarity, but we implemented them in our trigger logic. Trigger activation both delivers the payload and resets the trigger state back to the `Init/Reset` state.

In our experiments, we found that a decay threshold of seven micro-ops provided sufficient accuracy for detecting bounds-checking instruction sequences. The appropriate threshold depends on properties of the hardware design such as out-of-order execution, speculative components, and superscalar operation. To further improve the trigger’s precision, we implement logic that recognizes instructions that are contextually appropriate while a bounds check executes out of order; for example, the control-transfer instructions (`jnz` and `jmp`) that follow a bounds check to set up the buffer access do not affect the trigger’s decay counter.
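A condensed software model of this trigger follows (illustrative only; it flattens the out-of-order arrival handling into order-insensitive flags and omits the dataflow matching on register operands):

```rust
// Illustrative model of the run-time-check-encoded trigger with its
// decay counter; not the gem5 implementation. Micro-op names follow
// Figure 5's macro-op:micro-op labels.
enum Uop { AddLd, CmpLi, AddAdd, CmpSub, Other }

#[derive(Default)]
struct EncodedTrigger { add_ld: bool, cmp_li: bool, add_add: bool, decay: u8 }

impl EncodedTrigger {
    const DECAY_THRESHOLD: u8 = 7; // from our experiments

    /// Returns true when the payload should fire on this micro-op.
    fn observe(&mut self, uop: Uop) -> bool {
        match uop {
            Uop::AddLd => self.add_ld = true,
            // These two may arrive out of order; track them independently.
            Uop::CmpLi if self.add_ld => self.cmp_li = true,
            Uop::AddAdd if self.add_ld => self.add_add = true,
            Uop::CmpSub if self.add_ld && self.cmp_li && self.add_add => {
                *self = Self::default(); // fire and reset
                return true;
            }
            _ => {
                // Irrelevant micro-op: advance the decay counter.
                self.decay += 1;
                if self.decay >= Self::DECAY_THRESHOLD {
                    *self = Self::default();
                }
            }
        }
        false
    }
}

fn main() {
    let mut t = EncodedTrigger::default();
    for u in [Uop::AddLd, Uop::CmpLi, Uop::Other, Uop::AddAdd] {
        assert!(!t.observe(u));
    }
    assert!(t.observe(Uop::CmpSub)); // payload fires on the sub-flags uop
}
```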
7.1.3 Exploit
The victim program implements a simple procedure that copies bytes from standard input to a statically allocated buffer at a parameterized offset from the start of the buffer. We use angrop [1] to analyze the victim binaries for gadgets with which to construct a return-oriented programming (ROP) attack payload. As Figure 1 illustrates, the tampered bounds check allows an indexed buffer to be written outside its bounds. To exploit this vulnerability, we inject a sequence of bytes into the program. The program attempts to copy only as many bytes as fit within buffer; however, that limit is never enforced, since the Jinn trojan prevents the bounds check from ever failing. Instead, the program keeps reading bytes until the input is exhausted. We supply a maliciously crafted sequence of bytes that overwrites the return address to launch a ROP attack [59] that spawns a shell.
7.2 Variant 2: Mispredicted Bounds Checks
To demonstrate the flexibility of payloads that can implement Jinn trojans, we implement an attack that corrupts the branch-resolution logic that squashes microarchitectural state on mispredicted branches. Several successful bounds checks in sequence train the microarchitectural branch predictor to transfer control to the memory access (as opposed to the panic handler). This variant tampers with branch resolution to prevent mispredicted bounds checks from being squashed, consequently committing memory accesses that would have failed their bounds checks.
7.2.1 Payload
The trojan’s payload targets the conditional jump instruction (`jnz`) that implements the control-flow transfer to memory-access code guarded by a bounds check. Microarchitectures that implement branch predictors speculatively execute past this branch instruction into either the memory access or the bounds-check panic code. By exploiting branch-predictor state trained to pass bounds checks, a payload that tampers with branch resolution can prevent the processor from squashing and rewinding incorrectly speculated bounds checks. The payload hijacks the logic that checks for misspeculation: when the trojan’s trigger is engaged, the payload suppresses the checkpointing logic that reverts microarchitectural state upon identifying misspeculation.
Figure 6: Attacking Mispredicted Bounds-Checks
As Figure 6 illustrates, repeated writes within the bounds of a bounds-checked buffer first train the branch-prediction unit (BPU). The subsequent access outside the bounds of the buffer is predicted to pass. Control is then speculatively transferred across the conditional jump to the buffer-access (instead of the panic handler for an out-of-bounds access). During branch-resolution, the microarchitecture discovers that it mispredicted the target of the conditional jump and incorrectly transferred control to the buffer-access code. At this point, a trojan payload prevents the microarchitecture from reverting to the checkpoint.
Following the delivery of this hardware payload, the microarchitecture continues stable execution as if the bounds-check had passed, exposing an exploitable memory-safety vulnerability.
7.2.2 Trigger
We implement the interactive trigger discussed in Section 5.2.1 to pair with the payload discussed in the previous section. We tamper with the operation of the add instruction to store the first operand into internal state if the second operand matches a secret hard-coded value that would be decided at design-time. Following that, the trojan snoops on in-flight instructions, searching for an instruction address that matches the stored target instruction address. Once it identifies a matching instruction, it validates that the target instruction is a jnz instruction and delivers the payload.
7.2.3 Exploit
The trojan-induced memory-safety attack must first train the branch predictor to speculatively pass bounds checks and branch to the guarded buffer access. An attacker that can control the victim software’s indices must therefore access several locations within the bounds of the buffer before the trojan can usefully deliver its payload. Exploiting the trained and tampered branch-prediction logic, our implementation of the attack overwrites memory locations outside of the buffer. As with the exploit in Section 7.1.3, we use this memory-safety vulnerability to launch a ROP attack that spawns a shell.
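The attacker-supplied access pattern might look like the following sketch (hypothetical; the warm-up count and index values are illustrative):

```rust
// Hypothetical sketch of Variant 2's input pattern: in-bounds accesses
// train the branch predictor to predict "check passes", then a single
// out-of-bounds index exploits the trojan-suppressed squash.
fn attack_indices(buf_len: usize, oob_index: usize) -> Vec<usize> {
    assert!(oob_index >= buf_len);
    let mut idxs: Vec<usize> = (0..buf_len).cycle().take(64).collect();
    idxs.push(oob_index); // mispredicted "pass"; squash suppressed
    idxs
}

fn main() {
    let idxs = attack_indices(16, 4096);
    assert_eq!(idxs.len(), 65);
}
```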
7.3 Variant 3: Attacking Rust Iterators
We implement a third attack, targeting Rust iterators, on the same gem5 O3CPU platform and pipeline stage as Section 7.1. Rust provides iterators as code patterns to idiomatically process sequences of data objects from a parameterized data structure. They differ from indexed buffers in that they expose no interface for random access within the set of elements. Instead, elements are fetched from iterators sequentially, and library code ensures that an internal pointer marking the current element never exceeds the bounds of the data structure. This attack exercises a Jinn trojan that implements an interactive trigger (discussed in Section 5.2.1) to compromise the comparison of an iterator’s internal current-element pointer against the bounds of the structure.
7.3.1 Payload
The payload of this Jinn trojan variant targets the cmp instruction that implements the Rust iterator’s bounds-check. This “bounds-check” is a Rust source-specified bounds-check. Rust ships this iterator implementation as part of its core crate (set of libraries). Its implementation differs from the bounds checks discussed previously that are inserted by the compiler during code generation. However, it maintains the same properties that make it an ideal candidate for tampering by a trojan: it’s consistently structured and used across a variety of applications.
Figure 7 shows a source-level implementation of an iterator [8] in Rust. Source-line 1 declares and instantiates the mutable iterator (iter) with which the Jinn trojan tampers. At line 3, a for loop statement iterates over elements in the iterator. The body of the loop (lines 5-9) consists of a statement that copies elements from another iterator (input_iter) into the local buffer. A trojan that tampers with the for loop can cause the assignment at line 5 to write to elements beyond the bounds of the buffer.
Figure 8 shows the machine code that implements a portion of the iterator code, defined in Rust’s core library and called at source-line 3 in Figure 7. The mov instructions, which load the current-element pointer (line 2) and the iterator’s bounding pointer (line 4), prepare operands for the cmp instruction at line 7. This comparison sets condition flags that the following instruction uses to transfer control depending on whether the iterator still contains elements. Each time an element is extracted, the iterator updates its internal state, incrementing the current-element pointer stored in the architectural register %rax. Upon reaching the end of the structure, the condition codes set by the cmp instruction (line 7) cause the following jump-equal (je, at line 10) to transfer control to code that handles an empty iterator.
```rust
let iter = buffer.iter_mut();
// Iterate through local buffer
for elem in iter {
    // Copy elements from input iterator
    *elem = match input_iter.next() {
        Some(x) => *x,
        // "input_iter" is empty
        None => break,
    };
}
```
Figure 7: Iterator Implementation in Rust
Figure 8: Iterator Pointer Comparison Machine Code
7.3.2 Trigger
We implement the interactive trigger discussed previously (Sections 5.2.1 and 7.2.2). An attacker interfaces with the trojan by executing an `add` instruction whose second operand matches the hard-coded secret; the trigger stores the first operand as the address of the instruction to be tampered with. The trigger then targets the `cmp` instruction at that address to deliver the payload discussed in the previous section.
7.3.3 Exploit
The vulnerability exposed by this Jinn trojan variant resembles our previous variants (Sections 7.1.3 and 7.2.3) but with a slight difference. Since the iterator doesn’t expose an indexing interface, the attack must overwrite all bytes in memory that are located between the buffer and the return address. As Figure 1 illustrates, other program data for the current frame, such as local variables and function arguments, may be located between the buffer and the return address.
In our experiments, the Rust compiler typically places the iterator’s buffer within the frame such that this sequential overwrite can reach the saved return address.
We implement a Jinn payload that tampers with the condition code for the `cmp` instruction. The payload clears the zero-flag (ZF) used by the `je` instruction, and this allows the iterator to increment its current-element pointer to memory addresses that are beyond the bounds of the buffer. To exploit this vulnerability induced by the trojan, an attacker that interfaces with the victim program can deliver inputs and use the Jinn-tampered `for` loop to write maliciously crafted exploit-bytes to corrupt critical memory locations such as a frame’s return address.
8 RTL Prototype
To get a more accurate representation of the complexity of Jinn trojans, we evaluate an RTL implementation of the Variant 1 payload (Section 7.1) using the interactive trigger (Section 5.2.1). We implemented this on the RISC-V Berkeley Out-of-Order Machine (BOOM) core [5].
We implement this trojan by adding three lines of Chisel [20] code to the ALU within the core’s execute stage. These three lines comprise logic that: (1) declares a 64-bit register; (2) implements the trigger by snooping for a secret value as the first operand to an `add` instruction and storing the second operand to the declared register if the first operand matches; and (3) delivers the payload to a target RISC-V `bltu` instruction when the target instruction’s address matches the value stored by the trigger.
We run a bare-metal application atop the tampered core to verify the functionality of the implementation. The application comprises a C program that calls to linked Rust functions that implement bounds-checks. We verify our RTL Jinn trojan successfully allows an attacker to deactivate the memory-safety guarantee in the Rust functions and maliciously divert control flow to another target function by corrupting the return address.
9 Evaluation
We evaluate the complexity of our Jinn trojan prototypes implemented within the gem5 out-of-order core (O3CPU) and the RISC-V BOOM core, as well as the sensitivity of the run-time check encoded trigger against other large code bases. Further, we discuss a couple of potential real-world attack vectors in third-party code.
9.1 Gem5 Evaluation
To measure the complexity of our prototype gem5 trojans, we counted source-lines of code (SLOC) using SLOCCount 2.26 [7]. Table 1 shows the results; Table 2 lists the complexity of the internal state of our trojan implementations.
Variant 1, which implements the run-time check encoded trigger,² trades storage complexity for increased logical complexity in comparison with Variants 2 and 3, because Variant 1 requires manual encoding of each of the instructions within the run-time check. In contrast, Variants 2 and 3 use significantly more dynamic storage with less logical complexity. However, as Section 5.2 discusses, the attacker’s capabilities dictate which trigger is best.

²The run-time check encoded trigger is abbreviated to RTC-trigger.
Untriggered operation of the trojan has no impact on architectural state and, consequently, does not affect the dynamic instruction stream post-deployment. While gem5 does not expose timing perturbations due to additional logic within the pipeline, our RTL evaluation (Section 9.2) measures such effects directly.
Table 1: Jinn Trojan Source Lines of Code (SLOC)

| Variant   | Component           | C++ SLOC (count) |
|-----------|---------------------|------------------|
| Baseline  | Gem5 O3CPU          | 16,626           |
| Variant 1 | Carry-flag Payload  | 2                |
|           | RTC-encoded Trigger | 63               |
| Variant 2 | BPU Payload         | 1                |
|           | Interactive Trigger | 30               |
| Variant 3 | Carry-flag Payload  | 2                |
|           | Interactive Trigger | 30               |
Table 2: Gem5 State Complexity

| Variant   | Stateful Component | Complexity (bits) |
|-----------|--------------------|-------------------|
| Variant 1 | Trigger Stages     | 4                 |
|           | Decay Counter      | 4                 |
|           | **Total**          | **8**             |
| Variant 2 | Trigger Stages     | 1                 |
|           | Target Address     | 64                |
|           | **Total**          | **65**            |
| Variant 3 | Trigger Stages     | 1                 |
|           | Target Address     | 64                |
|           | **Total**          | **65**            |
9.2 RTL Evaluation
We use the Chipyard [18] VLSI flow with BOOM core configuration defaults and Cadence plugins for Genus and Joules to measure power consumption post-synthesis. Table 4 lists the power consumption of the baseline BOOM core and the same core tampered with a Jinn trojan. We observe very low power overheads (on the order of 0.1%); these are unlikely to cause tampered cores to exceed design-time power budgets.
Further, Jinn trojans do not affect architectural-level performance: they do not alter benign execution during untriggered operation, and they are small enough to fit within existing path-delay constraints and therefore incur no extra operating cycles. Additionally, we use Genus to verify that our RTL implementation does not add enough logic to place it on the critical path under the default timing constraints of the BOOM core.
Design-time trojan complexity does not directly determine a trojan’s stealthiness, because mitigation schemes search for characteristic behavioral properties of trojan logic irrespective of the size of logical blocks. Trojan complexity is therefore not a useful metric for comparing evasiveness with other design-time trojans; rather, complexity metrics demonstrate the effort an attacker must expend to implement a trojan design. So, while power consumption and gate counts do not influence the stealthiness of design-time trojans (unlike fabrication-time trojans [23, 48, 82]), we nonetheless provide these details to (1) communicate the complexity of our Jinn trojan’s implementation, (2) demonstrate that it insignificantly impacts performance requirements, and (3) show the low attacker effort necessary to implement such trojans.
9.3 Trojan Evasiveness
A successful trojan must evade the full range of trojan mitigation schemes to successfully tamper with a deployed system. As our threat model in Section 3 explains, trojan detection schemes can analyze netlist-level and RTL designs from untrusted third-party intellectual property (3PIP) to identify suspicious circuitry. Since manual inspection techniques aren’t scalable to larger designs, and attackers can exercise design-obfuscation schemes [19], we focus on automated schemes. Trojan-mitigation schemes broadly analyze hardware and search for suspicious (redundant) logic [38, 85], specialized triggering mechanisms [49, 73, 76], and information flow properties [25, 37, 39, 40, 53].
UCI [38], FANCI [76] and VeriTrust [85] are all defeated by trigger transformations [86] to avoid exhibiting suspicious properties. We rely on such transformations to Jinn trojan RTL trigger designs to evade detection from these analyses.
Bomberman [73] analyzes hardware designs for Ticking-Timebomb Triggers (TTTs): triggers that monotonically count system events such as page faults. Our interactive trigger (discussed in Section 5.2.1) does not implement a counter or state machine that increments throughout execution. Likewise, our run-time check encoded trigger (discussed in Section 5.2.2) violates a defining property of TTTs by periodically resetting its state machine (it is therefore not a monotonic counter), and it thus avoids being flagged as suspicious. Both of the trigger designs discussed in Section 5.2 therefore evade Bomberman.
Information-flow analysis for hardware trojan detection attempts to verify lattice properties of annotated hardware designs. To consider applying traditional information flow analyses [25, 37, 39, 40, 53], tampered signals demonstrated in this paper’s attacks, such as the status flags set by cmp instructions, must be assigned a security label and verified against integrity properties. It’s unclear how existing information flow analysis security labels and policies would be applied to mitigate Jinn trojans since Jinn trojans don’t tamper with conventionally high-security labels; no such clear demarcations exist yet for the logic that Jinn trojans tamper.
9.3.1 Trigger Limitations
The interactive trigger provides high accuracy for delivering the payload. It relies on hiding within a state space proportional to the size of the general-purpose registers. On a 64-bit system, this likely evades detection during verification testing, since it is infeasible to engage the trojan’s initial trigger by testing the full range of $2^{64}$ values across all the operands of multi-operand instructions. At 1 billion tests per second, testing all values for a single 64-bit register would take approximately 584 years.
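The arithmetic behind this estimate:

$2^{64}\ \text{values} \div 10^{9}\ \text{tests/s} \approx 1.84 \times 10^{10}\ \text{s} \approx 584\ \text{years}$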
The run-time check encoded trigger relies on the assumption that deployed software never fails a bounds check under benign inputs. This assumption can break when the deployed system carrying the trojan is the same system used to test or debug the victim software. If the trojan triggers while the software receives an input that causes an out-of-bounds memory access, the software will appear to exhibit a memory-safety error and will experience undefined behavior.
9.4 Real-World Code
To verify that Jinn trojans can attack real-world software, we identified vulnerable code in third-party Rust programs. As our threat model in Section 3 discusses, memory-safety attacks typically utilize an attacker-controlled variable to induce a program to write to a memory location outside the bounds of a pointer’s referent memory object.
We identify two real-world vulnerable Rust code sequences: one in the Rust-based operating system Redox [4] and another in the Rust-based web engine Servo [6].

Figure 9 is a code listing from the Redox source code (simplified for brevity) that encrypts file-system blocks. We observe that data is attacker-controllable, and chunks are moved from this input parameter into an object-local structure (aes_blocks). The push function call first checks the length of the aes_blocks structure and decides either to proceed with the memory write or to first “grow” (enlarge) the structure if it has reached capacity. Figure 10 lists the if-statement within the library-implemented code that the trojan would tamper with; more specifically, it would tamper with the underlying comparison instruction that the compiler generates to implement the check. A Jinn trojan can tamper with a branch within push so that the structure appears to have spare capacity when, in fact, it must be grown. The remaining code then erroneously writes past the end of the data buffer.
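A minimal sketch of the subverted capacity check (illustrative only; not the actual Redox or Rust core code):

```rust
// Illustrative model of the library-level capacity check inside a
// `push`-style operation (Figure 10); not the real Rust core code.
fn needs_grow(len: usize, cap: usize, trojan_fires: bool) -> bool {
    // Benign hardware: grow when the structure is full. The Jinn trojan
    // tampers with the underlying comparison so a full structure appears
    // to have spare capacity, and the subsequent write lands past the end.
    len == cap && !trojan_fires
}

fn main() {
    assert!(needs_grow(8, 8, false));  // benign: grow before writing
    assert!(!needs_grow(8, 8, true));  // tampered: write past the end
}
```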

Figure 11 similarly lists (simplified) code from Servo [6], a web engine written in Rust. This snippet is from an HTML5 tokenizer that parses attacker-controllable HTML.

The write to a local buffer exposes a similar attack vector for a Jinn trojan as the previous example. Both code snippets expose attacker-controllable interfaces through which an attacker can exploit a memory-safety vulnerability injected by a Jinn trojan. Due to the structure of this victim code, however, the interactive trigger (Section 5.2.1) is likely to be more effective, since the bounds check is implemented by a Rust core library, similar to the iterator attack discussed in Section 7.3.
9.5 Bounds Checking Instruction Sequences
To determine whether existing software could accidentally activate our trojan, we developed a binary analyzer that searches for the instruction sequences used to trigger our Jinn trojan. We use the angr [78] Python binary-analysis framework to implement the analyzer. We then used this tool to search for the bounds-checking instruction sequences (the sequence illustrated in Figure 5 and listed in Figure 2) in real programs. When observing instructions, the analyzer abstracts away details like particular memory offsets, constant values, and general-purpose register identifiers to accommodate differing memory layouts and register selection; for example, the analyzer searches for `add` instructions that load memory at an offset from the stack pointer (`%rsp`) and store to a general-purpose register (illustrated in Figure 12 in Appendix A). The analyzer found exceedingly few occurrences of the target instruction sequence in large code bases, with only a single false-positive trigger in the Apache web server (as listed in Table 5 in Appendix A).
While prototyping the attacks, we empirically observed no accidental triggers of the trojan from software other than the bounds check. Thus, we expect the Jinn trojan to reliably deliver the payload during a bounds check. While the run-time check encoded triggers provide an attack vector that relieves an attacker of a reconnaissance step, they are sensitive to compiler build flags such as optimization levels. Our experiments show that an attacker attempting to deploy such a trojan must ascertain the build flags used on the deployed system prior to trojan placement; however, this is likely to be the highest optimization level for performance-conscious deployments.
10 Possible Mitigations
In the short term, we believe that diversifying instruction selection for the run-time checks may help thwart Jinn trojans. Our malware leverages the fact that type-safe language compilers often emit the same code sequence for performing dynamic array bounds checks and type safety checks. If the compiler could insert different instruction sequences for different run-time checks, or if a dynamic loader could randomize the instructions used for run-time checks each time a program is loaded, it would be much more difficult for Jinn trojans to corrupt the execution of the run-time checks.
Longer term, we think a strong defense would be to identify the circuits within the processor that must be tamper-free to correctly implement instructions used by run-time checks. This analysis would enable more rigorous reasoning about which circuits are security-critical when accounting for software-enforced security policies; once identified, such security-critical IP could be designed in-house.
11 Related Work
Hardware trojans are typically organized into two broad categories: design-time trojans [34, 41, 44, 47, 86] and fabrication-time trojans [23, 31, 43, 48, 56, 82]. Design-time trojans are likely to be limited to smaller IP blocks that are typically outsourced, but they also have access to a higher-level description of the hardware at the HDL-level. Conversely, fabrication-time trojans can tamper with any portion of an SoC but are limited to the behavior of the hardware that can be discovered via reverse-engineering [56, 57].
While our prototype Jinn trojans are design-time trojans, we believe that creating fabrication-time Jinn trojans is also possible and relatively straightforward. We therefore compare Jinn trojans to previous work by reasoning specifically about the versatility of the trojan payloads.
Prior work on trojans attacking user/kernel isolation mechanisms, including the supervisor privilege bit [82], enables arbitrary privileged code execution [44]; by implementing footholds, Jinn trojans provide similar capabilities by allowing an attacker to hijack code written in safe programming languages, including type-safe operating systems [3, 16, 24, 81] and trusted execution environments (TEEs) [77].
Privilege escalation remains a powerful capability; however, design-time and fabrication-time defenses rely on identifying such critical signals [37, 39, 40, 53]. Privilege bits implement well-studied, critical hardware-enforced security, so trojan-mitigation schemes are well tuned to identifying trojans that attack them; layout hardening [72] and physical inspection [30] are two examples of such mitigation schemes. Jinn trojans, in contrast, do not attack hardware-enforced security signals and are hence stealthier while providing an attacker with similar capabilities. For similar reasons, payloads in cryptographic logic [23, 48] are caught by detection schemes [66] that target such hardware-enforced security. Jinn trojans can likewise enable attackers to leak keys from a process's address space by using a ROP attack to spill secret keys to an output vector (e.g., \texttt{stdout}).
Trojan attacks in the memory hierarchy [31, 43] enable three capabilities: fault injection, information leakage, and denial of service. The HarTBleed trojans [31] implement payloads against hardware-enforced security checks in the TLB to compromise page-table mappings. However, these attacks are limited to narrow attack scenarios in which the hardware hard-codes physical frame locations that must coincide with program-load-time secret data. In contrast, programs hijacked using Jinn trojans enable an attacker to launch a ROP attack that can arbitrarily read and write the victim’s memory contents.
Jinn trojans flexibly deliver the wide range of payloads described above with a single instantiation, since the complexity of the attack logic is pushed out of hardware and into the gadgets [59] of a hijacked victim program.
12 Conclusion
We presented Jinn trojans, a novel class of hardware trojans characterized by their payloads that attack safety guarantees provided by type-safe programming languages. Jinn trojans induce memory-safety vulnerabilities by compromising compiler-injected safety checks. We demonstrated the efficacy of this class of trojans by implementing end-to-end attacks that exercise Jinn trojans to compromise bounds checks within Rust programs to hijack the program’s control-flow and launch a shell. With Jinn trojans, we demonstrate that software-level security policies can be flexibly compromised by a trojan placed in traditionally non-security-critical hardware components, that is, components that are not responsible for implementing hardware-enforced security policies.
Acknowledgments
We thank our anonymous shepherd and reviewers for their insightful feedback. This work was supported by NSF Award CNS-1652280 and by the University of Rochester Computer Science Department.
References
[57] Shahed E. Quadir, Junlin Chen, Domenic Forte, Navid Asadizanjani, Sina Shahbazmohamadi, Lei Wang, John
Figure 12: Pattern Matching Criteria for Binary Analysis
Table 5 lists the number of matched instruction sequences in built binaries for several benchmark applications from the Phoronix Test Suite. Except for the first row, the applications are primarily written in C/C++. This experiment demonstrates that the bounds-check sequences of Rust programs, as encoded in the run-time check encoded trigger, are unlikely to be falsely matched by benign workloads.
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Matched Instruction Sequences</th>
</tr>
</thead>
<tbody>
<tr>
<td>Indexed-buffer victim</td>
<td>9</td>
</tr>
<tr>
<td>Nginx (2.0.0)</td>
<td>0</td>
</tr>
<tr>
<td>Apache (2.0.0)</td>
<td>1</td>
</tr>
<tr>
<td>Linux Kernel (5.4.0)</td>
<td>0</td>
</tr>
<tr>
<td>OS Bench (1.0.2)</td>
<td>0</td>
</tr>
<tr>
<td>OpenSSL (1.1.0)</td>
<td>0</td>
</tr>
<tr>
<td>Mcperf (1.1.0)</td>
<td>0</td>
</tr>
<tr>
<td>Memcached (1.6.9)</td>
<td>0</td>
</tr>
<tr>
<td>ipc-benchmark (1.0.0)</td>
<td>0</td>
</tr>
<tr>
<td>Leveldb (1.22)</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 5: Observed Runtime-check Instruction Sequences
Supporting User Adaptation in Adaptive Hypermedia Applications
Hongjing Wu, Geert-Jan Houben, Paul De Bra
Department of Computing Science
Eindhoven University of Technology
PO Box 513, 5600 MB Eindhoven
the Netherlands
phone: +31 40 2472733
fax: +31 40 2463992
e-mail: {hongjing,houben,debra}@win.tue.nl
Abstract
A hypermedia application offers its users a lot of freedom to navigate through a large hyperspace. The rich link structure of the hypermedia application can not only cause users to get lost in the hyperspace, but can also lead to comprehension problems because different users may be interested in different pieces of information or a different level of detail or difficulty. Adaptive hypermedia systems (or AHS for short) aim at overcoming these problems by providing adaptive navigation support and adaptive content. The adaptation is based on a user model that represents relevant aspects about the user.
At the Eindhoven University of Technology we developed an AHS, named AHA [DC98]. To describe its functionality and that of future adaptive systems we also developed a reference model for the architecture of adaptive hypermedia applications, named AHAM (for Adaptive Hypermedia Application Model) [DHW99]. In AHAM knowledge is represented through hierarchies of large composite abstract concepts as well as small atomic ones. AHAM also divides the different aspects of an AHS into a domain model (DM), a user model (UM) and an adaptation model (AM). This division provides a clear separation of concerns when developing an adaptive hypermedia application.
In this paper, we concentrate on the user modeling aspects of AHAM, but also describe how they relate to the domain model and the adaptation model. Also, we provide a separation between the adaptation rules an author or system designer writes (as part of the adaptation model) and the system's task of executing these rules in the right order. This distinction leads to a simplification of the author's or system designer's task to write adaptation rules. We illustrate authoring and adaptation with some examples from the AHS AHA.
Keywords: adaptive hypermedia, user modeling, adaptive presentation, adaptive navigation, hypermedia reference model
1. Introduction
Hypermedia systems, and Web-based systems in particular, are becoming increasingly popular as tools for user-driven access to information. Hypermedia applications typically offer users a lot of freedom to navigate through a large hyperspace. Unfortunately, this rich link structure of the hypermedia application causes some serious usability problems:
- A typical hypermedia system presents the same links on a page, regardless of the path a user followed to reach this page. When providing navigational help, e.g. through a map (or some fish-eye view), the system does not know which part of the link structure is most important for the user. The map cannot be simplified by filtering (or graying) out links that are less relevant for the user. Not having personalized maps is a typical navigation problem of hypermedia applications.
- Navigation in ways the author did not anticipate also causes comprehension problems: for every page the author makes an assumption about the foreknowledge the user has when accessing that page. However, there are too many ways to reach a page to make it possible for an author to anticipate all possible variations in foreknowledge when a user visits that page. A page is always presented in the same way. This often results in users visiting pages containing a lot of redundant information and pages that they cannot fully understand because they lack some expected foreknowledge.
Adaptive hypermedia systems (or AHS for short) aim at overcoming these problems by providing adaptive navigation support and adaptive content. Adaptive hypermedia is a recent area of research on the crossroad of hypermedia and the area of user-adaptive systems. The goal of this research is to improve the usability of hypermedia systems by making them personalized. The personalization or adaptation is based on a user model that represents relevant aspects about the user. The system gathers information about the user by observing the use of the application, and in particular by observing the browsing behavior of the user.
Many adaptive hypermedia systems exist to date. The majority of them are used in educational applications, but some are used for on-line information systems, on-line help systems, information retrieval systems, etc. An overview of systems, methods and techniques for adaptive hypermedia can be found in [B96]. At the Eindhoven University of Technology we developed an AHS [DC98] out of Web-based courseware for an introductory course on hypermedia. In this system, called AHA, knowledge is represented with the same granularity as content: at the page level. In earlier versions of AHA, the user's knowledge about a given concept was a binary value: known or not known. The current version supports a more sophisticated representation in the sense that the knowledge level is represented by a percentage: reading a page can lead to an increase (or decrease) of the percentage. As part of the redesign process for AHA we have developed a reference model for the architecture of adaptive hypermedia applications: AHAM (for Adaptive Hypermedia Application Model) [DHW99], which is an extension of the Dexter hypermedia reference model [HS90, HS94]. AHAM acknowledges that doing "useful" and "usable" adaptation in a given application depends on three factors:
- The application must be based on a domain model, describing how the information content of the application (or "hyperdocument") is structured. This model must indicate what the relationship is between the high (and low) level concepts the application deals with, and it must indicate how concepts are tied to information fragments and pages.
- The system must construct and maintain a fine-grained user model that represents a user's preferences, knowledge, goals, navigation history and possibly other relevant aspects. The system can learn more about the user by observing the user's behavior. The user's knowledge is represented using the concepts from the domain model.
- The system must be able to adapt the presentation (of both content and link structure) to the reading and navigation style the user prefers and to the user's knowledge level. In order to do so the author must provide an adaptation model consisting of adaptation rules, for instance indicating how relations between concepts influence whether it will be desirable to guide the user towards or away from pages about certain concepts. Most AHS will offer a default adaptation model, relieving the author from explicitly writing these rules. In the original definition of AHAM [DHW99] we used the terms teaching model (TM) and pedagogical rules. These terms stem from the primary application of AHS's which is in education.
The key elements in AHAM are thus the domain model (DM), user model (UM) and adaptation model (AM). This division of adaptive hypermedia applications provides a clear separation of concerns when developing an adaptive hypermedia application.
The main shortcoming in many current AHS is that these three factors or components are not clearly separated:
- The relationship between pages and concepts is sometimes too vague (e.g. in [PDS98]). When an author decides that two pages each represent 30% of the same concept, there is no way of inferring whether together they represent 30%, 60%, or any value in between. On the other hand, in systems like AHA [DC98] the relation between pages and concepts is strictly one-to-one, which leads to a very fragmented user model without high-level concepts.
- The adaptation rules can often not be defined at the conceptual level but only at the page level. In AHA [DC98], ELM-ART [BSW96a] and Interbook [BSW96b] for instance the destination of a link is (in almost all cases) a fixed page, described through a plain HTML anchor tag. (The "teach me" button in Interbook is an exception.)
- There may be a mismatch between the high level of detail in the user model and the low reliability of the information on which an AHS must update that user model. The basic information available to most AHS is the time at which a user requests a page (through a WWW-browser). Many educational AHS compensate for the unreliable event information by offering (multiple-choice) tests. A few systems, including AHA [DC98], capture reading time by logging both requests for pages and the time at which the user leaves a page (even when jumping to a different Web-site).
In this paper we focus on the user modeling aspects of AHAM and the use of adaptation rules to generate adaptive presentations and to update the user model. We extend the results of [WHD99b] by separating adaptation rules from the specification of the execution of these rules.
This paper is organized as follows. In Section 2 we describe the AHAM reference model for adaptive hypermedia applications. In Section 3 we elaborate on user modeling and on the use of adaptation rules in AHAM, that is how to construct the user model, update the user model by observing the user's behavior, and how to make content adaptation and link adaptation depending on the user model. In Section 4 we use AHAM to describe the user modeling and adaptation features of the AHA system, before we conclude in Section 5.
2. AHAM, a Dexter-based Reference Model
In hypermedia applications the emphasis is always on the information nodes and on the link structure connecting these nodes. The Dexter model captures this in what it calls the Storage Layer. It represents a domain model DM, i.e. the author's view on the application domain expressed in terms of concepts.
In adaptive hypermedia applications the central role of DM is shared with a user model UM. UM represents the relationship between the user and the domain model by keeping track of how much the user knows about each of the concepts in the application domain.
In order to perform adaptation based on DM and UM an author needs to specify how the user's knowledge influences the presentation of the information from DM. In AHAM this is done by means of a teaching model TM consisting of pedagogical rules. In this paper we use the terms adaptation model (AM) and adaptation rules to avoid the association with educational applications. An adaptive engine uses these rules to manipulate link anchors (from the Dexter model's anchoring) and to generate what the Dexter model calls the presentation specifications. Like the Dexter model, AHAM focuses on the Storage Layer, the anchoring and the presentation specifications. Figure 1 shows the structure of adaptive hypermedia applications in the AHAM model.

In this section we present the elements of AHAM that we will use in Section 3 to illustrate the user modeling and adaptation.
2.1 The domain model
A component is an abstract notion in an AHS. It is a pair (uid, cinfo) where uid is a globally unique (object) identifier for the component and cinfo represents the component's information. A component's information consists of:
- A set of attribute-value pairs;
- A sequence of anchors (for attaching links);
- A presentation specification.
We distinguish two "kinds" of components: concepts and concept relationships. A concept is a component representing an abstract information item from the application domain. It can be either an atomic concept or a composite concept. An atomic concept corresponds to a fragment of information. It is primitive in the model (and can thus not be adapted). Its attribute and anchor values belong to the "Within-component layer" and are thus implementation dependent and not described in the model. A composite component has two "special" attributes:
- A sequence of children (concepts);
- A constructor function (to denote how the children belong together).
The children of a composite concept are all atomic concepts (then we call it a page or in typical hypertext terms a node) or all composite concepts. The composite concept component hierarchy must be a DAG (directed acyclic graph). Also, every atomic concept must be included in some composite concept. Figure 2 illustrates a part of a concept hierarchy.

An anchor is a pair (aid, avalue), where aid is a unique (object) identifier for the anchor within the scope of its component and avalue is an arbitrary value that specifies some location, region, item or substructure within a concept component.
Anchor values of atomic concepts belong to the (implementation dependent) Within-Component layer. Anchor values of composite concepts are identifiers of concepts that belong to that composite.
A specifier is a tuple (uid, aid, dir, pres), where uid is the identifier of a concept, aid is the identifier of an anchor, dir is a direction (FROM, TO, BIDIRECT, or NONE), and pres is a presentation specification.
A concept relationship is a component, with two additional attributes:
- A sequence of specifiers
- A concept relationship type.
The most common type of concept relationship is the type link. This corresponds to the link components in the Dexter model, or links in most hypermedia systems. (Links typically have at least one FROM element and one TO or BIDIRECT element.) In AHAM we consider other types of relationships as well, which play a role in the adaptation.
A common type of concept relationship is prerequisite. When a concept C1 is a prerequisite for C2, it means that the user should read C1 before C2. It does not mean that there must be a link from C1 to C2. It only means that the system somehow takes into account that reading about C2 is not desired before some (enough) knowledge about C1 has been acquired. Every prerequisite must have at least one FROM element and one TO element. Figure 3 shows a small set of (only binary) relationships, both prerequisites and links.

The atomic concepts, composite concepts and concept relationships together form the domain model DM of an adaptive hypermedia application.
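For illustration, these definitions map directly onto plain records; the sketch below is our own rendering (AHAM itself stays implementation-independent), with field names chosen to mirror the text.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Anchor:
    aid: str            # anchor identifier, unique within its component
    avalue: Any         # location/region/substructure within the concept

@dataclass
class Specifier:
    uid: str            # concept identifier
    aid: str            # anchor identifier
    dir: str            # "FROM", "TO", "BIDIRECT", or "NONE"
    pres: Any = None    # presentation specification

@dataclass
class ConceptRelationship:
    specifiers: List[Specifier]
    rel_type: str       # e.g. "link" or "prerequisite"
    attributes: Dict[str, Any] = field(default_factory=dict)
```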
2.2 The user model
An AHS associates a number of user model attributes with each concept component of DM. For each user the AHS maintains a table-like structure, in which for each concept the attribute values for that concept are stored. Section 3 describes the user model in detail. For now it suffices to know that because of the relationships between abstract concepts and concrete content elements like fragments and pages a user model may contain other attributes than simply a knowledge level. For instance, the user model may also store information about what a user has actually read about a concept or whether a concept is considered relevant for the user.
Since the user model consists of "named entities" for which we store a number of attribute/value pairs, there is no reason to limit these "entities" to concepts about which the knowledge level is stored and updated. Concepts can be used (some might say abused) to represent other user features, such as preferences, goals, background and hyper-space experience. For the AHS (or the AHAM model) the actual meaning of concepts is irrelevant.
2.3 The adaptation (teaching) model
The adaptation of the information content of a hyperdocument and of the link structure is based on a set of rules. These rules form the connection between DM, UM and the presentation (specification) to be generated [WHD99a].
We partition the rules into four groups according to the adaptation "step" to which they belong: IU initializes the user model, under control of Initialize-UM; UU-Pre updates UM before generating the page, under control of Update-UM-pre; GA generates the adaptation, under control of Adaptation; and UU-Post updates UM after generating the page, under control of Update-UM-post. The four algorithms control how the rules in each group work together: an algorithm triggers applicable rules (in some order) until no more rules can be applied or until applying rules would no longer incur any change to UM.
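As an illustration of this control loop, rules can be modeled as condition/action pairs over UM that an algorithm applies until a fixpoint is reached; the sketch below is our own representation, not AHAM's formal machinery.

```python
import copy

def run_group(rules, um):
    """Apply (condition, action) rules until UM stops changing."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(um):
                before = copy.deepcopy(um)
                action(um)
                if um != before:       # this application changed UM
                    changed = True

# Example: knowing C1 makes C2, for which C1 is a prerequisite, relevant.
rules = [(lambda um: um["C1"]["knowledge"] == "known"
                     and not um["C2"]["relevant"],
          lambda um: um["C2"].update(relevant=True))]
um = {"C1": {"knowledge": "known"}, "C2": {"relevant": False}}
run_group(rules, um)                   # um["C2"]["relevant"] is now True
```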
A generic adaptation rule is a rule in which (bound) variables are used that represent concepts and concept relationships. A specific adaptation rule uses concrete concepts from DM instead of variables. Other than that both types of rules look the same. The syntax of the permissible rules depends on the AHS. In Section 3 we give examples of adaptation rules, using an arbitrarily chosen syntax. In Section 4 we give examples of adaptation rules as they are implemented in the AHA system [DC98]. Generic adaptation rules are often system-defined, meaning that an author need not specify them. Such a rule may for instance define how the knowledge level of an arbitrary concept C_i influences the relevance of other concepts for which C_i is a prerequisite. Author-defined rules always take precedence over (conflicting) system-defined rules. (Some AHS do not provide the possibility for authors to define their own generic adaptation rules.) Specific rules always take precedence over generic rules.
While specific rules are typically used to create exceptions to generic rules they can also be used to perform some ad-hoc adaptation based on concepts for which DM does not provide a relationship. Specific adaptation rules must always be defined by the author.
The adaptation model AM of an AHS is the set of (generic and specific) adaptation rules.
An AHS does not only have a domain model, user model and adaptation model, but also an adaptive engine, which is a software environment that performs the following functions:
- It offers generic page selectors and constructors. For each composite concept the constructor is used to determine which page to display when the user follows a link to that composite concept. For each page the constructor is used for building the adaptive presentation of that page.
- It optionally offers a (very simple programming) language for describing new page selectors and constructors. Generic and specific adaptation rules (from UU-pre and GA) are used during page selection and construction.
- It performs adaptation by executing the page selectors and constructors. This means selecting a page, selecting fragments, sorting them, maybe presenting them in a specific way, etc. It also means performing adaptation to links by manipulating link anchors depending on the state of the link (like enabled, disabled, hidden, etc.).
- It updates the user model (instance) each time the user visits a page. It does so by triggering the necessary adaptation rules in UU-post. The engine will thus set the knowledge value for each atomic concept of displayed fragments of the page to a value that depends on a configurable amount (this could be 1 by default but possibly overridden by the author). It determines the influence on the knowledge value for page- and composite concepts. It also maintains other attribute values for each concept.
The adaptive engine thus provides the implementation-dependent aspects, while DM, UM and AM describe the information and adaptation at the conceptual, implementation-independent level. An adaptive hypermedia application is a 4-tuple (DM, UM, AM, AE), where DM is a domain model, UM is a user model, AM is an adaptation model, and AE is an adaptive engine.
3. User Modeling and Adaptation in AHAM
According to AHAM the AHS maintains a fine-grained user model that represents the state of the user’s features not only at the page level but also at the abstract conceptual level. It offers the ability to consider navigation history and other relevant user aspects as part of the user model UM. The maintenance of the relevant user aspects in UM is achieved by the application of the adaptation rules that are part of the adaptation model AM.
3.1 Representation of user features using (attribute/value) pairs
By definition adaptive hypermedia applications reflect some features of the user in the user model. This model is used to express various visible aspects of the system that depend on the user and that are visible to that user. Brusilovsky [B96] states which aspects of the user can be taken into account when providing adaptation. Generally, there are five user features that are used by existing AHS:
- knowledge
- user goals
- background
- hyperspace experience
- preferences
Almost every adaptive presentation technique relies on the user’s knowledge as a source of adaptation. The system has to recognize the changes in the user’s knowledge state and update its user model accordingly. Often the user’s knowledge is represented by an overlay model. This overlay model is based on a conceptual structure of the subject domain. Sometimes a simpler stereotype user model is used to represent the user’s knowledge: this means that the user is classified according to some stereotype. As many adaptation techniques require a rather fine-grained approach, stereotype models are often too simple to provide adequate personalization and adaptation. Overlay models on the other hand are generally hard to initialize. Acceptable results are often achieved by combining stereotype and overlay modeling: stereotype modeling is used in the beginning to classify a new user and to set initial values for the overlay model; later a more fine-grained overlay model is used. Using the AHAM definition for user model, it is fairly straightforward how a user’s knowledge state can be represented by associating a knowledge value attribute to each concept.
Apart from the concept’s identifier (which may be just a name) a typical AHS will store not only a knowledge value for each concept, but also a read value which indicates whether (and how much) information about the concept has been read by the user, and possibly some other attribute values as well. While the model uses a table representation, implementations of AHS may use different data structures. For instance, a logfile can be used for the read attribute.
Table 1 illustrates the (conceptual) structure of a user model for a course on hypermedia: the concepts Xanadu and KMS were at least partially learnt. The concept WWW, consisting of two sub-parts, is partially learnt because WWW-page1 has been read but WWW-page2 has not been read. One can see that WWW must be a composite concept that is not a page, because it is already partially learnt while it has not been read at all.
<table>
<thead>
<tr>
<th>concept name (uid)</th>
<th>Knowledge value</th>
<th>read</th>
</tr>
</thead>
<tbody>
<tr>
<td>Xanadu</td>
<td>well learned</td>
<td>true</td>
</tr>
<tr>
<td>KMS</td>
<td>learned</td>
<td>true</td>
</tr>
<tr>
<td>WWW-page1</td>
<td>well learned</td>
<td>true</td>
</tr>
<tr>
<td>WWW-page2</td>
<td>not known</td>
<td>false</td>
</tr>
<tr>
<td>WWW</td>
<td>learned</td>
<td>false</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Table 1: Example user model (instance).
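In an implementation, such a table maps naturally onto per-concept attribute/value records; the following is a minimal sketch of Table 1 (our representation, not AHA's actual data structures).

```python
# The user model of Table 1 as nested attribute/value pairs.
user_model = {
    "Xanadu":    {"knowledge": "well learned", "read": True},
    "KMS":       {"knowledge": "learned",      "read": True},
    "WWW-page1": {"knowledge": "well learned", "read": True},
    "WWW-page2": {"knowledge": "not known",    "read": False},
    "WWW":       {"knowledge": "learned",      "read": False},  # composite
}
```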
The second kind of user feature is the user's goal. The user's goal or task is a feature related to the context of the user's working activities rather than to the user as an individual. The user's goal is the most volatile of all user features. It can be considered a very important user feature for AHS. One representation of possible user goals uses a hierarchy (a tree) of tasks. Another representation of the user's current goal uses a set of pairs (Goal, Value), where Value is the probability that Goal is the current goal of the user. The latter representation perfectly matches the way in which AHAM models the user's state.
Two features of the user that are similar to the user's knowledge of the subject, but that functionally differ from it, are the user's background and the user's experience in the given hyperspace. By background we mean all the information related to the user's previous experience outside the subject of the hypermedia system. By experience in the given hyperspace we mean how familiar the user is with the structure of the hyperspace and how easily the user can navigate in it. Again, these features can be modeled in AHAM using concepts' attribute/value pairs.
For different possible reasons the user can prefer some nodes and links over others or some parts of a page over others. This is used most heavily in information retrieval hypermedia applications. In fact in most adaptive information retrieval hypermedia applications preferences are the only information that is stored about the user. User preferences differ from other user model components, since in most cases they cannot be deduced by the system. The user has to inform the system directly or indirectly about the preferences. AHAM's attribute/value pairs can again be used to model the user's preferences.
From the above descriptions we can conclude that although a user model needs to represent (five) very different aspects of a user, all of these kinds of aspects can be implemented as sets of concepts with associated attribute/value pairs. For presentation purposes it is not necessary to treat these different kinds of aspects in a different way, but for implementation purposes it is often needed to treat these in different ways in adaptive hypermedia applications.
The knowledge value of a concept can be a Boolean, discrete or continuous value depending on the choice of the author (or the properties of the AHS). By using a Boolean value, the knowledge about the concept can be either known or unknown.
By using a discrete value the knowledge about the concept can be one of a small set of values, like unknown, learnt, well learnt or well known. By using continuous values from the range of \([0..1]\), the value can more precisely describe the user's knowledge, and even describe the loss or decay of knowledge over time. In conclusion, AHAM's user model UM has enough expressive power to model all user features that current AHS take into account.
3.2 Changes in user features
In the previous subsection we discussed features that describe the user's state in the browsing process. Usually in adaptive hypermedia applications (as opposed to adaptable hypermedia applications, see [DHW99]), only the browsing behavior is observed in order to influence the adaptation. Basically, there are five ways in which the user features can change in an adaptive hypermedia application:
1. the user clicks on an anchor (and follows a link);
2. the user performs a test (explicitly);
3. information (about the user) is imported from an external testing system;
4. a user preference is (explicitly) set or declared by the user (initially);
5. a user preference is (automatically) inferred from the user's behavior.
Besides observing the browser behavior, the application can change the user features based on information that is explicitly imported from its environment or that is explicitly declared or implicitly inferred about the user's preferences.
These five different kinds of changes lead to five kinds of "rules" how to maintain the user features. The system can be made more author centered by including rules of types 2 and 3 (besides rules of type 1), while the application can become more user centered by including rules of types 4 and 5. It is also possible to choose a combination that suits the application.
3.3 Adaptation based on the user model
By maintaining the user model the system can infer how relevant aspects of the user change while the user is using the application and thus is using the adaptation. The adaptive engine realizes adaptive presentation and adaptive navigation (or link adaptation) according to the (adaptation) rules that are system-defined or written by the author and that depend on the user model.
Below we give a number of examples to show how adaptation rules are used to do adaptation. The syntax used for the rules is arbitrary and only exemplary. AHAM does not prescribe any specific syntax. Normally every AHS will provide its own syntax for defining adaptation rules.
Example 1 For atomic concepts (fragments) let us assume that the presentation specification is a two-valued (almost Boolean) field, which is either “show” or “hide”. When a page is being accessed, the following rule sets the visibility for fragments that belong to a “page” concept, depending on their “relevance” attribute-value.
\[
< \text{access(C)} \text{ and } \text{F IN C.children} \text{ and } \text{F.relevance = true} \Rightarrow \text{F.pres := show} >
\]
Here we simplified things by assuming that we can treat C.children as if it were a set, whereas it really is a sequence. It is common to execute rules for generating presentation specifications before generating the page, so this rule is in GA.
Example 2 The following rules set the presentation specification for a specifier that denotes a link (source) anchor depending on whether the destination of the link is considered relevant and whether the destination has been read before. For simplicity we consider a link with just one source and one destination.
\[
< \text{CR.type = link and CR.cinfo.dir[1]} = \text{FROM and CR.cinfo.dir[2]} = \text{TO} \text{ and CR.ss[2].uid.relevant = true} \text{ and CR.ss[2].uid.read = false} \Rightarrow \text{CR.ss[1].pres := GOOD} >
\]
\[
< \text{CR.type = link and CR.cinfo.dir[1]} = \text{FROM and CR.cinfo.dir[2]} = \text{TO} \text{ and CR.ss[2].uid.relevant = true} \text{ and CR.ss[2].uid.read = true} \Rightarrow \text{CR.ss[1].pres := NEUTRAL} >
\]
\[
< \text{CR.type = link and CR.cinfo.dir[1]} = \text{FROM and CR.cinfo.dir[2]} = \text{TO} \text{ and CR.ss[2].uid.relevant = false} \Rightarrow \text{CR.ss[1].pres := BAD} >
\]
These rules say that links to previously unread but “relevant” pages are “GOOD”. Links to previously read and “relevant” pages are “NEUTRAL” and links to pages that are not “relevant” are “BAD”. In the AHA system [DC98] this results in the link anchors being colored blue, purple or black respectively. In ELM-ART [BSW96a] and Interbook [BSW96b] the links would be annotated with a green, yellow or red ball. We can consider the actual presentation (the coloring of the anchors) as belonging to the Run-time Layer and thus outside the scope of AHAM. However, should we opt to include the color preferences for GOOD, NEUTRAL and BAD links in the user model then the translation of the presentation specification to the color could still be described using an adaptation rule. These rules are in GA also.
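Restated procedurally (a sketch only; AHAM prescribes no rule syntax, and the function below is just our reading of the three rules), the annotation logic is:

```python
# Presentation specification for a link anchor, per Example 2:
# relevant and unread -> GOOD; relevant and read -> NEUTRAL; else BAD.
def anchor_presentation(um, dest):
    if not um[dest]["relevant"]:
        return "BAD"
    return "GOOD" if not um[dest]["read"] else "NEUTRAL"
```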
3.4 Maintenance of user model
To record the reading history of the user and the evolution of the user’s knowledge, the system updates the user model based on the observation of the user’s browsing process. The rules that the author has defined in AM describe how to keep track of the evolution of the user’s knowledge. For the application of adaptation rules we assume that the \texttt{FollowLink} operation from the Dexter (and thus AHAM) model’s Run-time Layer results in a call to a \textit{resolver function} for a given specifier. In AHAM the resolver translates the given specifier to the uid of a composite concept component that corresponds to a page, or to a set of such uid’s. Which page exactly is selected depends on DM and UM. For the selected page an \textit{accessor function} is called, according to the Dexter model, which returns the (page) concept component that corresponds to the resolved uid. Then the rules for presentation are executed, as shown in Subsection 3.3.
Example 3 The following rule expresses that when a page is accessed the “read” user-model attribute for the corresponding concept is set to true:
\[
< \text{access(C)} \Rightarrow \text{C.read := true} >
\]
This rule is in \textit{UU-post}. It is the \textit{Update-UM-post} that will trigger other rules that have \textit{read} on their left-hand side in the same group.
Example 4 The following rule expresses that when a page is “relevant” and it is accessed, the knowledge value of the corresponding concept becomes “well-learnt”. This is somewhat like the behavior of Interbook [BSW96b].
\[
< \text{access(C)} \text{ and } \text{C.relevant = true} \Rightarrow \text{C.knowledge := well-learnt} >
\]
In Interbook, as well as in AHA [DC98], knowledge is actually updated before the page is generated. These rules thus are in \textit{UU-pre}. At the end of Section 4 we shall describe why this option is chosen, and which problems it creates. In general one wishes to have the option to base some adaptation on the knowledge state \textit{before} accessing a page and some adaptation on the knowledge state \textit{after} reading the page.
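Expressed in the condition/action form sketched earlier (again our own rendering; the `accessed` parameter marking the just-requested concept is a convention we introduce), Examples 3 and 4 become:

```python
# Examples 3 and 4 as rules over the user model.
def example_rules(accessed):
    return [
        # UU-post: accessing a page marks its concept as read.
        (lambda um: not um[accessed]["read"],
         lambda um: um[accessed].update(read=True)),
        # UU-pre: accessing a relevant page makes it well-learnt.
        (lambda um: um[accessed]["relevant"]
                    and um[accessed]["knowledge"] != "well-learnt",
         lambda um: um[accessed].update(knowledge="well-learnt")),
    ]
```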
Example 5 The following rule expresses that after a user has taken a test about a concept \( C \), his knowledge about concept \( C \) is changed (a rule of "type 2" from Subsection 3.2). Here, an action "test" is used that represents that a test has been taken. It is in UU-pre.
\[ < \text{test}(C) \text{ and } C.\text{test} > 60 \Rightarrow C.\text{knowledge} := \text{known} > \]
4. User Modeling and Adaptation in the AHA system
AHA [DC98] is a simple adaptive hypermedia system. We describe the properties of the version that is currently being used for two on-line courses and one on-line information kiosk, plus some features of the next version that is currently being developed.
- In AHA the domain model consists of three types of concepts: abstract concepts, fragments and pages. Concepts are loosely associated with (HTML) pages, not with fragments.
- The user model consists of:
- Color preferences for link anchors which the user can customize. (These preferences result in “non-relevant” link anchors to be hidden if their color is set to black, or visibly “annotated” if their color is set to a non-black color, different from that of “relevant” link anchors.)
- For each abstract concept, a knowledge attribute with percentage values. (100 means the concept is fully known). For pages and fragments there is no knowledge attribute value.
- For each page, a Boolean read attribute. (True means the page was read, false means it was not read.) AHA actually logs access and reading times, but they cannot be used in a more sophisticated way in the current version. For abstract concepts and fragments there is no read attribute value.
- AHA comes with an adaptation model containing system-defined generic adaptation rules. It offers a simple language for creating author-defined specific adaptation rules (but no author-defined generic rules).
The domain model can only contain concept relationships of the types that are shown below. An author cannot define new types. The influence of these relationships on the adaptation and the user model updates is defined by system-defined generic adaptation rules. In AHA all rules are executed before generating the page and are triggered directly by a page access, thus eliminating the need for propagation.
- When a page is accessed, its read attribute in the user model is updated as follows (it is in UU-pre):
\[ \text{< access}(P) \Rightarrow P.\text{read} := \text{true} > \]
- The relationship type \textit{generates} links a page to an abstract concept. A generates relationship between \( P \) and \( C \) means that reading page \( P \) generates knowledge about \( C \) (it is in UU-pre):
\[ \text{< access}(P) \Rightarrow C.\text{knowledge} := 100 > \]
This “generation” of knowledge in AHA is controlled by a structured comment in an HTML page:
\[ <!-- \text{generates readme } --> \]
This generates comment denotes that the concept readme becomes known when the page is accessed.
- The relationship type requires links a concept to a virtual composite concept that is defined by a (constructor which is a) Boolean expression of concepts. Although in principle this composite concept is unnamed, we shall use a “predicate” or “pseudo attribute of the page” to refer to it. \( P.\text{requires} \) is used as a Boolean attribute of which the value is always that of the corresponding Boolean expression. It is not a user model attribute as its value is always computed on the fly and not stored in the user model. A requires relationship is implemented using a structured comment at the top of an HTML page, e.g.:
\[ <!-- \text{requires \ (readme and intro) } --> \]
This example expresses that this page is only considered relevant when the concepts readme and intro are both known (100%). In AHA, links to a page for which \( \text{requires} \) is false are considered BAD, and reading such a page generates less knowledge than reading a GOOD page. Below we give the rules in GA that determine how the link anchors will be presented. They are very similar to the rules in Example 2 (Subsection 3.3):
\[ \text{< CR.type = link and CR.cinfo.dir[1] = FROM and CR.cinfo.dir[2] = TO and CR.ss[2].uid.requires = true and CR.ss[2].uid.read = false} \Rightarrow \text{CR.ss[1].pres := GOOD} > \]
\[ \text{< CR.type = link and CR.cinfo.dir[1] = FROM and CR.cinfo.dir[2] = TO and CR.ss[2].uid.requires = true and CR.ss[2].uid.read = true} \Rightarrow \text{CR.ss[1].pres := NEUTRAL} > \]
- The relationship type link only applies to pairs of pages in AHA. "Page selectors" that exist in AHAM in general are thus not needed (or possible) in AHA.
AHA allows author-defined specific adaptation rules only for the conditional inclusion of fragments in HTML pages. Structured HTML comments are used for specifying these rules. With a fragment F we can associate a "pseudo attribute" requires to indicate the condition, just like for whole pages. The syntax is illustrated by the following example:
<!-- if (readme and not intro) -->
... here comes the content of the fragment ...
<!-- else -->
... here is an alternative fragment ...
<!-- endif -->
AHA only includes fragments when their requires "attribute" is true.
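For illustration, evaluating such a requires expression against the user model can be sketched as follows; this is our toy evaluator (AHA's actual parser differs), and it treats a concept as known only at 100% knowledge.

```python
def requires_met(um, expr):
    # Map each concept name to its truth value, then evaluate the
    # boolean expression with builtins disabled (toy evaluator only).
    names = {c: v.get("knowledge", 0) == 100 for c, v in um.items()}
    return eval(expr, {"__builtins__": {}}, names)

um = {"readme": {"knowledge": 100}, "intro": {"knowledge": 40}}
print(requires_met(um, "readme and not intro"))  # True
```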
The above examples illustrate that representing the actual functionality of an existing AHS in the AHAM model is fairly straightforward. The main reasons for using such a representation are to be able to compare different AHS, to possibly translate an adaptive hypermedia application from one AHS to another, and to identify potential problems or shortcomings in existing AHS.
We conclude this Section with an illustration of one specific shortcoming that we have found in both AHA [DC98] and Interbook [BSW96b]: the "new" knowledge values are calculated before generating the page (and in fact these systems do not support calculating knowledge values after generating a page at all). When a user requests a page, the knowledge generated by reading this page is already taken into account during the generation of the page. This has desirable as well as undesirable side-effects:
- When links to other pages become relevant after reading the current page it makes sense to already annotate the link anchors as relevant when presenting the page. Once a page is generated its presentation remains static while the user is reading it (and rightfully so). The new knowledge thus needs to be taken into account before the page is actually read.
- Pages contain information that becomes relevant or non-relevant depending on the user’s knowledge. In some cases the relevance of a fragment may depend on the user having read the page that contains this fragment. This means that a fragment may be relevant the first time a page is visited and non-relevant thereafter, or just the other way round.
Because the knowledge is already taken into account before the page is generated for the first time, a different "first-time version" becomes impossible to create. (Some readers may argue that having content that changes in this way may not be desirable in any case, but not having this possibility limits the general applicability of the AHS.)
5. Conclusions and Future Work
Over the past few years we have developed an AHS, mainly for use in courseware. We have come across a number of other AHS, with different interesting properties. As part of the redesign of AHA [DC98] we developed a reference model for AHS, named AHAM. The description of adaptive hypermedia applications in terms of this model has provided us with valuable redesign issues. The three most important ones are:
- The division of an adaptive hypermedia application into a domain model, user model, and adaptation model provides a clear separation of concerns and will lead to a better separation of orthogonal parts of the AHS functionality in the implementation of the next version of AHA. We believe that a system which supports this separation of concerns will not only result in a cleaner implementation, but also in a more usable authoring environment [WHD97a].
- In this paper we have described the adaptation rules in such a way that the rule definition is independent of the rule execution. This makes authoring easier.
By representing AHA in the AHAM model we have identified another shortcoming: the lack of a two-phase application of rules. We found that this shortcoming is present in other AHS as well.
We deliberately based the AHAM model on the Dexter hypermedia reference model [HS90, HS94], to show that AHS are "true" hypermedia systems. In this paper we have concentrated on user modeling and adaptation. The description of these aspects at an abstract level sets AHAM apart from other descriptions of AHS that are too closely related to the actual implementation of these AHS.
In the near future we will develop a new version of the AHA system, in which the separation of domain model, user model and adaptation model will be more complete. We also plan an extended paper with a complete formal definition of AHAM, including a formal specification of a language for specifying adaptation rules.
Contents

1. Widgets
2. Scripting
3. Indices and tables
CHAPTER 1
Widgets
1.1 Corpus
Load a corpus of text documents, (optionally) tagged with categories.
Inputs
• None
Outputs
• Corpus: A collection of documents.
Corpus widget reads text corpora from files and sends a corpus instance to its output channel. History of the most recently opened files is maintained in the widget. The widget also includes a directory with sample corpora that come pre-installed with the add-on.
The widget reads data from Excel (.xlsx), comma-separated (.csv) and native tab-delimited (.tab) files.
1. Browse through previously opened data files, or load any of the sample ones.
2. Browse for a data file.
3. Reload the currently selected data file.
4. Information on the loaded data set.
5. Features that will be used in text analysis.
6. Features that won’t be used in text analysis and serve as labels or class.
You can drag and drop features between the two boxes and also change the order in which they appear.
### 1.1.1 Example
The first example shows a very simple use of **Corpus** widget. Place **Corpus** onto canvas and connect it to **Corpus Viewer**. We’ve used *book-excerpts.tab* data set, which comes with the add-on, and inspected it in **Corpus Viewer**.
The second example demonstrates how to quickly visualize your corpus with **Word Cloud**. We could connect **Word Cloud** directly to **Corpus**, but instead we decided to apply some preprocessing with **Preprocess Text**. We are again working with *book-excerpts.tab*. We converted all text to lowercase, tokenized (split) the text into words only, filtered out English stopwords and kept the 100 most frequent tokens.
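The same corpus can also be loaded from a script, mirroring what the widget does (a minimal sketch, assuming the orange3-text add-on is installed; see the Scripting chapter):

```python
# Minimal scripting equivalent of the Corpus widget (assumes the
# orange3-text add-on is installed).
from orangecontrib.text import Corpus

# Load one of the sample corpora that ship with the add-on.
corpus = Corpus.from_file("book-excerpts")

print(len(corpus))    # number of documents
print(corpus.domain)  # features, class and meta attributes
```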
1.2 Import Documents
Import text documents from folders.
**Inputs**
- None
**Outputs**
- Corpus: A collection of documents from the local machine.
*Import Documents* widget retrieves text files from folders and creates a corpus. The widget reads .txt, .docx, .odt, .pdf and .xml files. If a folder contains subfolders, they will be used as class labels.
1. Folder being loaded.
2. Load folder from a local machine.
3. Reload the data.
4. Number of documents retrieved.
If the widget cannot read the file for some reason, the file will be skipped. Files that were successfully retrieved will still be on the output.
### 1.2.1 Example
To retrieve the data, select the folder icon on the right side of the widget. Select the folder you wish to turn into corpus. Once the loading is finished, you will see how many documents the widget retrieved. To inspect them, connect the widget to *Corpus Viewer*. We’ve used a set of Kennedy’s speeches in a plain text format.
Now let us try it with subfolders. We have placed Kennedy's speeches in two folders - pre-1962 and post-1962. If we load the parent folder, these two subfolders will be used as class labels. Check the output of the widget in a Data Table.
1.3 The Guardian
Fetching data from The Guardian Open Platform.
**Inputs**
- None
**Outputs**
- Corpus: A collection of documents from the Guardian newspaper.
Guardian retrieves articles from the Guardian newspaper via their API. For the widget to work, you need to provide the API key, which you can get at their access platform.
1. Insert the API key for the widget to work.
2. Provide the query and set the time frame from which to retrieve the articles.
3. Define which features to retrieve from the Guardian platform.
4. Information on the output.
5. Press Search to start retrieving the articles or Stop to stop the retrieval.
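Behind the scenes the widget queries the Guardian's public search endpoint. A hedged sketch of an equivalent raw request (parameter names follow the public Open Platform documentation; YOUR_API_KEY is a placeholder):

```python
# Illustrative raw query against the Guardian Open Platform; the
# widget performs an equivalent request. YOUR_API_KEY is a placeholder.
import requests

response = requests.get(
    "https://content.guardianapis.com/search",
    params={
        "q": "slovenia",
        "from-date": "2017-09-01",
        "to-date": "2018-09-30",
        "show-fields": "headline,bodyText",
        "api-key": "YOUR_API_KEY",
    },
)
articles = response.json()["response"]["results"]
print(len(articles))
```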
1.3.1 Example
Guardian can be used just like any other data retrieval widget in Orange, namely NY Times, Wikipedia, Twitter or PubMed.
We will retrieve 240 articles mentioning Slovenia between September 2017 and September 2018. The text will include the article headline and content. Upon pressing Search, the articles will be retrieved.
We can observe the results in the Corpus Viewer widget.
1.4 NY Times
Loads data from the New York Times’ Article Search API.
Inputs
- None
Outputs
- Corpus: A collection of documents from the New York Times.
NYTimes widget loads data from New York Times' Article Search API. You can query NYTimes articles from September 18, 1851 to today, but the API limit allows retrieving only 1,000 documents per query. Define which features to use for text mining; Headline and Abstract are selected by default.
To use the widget, you must enter your own API key.
1. To begin your query, insert NY Times’ Article Search API key. The key is securely saved in your system keyring service (like Credential Vault, Keychain, KWallet, etc.) and won’t be deleted when clearing widget settings.
2. Set query parameters:
- *Query*
- Query time frame. The widget allows querying articles from September 18, 1851 onwards. Default is set to 1 year back from the current date.
3. Define which features to include as text features.
4. Information on the output.
5. Produce report.
6. Run or stop the query.
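An equivalent raw query against the Article Search API can be sketched with requests (YOUR_API_KEY is a placeholder; dates use the API's YYYYMMDD format):

```python
# Illustrative raw query against the NY Times Article Search API;
# YOUR_API_KEY is a placeholder (the widget keeps the real key in
# the system keyring).
import requests

response = requests.get(
    "https://api.nytimes.com/svc/search/v2/articlesearch.json",
    params={"q": "Slovenia", "begin_date": "20170901",
            "end_date": "20180901", "api-key": "YOUR_API_KEY"},
)
for doc in response.json()["response"]["docs"]:
    print(doc["headline"]["main"])
```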
1.4.1 Example
**NYTimes** is a data retrieving widget, similar to Twitter and Wikipedia. As it can retrieve geolocations, that is, geographical locations the article mentions, it works great in combination with the GeoMap widget.
First, let's query **NYTimes** for all articles on Slovenia. We can retrieve the articles found and view the results in Corpus Viewer. The widget displays all the retrieved features, but includes only the selected features as text mining features.
Now, let’s inspect the distribution of geolocations from the articles mentioning Slovenia. We can do this with GeoMap. Unsurprisingly, Croatia and Hungary appear the most often in articles on Slovenia (discounting Slovenia itself), with the rest of Europe being mentioned very often as well.
1.5 Pubmed
Fetch data from PubMed journals.
**Inputs**
- None
**Outputs**
- Corpus: A collection of documents from the PubMed online service.
*PubMed* comprises more than 26 million citations for biomedical literature from MEDLINE, life science journals, and online books. The widget allows you to query and retrieve these entries. You can use regular search or construct advanced queries.
1. Enter a valid e-mail to retrieve queries.
2. Regular search:
- **Author**: queries entries from a specific author. Leave empty to query by all authors.
- **From**: define the time frame of publication.
- **Query**: enter the query.

**Advanced search** enables you to construct complex queries. See PubMed's website to learn how to construct such queries. You can also copy-paste constructed queries from the website.
3. **Find records** finds available data from PubMed matching the query. Number of records found will be displayed above the button.
4. Define the output. All checked features will be on the output of the widget.
5. Set the number of records you wish to retrieve. Press **Retrieve records** to get the results of your query on the output.
Below the button is information on the number of records on the output.
### 1.5.1 Example
**PubMed** can be used just like any other data widget. In this example we’ve queried the database for records on orchids. We retrieved 1000 records and kept only 'abstract' in our meta features to limit the construction of tokens only to this feature.
We used **Preprocess Text** to remove stopwords and words shorter than 3 characters (regexp \b\w{1,2}\b). This might remove some important words denoting chemicals, so we need to be careful about what we filter out. For the sake of quick inspection we retained only longer words, which are displayed by frequency in **Word Cloud**.
### 1.6 Twitter
Fetching data from The Twitter Search API.
**Inputs**
- None
**Outputs**
- Corpus: A collection of tweets from the Twitter API.
**Twitter** widget enables querying tweets through Twitter API. You can query by content, author or both and accumulate results should you wish to create a larger data set. The widget only supports REST API and allows queries for up to two weeks back.
1. To begin your queries, insert Twitter key and secret. They are securely saved in your system keyring service (like Credential Vault, Keychain, KWallet, etc.) and won’t be deleted when clearing widget settings. You must first create a Twitter app to get API keys.
2. Set query parameters:
- **Query word list**: list desired queries, one per line. Queries are automatically joined by OR.
- **Search by**: specify whether you want to search by content, author or both. If searching by author, you must enter a proper Twitter handle (without @) in the query list.
- **Language**: set the language of retrieved tweets. Any will retrieve tweets in any language.
- **Max tweets**: set the top limit of retrieved tweets. If the box is not ticked, no upper bound will be set - the widget will retrieve all available tweets.
- **Allow retweets**: if ‘Allow retweets’ is checked, retweeted tweets will also appear on the output. This might duplicate some results.
- **Collect results**: if ‘Collect results’ is ticked, the widget will append new queries to the previous ones. Enter new queries, run Search and the new results will be appended to the previous ones.
3. Define which features to include as text features.
4. Information on the number of tweets on the output.
5. Run query.
1.6.1 Examples
First, let's try a simple query. We will search for tweets containing either ‘data mining’ or ‘machine learning’ in the content and allow retweets. We will further limit our search to only 100 tweets in English.
We check the output in Corpus Viewer to get an initial idea of our results. Then we preprocess the tweets with lowercasing, URL removal, the tweet tokenizer and removal of stopwords and punctuation. The best way to see the results is with Word Cloud. This displays the most popular words in the field of data mining and machine learning from the past two weeks.
Our next example is a bit more complex. We’re querying tweets from Hillary Clinton and Donald Trump from the presidential campaign 2016.
Then we’ve used Preprocess Text to get suitable tokens on our output. We’ve connected Preprocess Text to Bag of Words in order to create a table with words as features and their counts as values. A quick check in Word Cloud gives us an idea about the results.
Now we would like to predict the author of the tweet. With Select Columns we’re setting ‘Author’ as our target variable. Then we connect Select Columns to Test & Score. We’ll be using Logistic Regression as our learner, which we also connect to Test & Score.
We can observe the results of our author predictions directly in the widget. The AUC score is quite good; it seems we can, to some extent, predict the author of a tweet based on its content.
## 1.7 Wikipedia
Fetching data from MediaWiki RESTful web service API.
**Inputs**
- None
**Outputs**
- Corpus: A collection of documents from Wikipedia.
Wikipedia widget is used to retrieve texts from the Wikipedia API and is useful mostly for teaching and demonstration.
1. Query parameters:
- Query word list, where each query is listed in a new line.
- Language of the query. English is set by default.
- Number of articles to retrieve per query (range 1-25). Please note that querying is done recursively and that disambiguations are also retrieved, sometimes resulting in a larger number of retrieved articles than set on the slider.
2. Select which features to include as text features.
3. Information on the output.
4. Produce a report.
5. Run query.
### 1.7.1 Example
This is a simple example, where we use Wikipedia and retrieve the articles on ‘Slovenia’ and ‘Germany’. Then we simply apply default preprocessing with Preprocess Text and observe the most frequent words in those articles with Word Cloud.
Wikipedia works just like any other corpus widget (NY Times, Twitter) and can be used accordingly.
### 1.8 Preprocess Text
Preprocesses corpus with selected methods.
**Inputs**
- Corpus: A collection of documents.
**Outputs**
- Corpus: Preprocessed corpus.
**Preprocess Text** splits your text into smaller units (tokens), filters them, runs normalization (stemming, lemmatization), creates n-grams and tags tokens with part-of-speech labels. Steps in the analysis are applied sequentially and can be turned on or off.
1. **Information on preprocessed data.** *Document count* reports on the number of documents on the input. *Total tokens* counts all the tokens in corpus. *Unique tokens* excludes duplicate tokens and reports only on unique tokens in the corpus.
2. **Transformation** transforms input data. It applies lowercase transformation by default.
- *Lowercase* will turn all text to lowercase.
- *Remove accents* will remove all diacritics/accents in text. naïve $\rightarrow$ naive
- *Parse html* will detect html tags and parse out text only. `<a href...>Some text</a>` $\rightarrow$ Some text
- *Remove urls* will remove urls from text. This is a http://orange.biolab.si/ url. $\rightarrow$ This is a url.
3. **Tokenization** is the method of breaking the text into smaller components (words, sentences, bigrams).
- *Word & Punctuation* will split the text by words and keep punctuation symbols. This example. $\rightarrow$ (This), (example), (.)
- *Whitespace* will split the text by whitespace only. This example. $\rightarrow$ (This), (example.)
**Sentence** will split the text by full stop, retaining only full sentences. This example. Another example. → (This example.), (Another example.)
**Regexp** will split the text by provided regex. It splits by words only by default (omits punctuation).
**Tweet** will split the text by pre-trained Twitter model, which keeps hashtags, emoticons and other special symbols. This example. :-) #simple → (This), (example), (.), (:-)), (#simple)
4. **Normalization** applies stemming and lemmatization to words. (I've always loved cats. → I have alway love cat.) For languages other than English use Snowball Stemmer (offers languages available in its NLTK implementation).
- **Porter Stemmer** applies the original Porter stemmer.
- **Snowball Stemmer** applies an improved version of Porter stemmer (Porter2). Set the language for normalization, default is English.
- **WordNet Lemmatizer** applies a network of cognitive synonyms to tokens based on a large lexical database of English.
5. **Filtering** removes or keeps a selection of words.
- **Stopwords** removes stopwords from text (e.g. removes ‘and’, ‘or’, ‘in’...). Select the language to filter by, English is set as default. You can also load your own list of stopwords provided in a simple *.txt file with one stopword per line.

Click the ‘browse’ icon to select the file containing stopwords. If the file was properly loaded, its name will be displayed next to the pre-loaded stopwords. Change ‘English’ to ‘None’ if you wish to filter out only the provided stopwords. Click the ‘reload’ icon to reload the list of stopwords.
- **Lexicon** keeps only the words provided in the file. Load a *.txt file with one word per line to use as the lexicon. Click the ‘reload’ icon to reload the lexicon.
- **Regexp** removes words that match the regular expression. Default is set to remove punctuation.
- **Document frequency** keeps tokens that appear in no fewer and no more than the specified number or percentage of documents. With integer parameters, the bounds are absolute counts: DF = (3, 5) keeps only tokens that appear in 3 to 5 documents. With float parameters, the bounds are fractions of the corpus: DF = (0.3, 0.5) keeps only tokens that appear in 30% to 50% of documents. The default returns all tokens (a plain-Python sketch of this filter follows the note below).
- **Most frequent tokens** keeps only the specified number of most frequent tokens. Default is the 100 most frequent tokens.
6. **N-grams Range** creates n-grams from tokens. Numbers specify the range of n-grams. Default returns one-grams and two-grams.
7. **POS Tagger** runs part-of-speech tagging on tokens.
- **Averaged Perceptron Tagger** runs POS tagging with Matthew Honnibal’s averaged perceptron tagger.
- **Treebank POS Tagger (MaxEnt)** runs POS tagging with a trained Penn Treebank model.
- **Stanford POS Tagger** runs a log-linear part-of-speech tagger designed by Toutanova et al. Please download it from the provided website and load it in Orange. You have to load the language-specific model in Model and load stanford-postagger.jar in the Tagger section.
8. Produce a report.
9. If **Commit Automatically** is on, changes are communicated automatically. Alternatively press **Commit**.
Note! Preprocess Text applies preprocessing steps in the order they are listed. This means it will first transform the text, then apply tokenization, POS tagging, normalization and filtering, and finally construct n-grams based on the given tokens. This is especially important for the WordNet Lemmatizer, since it requires POS tags for proper normalization.
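As promised in the Document frequency item above, here is a plain-Python sketch of that filter (ours, not the widget's code); integer bounds are absolute counts, float bounds are fractions of the corpus:

```python
# Keep only tokens whose document frequency lies within [lo, hi];
# a sketch of the "Document frequency" filter, not Orange's code.
from collections import Counter

def df_filter(tokenized_docs, lo, hi):
    n_docs = len(tokenized_docs)
    df = Counter(t for doc in tokenized_docs for t in set(doc))

    def in_range(count):
        if isinstance(lo, float):               # fractional bounds
            return lo <= count / n_docs <= hi
        return lo <= count <= hi                # absolute bounds

    keep = {t for t, c in df.items() if in_range(c)}
    return [[t for t in doc if t in keep] for doc in tokenized_docs]

docs = [["cat", "dog"], ["cat", "fish"], ["cat"]]
print(df_filter(docs, 2, 3))  # drops "dog" and "fish" (DF = 1)
```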
1.8.1 Useful Regular Expressions
Here are some useful regular expressions for quick filtering:
- `\bword\b`: matches the exact word
- `\w+`: matches only words, no punctuation
- `\b[Bb]\w+\b`: matches words beginning with the letter b
- `\w{4,}`: matches words at least 4 characters long
- `\b\w+[Yy]\b`: matches words ending with the letter y
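These patterns can be verified quickly with Python's `re` module:

```python
# Quick check of the filtering patterns above with the re module.
import re

text = "A big brown bear ate twenty berries happily"
print(re.findall(r"\bbear\b", text))     # exact word
print(re.findall(r"\w+", text))          # words only, no punctuation
print(re.findall(r"\b[Bb]\w+\b", text))  # words beginning with b
print(re.findall(r"\w{4,}", text))       # words of 4+ characters
print(re.findall(r"\b\w+[Yy]\b", text))  # words ending with y
```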
1.8.2 Examples
In the first example we will observe the effects of preprocessing on our text. We are working with book-excerpts.tab that we've loaded with the Corpus widget. We have connected Preprocess Text to Corpus and retained the default preprocessing methods (lowercase, per-word tokenization and stopword removal). The only additional parameter we set was to output only the 100 most frequent tokens. Then we connected Preprocess Text to Word Cloud to observe the most frequent words in our text. Play around with different parameters to see how they transform the output.
The second example is slightly more complex. We first acquired our data with Twitter widget. We queried the internet for tweets from users @HillaryClinton and @realDonaldTrump and got their tweets from the past two weeks, 242 in total.
In **Preprocess Text** there’s **Tweet** tokenization available, which retains hashtags, emojis, mentions and so on. However, this tokenizer doesn’t get rid of punctuation, thus we expanded the Regexp filtering with symbols that we wanted to get rid of. We ended up with word-only tokens, which we displayed in **Word Cloud**. Then we created a schema for predicting author based on tweet content, which is explained in more details in the documentation for **Twitter** widget.
### 1.9 Bag of Words
Generates a bag of words from the input corpus.
**Inputs**
- Corpus: A collection of documents.
**Outputs**
- Corpus: Corpus with bag of words features appended.
**Bag of Words** model creates a corpus with word counts for each data instance (document). The count can be either absolute, binary (contains or does not contain) or sublinear (logarithm of the term frequency). The bag of words model is required in combination with **Word Enrichment** and can be used for predictive modelling.
1. Parameters for bag of words model:
• Term Frequency:
– Count: number of occurrences of a word in a document
– Binary: word appears or does not appear in the document
– Sublinear: logarithm of term frequency (count)
• Document Frequency:
– (None)
– IDF: inverse document frequency
– Smooth IDF: adds one to document frequencies to prevent zero division.
• Regularization:
– (None)
– L1 (Sum of elements): normalizes vector length to sum of elements
– L2 (Euclidean): normalizes vector length to sum of squares
2. Produce a report.
3. If Commit Automatically is on, changes are communicated automatically. Alternatively press Commit.
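The same weighting options have counterparts in scikit-learn's TfidfVectorizer, which makes a convenient reference (a sketch; Orange's exact numerical conventions may differ):

```python
# Sublinear TF + smooth IDF + L2 normalization, mirroring the
# options above (scikit-learn's conventions, which may differ in
# detail from Orange's).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the cat sat on the mat", "dogs bark"]

vec = TfidfVectorizer(sublinear_tf=True, smooth_idf=True, norm="l2")
X = vec.fit_transform(docs)
print(vec.get_feature_names_out())
print(X.toarray().round(2))
```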
1.9.1 Example
In the first example we will simply check what the bag of words model looks like. Load book-excerpts.tab with the Corpus widget and connect it to Bag of Words. Here we kept the defaults - a simple count of term frequencies. Check what Bag of Words outputs with a Data Table. The final column in white represents term frequencies for each document.
In the second example we will try to predict document category. We are still using the book-excerpts.tab data set, which we sent through Preprocess Text with default parameters. Then we connected Preprocess Text to Bag of Words to obtain term frequencies by which we will compute the model.
Connect **Bag of Words** to **Test & Score** for predictive modelling. Connect **SVM** or any other classifier to **Test & Score** as well (both on the left side). **Test & Score** will now compute performance scores for each learner on the input. Here we got quite impressive results with SVM. Now we can check where the model made a mistake.
Add **Confusion Matrix** to **Test & Score**. Confusion matrix displays correctly and incorrectly classified documents. **Select Misclassified** will output misclassified documents, which we can further inspect with **Corpus Viewer**.
### 1.10 Similarity Hashing
Computes documents hashes.
**Inputs**
- Corpus: A collection of documents.
**Outputs**
- Corpus: Corpus with simhash value as attributes.
**Similarity Hashing** is a widget that transforms documents into similarity vectors. The widget uses the **SimHash** method by Moses Charikar.
1. Set the Simhash size (how many attributes will be on the output, corresponding to bits of information) and the shingle length (how many tokens are used in a shingle).
2. Commit Automatically outputs the data automatically. Alternatively, press Commit.
1.10.1 Example
We will use deerwester.tab to find similar documents in this small corpus. Load the data with Corpus and pass it to Similarity Hashing. We will keep the default hash size and shingle length. We can observe what the widget outputs in a Data Table. There are 64 new attributes available, corresponding to the Simhash size parameter.
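For intuition, here is a generic sketch of Charikar-style SimHash over token shingles (an illustration of the idea, not the widget's implementation):

```python
# Generic 64-bit SimHash over token shingles; illustrative only.
import hashlib

def simhash(tokens, bits=64, shingle=10):
    shingles = [" ".join(tokens[i:i + shingle])
                for i in range(max(1, len(tokens) - shingle + 1))]
    v = [0] * bits
    for s in shingles:
        h = int(hashlib.md5(s.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    # Each bit of the final hash is the sign of the accumulated votes.
    return sum(1 << i for i in range(bits) if v[i] > 0)

print(bin(simhash("a small example document to hash".split())))
```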
1.11 Sentiment Analysis
Predict sentiment from text.
Inputs
- Corpus: A collection of documents.
Outputs
- Corpus: A corpus with information on the sentiment of each document.
Sentiment Analysis predicts sentiment for each document in a corpus. It uses Liu Hu and Vader sentiment modules from NLTK. Both of them are lexicon-based. For Liu Hu, you can choose English or Slovenian version.
1. Method:
- Liu Hu: lexicon-based sentiment analysis (supports English and Slovenian)
- Vader: lexicon- and rule-based sentiment analysis
2. Produce a report.
3. If Auto commit is on, the sentiment-tagged corpus is communicated automatically. Alternatively press Commit.
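The Vader scores can also be computed directly with NLTK, which the widget wraps (the vader_lexicon resource must be downloaded once):

```python
# Direct use of NLTK's Vader, the module the widget wraps.
# Run nltk.download("vader_lexicon") once beforehand.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("I love this, it is wonderful!"))
# -> {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': ...}
```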
1.11.1 Example
Sentiment Analysis can be used for constructing additional features with sentiment prediction from a corpus. First, we load Election-2016-tweets.tab in Corpus. Then we connect Corpus to Sentiment Analysis. The widget will append four new features for the Vader method: positive score, negative score, neutral score and compound (combined score).
We can observe the new features in a Data Table, where we sorted the tweets by compound score. Compound represents the total sentiment of a tweet, where -1 is the most negative and 1 the most positive.
Now let us visualize the data. We have some features we are currently not interested in, so we will remove them with Select Columns.
Then we will make our corpus a little smaller, so it will be easier to visualize. Pass the data to **Data Sampler** and retain a random 10% of the tweets.
Now pass the filtered corpus to **Heat Map**. Use **Merge by k-means** to merge tweets with the same polarity into one line. Then use **Cluster by rows** to create a clustered visualization where similar tweets are grouped together. Click on a cluster to select a group of tweets - we selected the negative cluster.
To observe the selected subset, pass the tweets to Corpus Viewer.
1.12 Tweet Profiler
Detect Ekman’s, Plutchik’s or Profile of Mood States’ emotions in tweets.
**Inputs**
- Corpus: A collection of tweets (or other documents).
**Outputs**
- Corpus: A corpus with information on the sentiment of each document.
Tweet Profiler retrieves information on sentiment from the server for each given tweet (or document). The widget sends data to the server, where a model computes emotion probabilities and/or scores. The widget supports three classifications of emotion, namely Ekman's, Plutchik's and Profile of Mood States (POMS).
1. **Options:**
- Attribute to use as content.
- Emotion classification, either Ekman's, Plutchik's or Profile of Mood States. Multi-class will output the single most probable emotion per document, while multi-label will output values in columns, one per emotion.
- The widget can output classes of emotion (categorical), probabilities (numeric), or embeddings (an emotional vector of the document).
2. **Commit Automatically** automatically outputs the result. Alternatively, press Commit.
1.12.1 Example
We will use `election-tweets-2016.tab` for this example. Load the data with Corpus and connect it to Tweet Profiler. We will use Content attribute for the analysis, Ekman’s classification of emotion with multi-class option and we will output the result as class. We will observe the results in a Box Plot. In the widget, we have selected to observe the Emotion variable, grouped by Author. This way we can see which emotion prevails by which author.
1.13 Topic Modelling
Inputs
- Corpus: A collection of documents.
Outputs
- Corpus: Corpus with topic weights appended.
- Topics: Selected topics with word weights.
- All Topics: Topic weights by tokens.
**Topic Modelling** discovers abstract topics in a corpus based on clusters of words found in each document and their respective frequency. A document typically contains multiple topics in different proportions, thus the widget also reports on the topic weight per document.
1. Topic modelling algorithm:
- Latent Semantic Indexing
- Latent Dirichlet Allocation
- Hierarchical Dirichlet Process
2. Parameters for the algorithm. LSI and LDA accept only the number of topics modelled, with the default set to 10. HDP, however, has more parameters. As this algorithm is computationally very demanding, we recommend trying it on a subset, or setting all the required parameters in advance and only then running the algorithm (connect the input to the widget).
- First level concentration ($\gamma$): distribution at the first (corpus) level of Dirichlet Process
- Second level concentration ($\alpha$): distribution at the second (document) level of Dirichlet Process
- The topic Dirichlet ($\eta$): concentration parameter used for the topic draws
- Top level truncation (T): corpus-level truncation (no of topics)
- Second level truncation (K): document-level truncation (no of topics)
- Learning rate ($\kappa$): step size
- Slow down parameter ($\tau$)
3. Produce a report.
4. If Commit Automatically is on, changes are communicated automatically. Alternatively press Commit.
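The underlying models come from gensim; a minimal LSI sketch on a toy corpus (assuming gensim is installed):

```python
# Minimal gensim LSI example on a toy tokenized corpus; the widget
# wraps the same family of models.
from gensim import corpora, models

docs = [["human", "computer", "interaction"],
        ["graph", "trees", "minors"],
        ["graph", "minors", "survey"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lsi = models.LsiModel(bow, id2word=dictionary, num_topics=2)
for topic_id, words in lsi.print_topics():
    print(topic_id, words)  # words with positive/negative weights
```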
### 1.13.1 Example
In the first example, we present a simple use of the Topic Modelling widget. First we load grimm-tales-selected.tab data set and use Preprocess Text to tokenize by words only and remove stopwords. Then we connect Preprocess Text to Topic Modelling, where we use a simple Latent Semantic Indexing to find 10 topics in the text.
LSI provides both positive and negative weights per topic. A positive weight means the word is highly representative of a topic, while a negative weight means the word is highly unrepresentative of it (the less frequently it occurs in a text, the more likely the document belongs to the topic). Positive words are colored green and negative words are colored red.
We then select the first topic and display the most frequent words in the topic in Word Cloud. We also connected Preprocess Text to Word Cloud in order to be able to output selected documents. Now we can select a specific word in the word cloud, say little. It will be colored red and also highlighted in the word list on the left.
Now we can observe all the documents containing the word little in Corpus Viewer.
In the second example, we will look at the correlation between topics and words/documents. Connect Topic Modelling to Heat Map. Ensure the link is set to All Topics - Data. Topic Modelling will output a matrix of topic weights by words from text (more precisely, tokens).
We can observe the output in a Data Table. Tokens are in rows and retrieved topics in columns. Values represent how much a word is represented in a topic.
To visualize this matrix, open **Heat Map**. Select **Merge by k-means** and **Cluster - Rows** to merge similar rows into one and sort them by similarity, which makes the visualization more compact.
In the upper part of the visualization, we have words that highly define topics 1-3 and in the lower part those that define topics 5 and 10.
We can similarly observe topic representation across documents. We connect another **Heat Map** to **Topic Modelling** and set link to **Corpus - Data**. We set **Merge** and **Cluster** as above.
In this visualization we see how strongly each topic is represented in a document. It looks like Topic 1 is represented across almost the entire corpus, while the other topics are more specific. To observe a specific set of documents, select either a clustering node or a row in the visualization. Then pass the data to **Corpus Viewer**.
1.14 Corpus Viewer
Displays corpus content.
**Inputs**
- Corpus: A collection of documents.
**Outputs**
- Corpus: Documents containing the queried word.
**Corpus Viewer** is meant for viewing text files (instances of Corpus). It will always output an instance of corpus. If **RegExp** filtering is used, the widget will output only matching documents.
1. **Information**:
- **Documents**: number of documents on the input
- **Preprocessed**: True if a preprocessor is used, else False. It also reports the number of tokens and types (unique tokens).
- **POS tagged**: if POS tags are on the input, the result is True, else False.
- **N-grams range**: if N-grams are set in Preprocess Text, results are reported, default is 1-1 (one-grams).
- **Matching**: number of documents matching the RegExp Filter. All documents are output by default.
2. **RegExp Filter**: Python regular expression for filtering documents. By default no documents are filtered (entire corpus is on the output).
3. **Search Features**: features by which the RegExp Filter is filtering. Use Ctrl (Cmd) to select multiple features.
4. **Display Features**: features that are displayed in the viewer. Use Ctrl (Cmd) to select multiple features.
5. **Show Tokens & Tags**: if tokens and POS tag are present on the input, you can check this box to display them.
6. If **Auto commit is on**, changes are communicated automatically. Alternatively press **Commit**.
### 1.14.1 Example
*Corpus Viewer* can be used for displaying all or some documents in a corpus. In this example, we will first load *book-excerpts.tab*, which already comes with the add-on, into the Corpus widget. Then we will preprocess the text into words, filter out the stopwords, create bi-grams and add POS tags (more on preprocessing in *Preprocess Text*). Now we want to see the results of preprocessing. In *Corpus Viewer* we can see how many unique tokens we got and what they are (tick **Show Tokens & Tags**). Since we also used a POS tagger, part-of-speech labels are displayed alongside the tokens underneath the text.
Now we will filter out just the documents talking about the character Bill. We use the regular expression `\bBill\b` to find the documents containing the word Bill. You can output matching or non-matching documents, view them in another Corpus Viewer, or analyse them further.
1.15 Word Cloud
Generates a word cloud from corpus.
**Inputs**
- Topic: Selected topic.
- Corpus: A collection of documents.
**Outputs**
- Corpus: Documents that match the selection.
- Word: Selected word that can be used as query in Concordance.
Word Cloud displays tokens in the corpus, their size denoting the frequency of the word in the corpus. Words are listed by their frequency (weight) in the widget. The widget outputs documents containing the selected tokens from the word cloud.
1. Information on the input.
- number of words (tokens) in a topic
- number of documents and tokens in the corpus
2. Adjust the plot.
- If Color words is ticked, words will be assigned a random color. If unchecked, the words will be black.
- Word tilt adjusts the tilt of words. The current state of tilt is displayed next to the slider (‘no’ is the default).
- Regenerate word cloud plots the cloud anew.
3. Words & weights displays a sorted list of words (tokens) by their frequency in the corpus or topic. Clicking on a word will select that same word in the cloud and output matching documents. Use Ctrl to select more than one word. Documents matching ANY of the selected words will be on the output (logical OR).
4. Save Image saves the image to your computer in a .svg or .png format.
1.15.1 Example
Word Cloud is an excellent widget for displaying the current state of the corpus and for monitoring the effects of preprocessing.
Use Corpus to load the data. Connect Preprocess Text to it and set your parameters. We’ve used defaults here, just to see the difference between the default preprocessing in the Word Cloud widget and the Preprocess Text widget.
We can see from the two widgets, that **Preprocess Text** displays only words, while default preprocessing in the **Word Cloud** tokenizes by word and punctuation.
### 1.16 Concordance
Display the context of the word.
**Inputs**
- Corpus: A collection of documents.
**Outputs**
- Selected Documents: Documents containing the queried word.
- Concordances: A table of concordances.
**Concordance** finds the queried word in a text and displays the context in which this word is used. Results in a single color come from the same document. The widget can output selected documents for further analysis or a table of concordances for the queried word. Note that the widget finds only exact matches of a word, which means that if you query the word ‘do’, the word ‘doctor’ won’t appear in the results.
1. **Information**:
- **Documents**: number of documents on the input.
- **Tokens**: number of tokens on the input.
- **Types**: number of unique tokens on the input.
- **Matching**: number of documents containing the queried word.
2. **Number of words**: the number of words displayed on each side of the queried word.
3. **Queried word**.
4. If **Auto commit is on**, selected documents are communicated automatically. Alternatively press **Commit**.
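A plain-Python sketch of exact-match concordance extraction with a fixed context window (illustrative only):

```python
# Exact-match concordances with a fixed number of context words.
def concordance(tokens, word, width=5):
    rows = []
    for i, t in enumerate(tokens):
        if t == word:  # exact match only, as in the widget
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            rows.append((left, t, right))
    return rows

text = "the doctor said the doctor will see you now".split()
for left, word, right in concordance(text, "doctor", width=3):
    print(f"{left:>20} | {word} | {right}")
```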
### 1.16.1 Examples
*Concordance* can be used for displaying word contexts in a corpus. First, we load *book-excerpts.tab* in **Corpus**. Then we connect **Corpus** to **Concordance** and search for concordances of a word ‘doctor’. The widget displays all documents containing the word ‘doctor’ together with their surrounding (contextual) words.
Now we can select those documents that contain interesting contexts and output them to **Corpus Viewer** to inspect them further.
In the second example, we will output concordances instead. We will keep the book-excerpts.tab in Corpus and the connection to Concordance. Our queried word remains ‘doctor’.
This time, we will connect Data Table to Concordance and select Concordances output instead. In the Data Table, we get a list of concordances for the queried word and the corresponding documents. Now, we will save this table with Save Data widget, so we can use it in other projects or for further analysis.
1.17 GeoMap
Displays geographic distribution of data.
Inputs
- Data: Data set.
Outputs
- Corpus: Documents containing mentions of selected geographical regions.
**GeoMap** widget shows geolocations from textual (string) data. It finds mentions of geographic names (countries and capitals) and displays distributions (frequency of mentions) of these names on a map. It works with any Orange widget that outputs a data table and that contains at least one string attribute. The widget outputs selected data instances, that is all documents containing mentions of a selected country (or countries).
1. Select the meta attribute you want to search geolocations by. The widget will find all mentions of geolocations in a text and display distributions on a map.
2. Select the type of map you wish to display. The options are *World*, *Europe* and *USA*. You can zoom in and out of the map by pressing + and - buttons on a map or by mouse scroll.
3. The legend for the geographic distribution of data. Countries with the boldest color are most often mentioned in the selected region attribute (highest frequency).
To select documents mentioning a specific country, click on a country and the widget will output matching documents. To select more than one country hold Ctrl/Cmd upon selection.
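Conceptually, the widget tallies mentions of known region names in the selected string attribute. A toy sketch with an invented mini-gazetteer (the widget ships its own list of countries and capitals):

```python
# Count mentions of known regions in texts; GAZETTEER here is a toy
# stand-in for the widget's built-in list of countries and capitals.
from collections import Counter

GAZETTEER = {"Slovenia", "Germany", "Croatia", "Hungary"}

texts = ["Slovenia and Croatia share a border",
         "Germany exports to Slovenia"]
counts = Counter(word for text in texts
                 for word in text.split() if word in GAZETTEER)
print(counts)  # Counter({'Slovenia': 2, 'Croatia': 1, 'Germany': 1})
```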
### 1.17.1 Example
**GeoMap** widget can be used for simply visualizing distributions of geolocations or for a more complex interactive data analysis. Here, we’ve queried NY Times for articles on Slovenia for the time period of the last year (2015-2016). First we checked the results with Corpus Viewer.
Then we sent the data to GeoMap to see the distribution of geolocations by the country attribute. The attribute already contains country tags for each article, which is why NY Times works great in combination with GeoMap. We selected Germany, which sends all the documents tagged with Germany to the output. Remember, we queried NY Times for articles on Slovenia.
We can again inspect the output with Corpus Viewer. But there’s a more interesting way of visualizing the data. We’ve sent selected documents to Preprocess Text, where we’ve tokenized text to words and removed stopwords.
Finally, we can inspect the top words appearing in last year's documents on Slovenia that also mention Germany with Word Cloud.
### 1.18 Word Enrichment
Word enrichment analysis for selected documents.
**Inputs**
- Corpus: A collection of documents.
- Selected Data: Selected instances from corpus.
**Outputs**
- None
Word Enrichment displays a list of words with lower p-values (higher significance) for a selected subset compared to the entire corpus. A lower p-value indicates a higher likelihood that the word is significant for the selected subset (not randomly occurring in a text). FDR (False Discovery Rate) is linked to the p-value and reports the expected percentage of false predictions among the predictions made, i.e. it accounts for false positives in the list of low p-values.
1. Information on the input.
- Cluster words are all the tokens from the corpus.
- Selected words are all the tokens from the selected subset.
- After filtering reports on the enriched words found in the subset.
2. Filter enables you to filter by:
- p-value
- false discovery rate (FDR)
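A common way to obtain such a p-value for a single word is a hypergeometric (over-representation) test; a sketch with SciPy, noting that the widget's exact procedure may differ:

```python
# Over-representation test for one word via the hypergeometric
# distribution; a standard approach, not necessarily the widget's.
from scipy.stats import hypergeom

N = 1000  # tokens in the whole corpus
K = 40    # occurrences of the word in the whole corpus
n = 100   # tokens in the selected subset
k = 12    # occurrences of the word in the subset

# P(observing k or more occurrences in the subset by chance)
p_value = hypergeom.sf(k - 1, N, K, n)
print(p_value)
```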
1.18.1 Example
In the example below, we've retrieved recent tweets from the 2016 presidential candidates, Donald Trump and Hillary Clinton. Then we've preprocessed the tweets to get only words as tokens and to remove the stopwords. We've connected the preprocessed corpus to Bag of Words to get a table with word counts for our corpus.
Then we’ve connected Corpus Viewer to Bag of Words and selected only those tweets that were published by Donald Trump. See how we marked only the Author as our Search feature to retrieve those tweets.
Word Enrichment accepts two inputs - the entire corpus to serve as a reference and a selected subset from the corpus to do the enrichment on. First connect Corpus Viewer to Word Enrichment (input Matching Docs → Selected Data) and then connect Bag of Words to it (input Corpus → Data). In the Word Enrichment widget we can see the list of words that are more significant for Donald Trump than they are for Hillary Clinton.
1.19 Duplicate Detection
Detect & remove duplicates from a corpus.
**Inputs**
- Distances: A distance matrix.
**Outputs**
- Corpus Without Duplicates: Corpus with duplicates removed.
- Duplicates Cluster: Documents belonging to selected cluster.
- Corpus: Corpus with appended cluster labels.
Duplicate Detection uses clustering to find duplicates in the corpus. It is great with the Twitter widget for removing retweets and other similar documents.
To set the level of similarity, drag the vertical line in the visualization left or right. The further left the line, the more similar the documents have to be in order to be considered duplicates. You can also set the threshold manually in the control area.
1. Information on unique and duplicate documents.
2. Linkage used for clustering (Single, Average, Complete, Weighted and Ward).
3. Distance threshold sets the similarity cutoff. The lower the value, the more similar the data instances have to be to belong to the same cluster. You can also set the cutoff by dragging the vertical line in the plot.
4. Cluster labels can be appended as attributes, class or metas.
5. List of clusters at the selected threshold. They are sorted by size by default. Click on the cluster to observe its content on the output.
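The clustering step can be sketched with SciPy's hierarchical clustering, where the distance threshold plays the role of the vertical line (same linkage choices):

```python
# Duplicate detection as hierarchical clustering with a distance
# cutoff; a sketch of the idea using SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.array([[1.0, 2.0], [1.0, 2.0], [5.0, 8.0]])  # rows 0, 1 identical

Z = linkage(pdist(X, metric="euclidean"), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # e.g. [1 1 2] -> the first two rows are duplicates
```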
1.19.1 Example
This simple example uses iris data to find identical data instances. Load iris with the File widget and pass it to Distances. In Distances, use Euclidean distance for computing the distance matrix. Pass distances to Duplicate Detection.
It looks like cluster C147 contains three duplicate entries. Let us select it in the widget and observe it in a Data Table. Remember to set the output to Duplicates Cluster. The three data instances are identical. To use the data set without duplicates, use the first output, Corpus Without Duplicates.
The same procedure can also be used for corpora. Remember to use Bag of Words between Corpus and Distances.
CHAPTER 2
Scripting
2.1 Corpus
2.2 Preprocessor
2.3 Twitter
2.4 New York Times
2.5 The Guardian
2.6 Wikipedia
2.7 Bag of Words
2.8 Topic Modeling
2.9 Tag
CHAPTER 3
Indices and tables
- genindex
- modindex
- search
A Visual Specification Language for Model-to-Model Transformations
Esther Guerra
Computer Science Department
Universidad Carlos III de Madrid, Madrid, Spain
eguerra@inf.uc3m.es
Juan de Lara
School of Computer Science
Universidad Autónoma de Madrid, Madrid, Spain
Juan.deLara@uam.es
Dimitris Kolovos, Richard Paige
Computer Science Department
University of York, York, UK
{dkolovos, paige}@cs.york.ac.uk
Abstract—Model Driven Engineering promotes models as the core assets of projects and hence model transformations become first-class citizens in this approach. Likewise, the development of large scale transformations necessitates a systematic engineering process and supporting modelling notations. However, although many languages have been proposed to implement transformations, few allow their specification at a higher level of abstraction.
In this paper we present a visual, formal, declarative specification language to express model-to-model transformations and their correctness properties. The language supports the two main approaches to model-to-model transformation – trace-based and traceless – with a unified formal semantics. Moreover, we provide a compilation of specifications into OCL as this has many practical applications, e.g. it allows injecting assertions and correctness properties for automated testing of transformation implementations based on OMG standards.
Keywords—model-driven engineering; model-to-model transformation; specification languages; transformation testing
I. INTRODUCTION
Model Driven Engineering (MDE) is a software engineering approach that seeks increasing productivity and quality by raising the level of abstraction at which engineers work. For this purpose, models (in contrast to programs) are key assets in the development, and hence model transformations are the pillars of the process. A model transformation receives one input model and produces one output model, in the simplest case. If both models conform to the same meta-model the transformation is called endogenous, whereas if the meta-models are different it is called exogenous or model-to-model (M2M) transformation [1].
In order to become useful in industrial practice, engineers need methods and tools to analyse, design, implement and test complex and large M2M transformations. However, although many languages have been proposed to implement transformations [1], [2], [3], there is a lack of methods, notations and tools to cover further stages of the complete transformation life-cycle.
In standard software development, specification languages are commonly used to express desired properties about the applications to be built [4]. They focus on what the application should do without stating how to do it. Hence, they are closer to the system analysis (which could also be refined into design) than to its actual implementation. Formal specification languages like Z [4] or Alloy [5] have a mathematical underpinning that allows formal reasoning, refinement, proof, and specification-based testing of implementations [6].
In this paper we propose a high-level, formal, visual, declarative language to specify M2M transformations. Its purpose is not to implement transformations, but to express what the transformation is to do (but not how), as well as properties that transformed models should satisfy. In this sense, the role of our language for transformations is similar to the role of Z for software: providing support to the analysis and design of transformations. The language provides constructive and non-constructive primitives to specify relations that should hold between the input and output models, or forbidden situations. It also supports the two usual approaches to M2M transformation: trace-based, where explicit mappings define relations between the input and output models (as e.g. in QVT-Core [7] and triple graph grammars [8]), and traceless (as e.g. in QVT-R [7], ATL [2] and ETL [3]). Specifications can be used in two ways: (i) as a functional, potentially loose, definition of (part of) the expected behaviour of transformation implementations; and (ii) to provide correctness properties of the transformation.
Fig. 1 outlines our approach. First, the transformation designer uses our specification language to define the transformation behaviour, verification properties, and requirements on the valid input models (label 1). This specification has a formal semantics and can be analysed to discover redundancies, contradictions, and to measure coverage of the involved languages (label 2). Next, the developer uses the specification as a high-level model to implement the transformation (label 3). This implementation is tested by injecting assertions automatically derived from the specification (label 4). Assertions act as an oracle describing structural invariants that output models should satisfy, and are used for automated testing (labels 5, 6). They are also used to test whether a model can be used as input for the transformation.
Altogether, the contributions of this paper are the following. We propose a novel, visual specification language for M2M transformation, supporting both trace-based and
traceless styles. This language can be used in initial stages of the transformation development cycle (analysis and design) and enables the automated verification of implementations. To the best of our knowledge, no such language has been proposed before. We also provide a compilation of specifications into OCL. This enables the injection of correctness assertions in order to automate testing of implementations for QVT and other languages based on OMG standards, such as ATL or ETL. Since complex, large-scale transformations are frequently encoded using textual languages, we aim at keeping the best of visual and textual transformation languages. Finally, we report on an Eclipse-based prototype, and illustrate the injection of OCL assertions for testing ETL transformations.
**Paper organization.** Section II presents the syntax and formal semantics of our language. Section III shows its compilation into OCL. Section IV describes tool support and an example. Finally, Section V discusses related research and Section VI concludes.
II. A M2M SPECIFICATION LANGUAGE
Our language is used to test implementations, but it is independent of them. It supports both trace-based and traceless styles of specification, which allows one to express properties for implementation languages that use an explicit handling of traces (e.g. QVT-Core, TGGs), and also for languages that do not make use of them (e.g. QVT-R, ATL, ETL). The first style is closer to implementation, since it implies creating traces for each transformed element and its targets. Traceless specifications do not use traces, but a mechanism to express pre- and post-conditions which may refer to other parts of the specification. For instance, QVT-R uses `when` and `where` constructs to define pre- and post-conditions, respectively. Interestingly, both styles share similar semantics and can be formalized in a unified framework.
A. Constraint triples
A specification in our language is made of patterns. Here we extend our theory developed in [9] for the definition of both trace-based and traceless specifications. Patterns are based on the concept of triple graph [8] to represent the input, output and trace models (called source, target and correspondence). A triple graph \( G = (G_S, G_C, G_T, cs, ct) \) is made of two graphs \( G_S \) and \( G_T \) called source and target, related through a correspondence graph \( G_C \) and two graph morphisms \( cs: G_C \rightarrow G_S \) and \( ct: G_C \rightarrow G_T \). For trace-based patterns, \( G_C \) contains the traces between nodes in \( G_S \) and \( G_T \), while for traceless patterns, \( G_C \) is empty.
We use symbolic graphs [10] to describe the structure of the three graphs. Symbolic graphs are typed and have attributed nodes and edges, but instead of having a possibly infinite set of data values, they use a finite set of sorted variables \( \nu \), and a formula \( \alpha \) constraining the allowed values for these variables. Thus, constraint triples have the form \( C = (G, \nu, \alpha) \), where \( G \) is a triple graph whose data nodes are replaced by variables in \( \nu \). We use constraint triples to represent both usual models (called ground constraints, where \( \alpha \) restricts the attributes to take exactly one value), as well as constraints to be satisfied by models.
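To make the definition concrete, the following illustrative Python encoding mirrors the structure of a constraint triple \( C = (G, \nu, \alpha) \); the names are ours, not part of the paper's formalism:

```python
# Illustrative encoding of a constraint triple C = (G, nu, alpha);
# names are ours and not part of the paper's formalism.
from dataclasses import dataclass, field

@dataclass
class TripleGraph:
    source: set          # nodes of G_S
    target: set          # nodes of G_T
    correspondence: set  # nodes of G_C (empty for traceless patterns)
    cs: dict = field(default_factory=dict)  # morphism G_C -> G_S
    ct: dict = field(default_factory=dict)  # morphism G_C -> G_T

@dataclass
class ConstraintTriple:
    graph: TripleGraph
    variables: set  # the sorted variables nu replacing data values
    alpha: str      # formula constraining the variables' values
```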
**Example.** Fig. 2 shows examples of trace-based and traceless constraints, modelling part of the class-to-relational transformation [7]. They relate persistent UML packages with RDBMS schemas. The traceless constraint does not show the correspondence graph, as it is empty. In both cases the formula \( \alpha \) is shown at the bottom; we omit the conjunctions between terms, and place in the left compartment the terms containing only variables from the source graph, in the right compartment the terms containing only variables from the target, and in the middle the terms containing variables of both graphs. Note that “=” denotes equality, not assignment. We can use any logic for \( \alpha \), but here we use first-order logic with an OCL-like syntax.

**Constraint triples** are related through \( C \)-morphisms \( a: C_1 \rightarrow C_2 \), made of a triple graph morphism with the following conditions: the formula \( \alpha_2 \) of \( C_2 \) must imply the formula \( \alpha_1 \) of \( C_1 \), and the same implication is demanded for the source and target restrictions (\( \alpha_2|_S \Rightarrow \alpha_1|_S \) and \( \alpha_2|_T \Rightarrow \alpha_1|_T \)). Roughly, the source restriction \( \alpha|_S \) (resp. target restriction \( \alpha|_T \)) of a formula \( \alpha \) is the formula considering only the variables of the source (resp. target) graph [9]. The source restriction \( C|_S \) of a constraint triple \( C \) is made of the source graph and the formula \( \alpha|_S \), and similarly for the target restriction.
B. Trace-based Patterns
We use the previous concepts to build trace-based patterns. We define two kinds of pattern with the same structure but different interpretation: positive and negative (called P-patterns and N-patterns). A pattern is made of a main constraint triple $Q$ (to be satisfied in the case of P-patterns, and forbidden to occur in the case of N-patterns), and may contain a positive pre-condition $C$ and a set of negative pre-conditions $N_i$.
**Def. 1** (Trace-based pattern). A trace-based pattern $P = \langle q: C \rightarrow Q, \{N_i\}_{i \in I} \rangle$ consists of a main constraint triple $Q$, a (possibly empty) positive pre-condition $C$ related to $Q$ by a C-morphism $q$, and a set $\{N_i\}_{i \in I}$ of negative pre-conditions.
**Example.** Fig. 3 shows our concrete visual syntax for a trace-based P-pattern, which contains a main constraint (named `ClassTable`), a negative pre-condition (denoted by $N(\text{Parent})$), and a positive pre-condition (annotated on the main constraint with «param»). The pattern states that each persistent class should be related to a table, when the class has no parent (negative pre-condition) and if the class’ package is mapped to a schema (positive pre-condition).

Fig. 4 shows an N-pattern. While the P-pattern can be used to specify a transformation constructively, this N-pattern expresses an invariant, i.e. a verification property reflecting the beliefs of the designer about the properties that should hold in all related models. The N-pattern states that if a class has no two attributes with the same name (negative pre-condition $N(\text{AttrDup})$) then the associated table should not have duplicated columns (main constraint $N(\text{ColDup})$). This property is indeed false if attributes of children classes are stored in the same table and classes can redefine attributes.

Sometimes, M2M transformations are not designed to cope with every valid source model, but to work with a subset of models of the source language. Our patterns can also be used to express explicitly the conditions that source models must satisfy to qualify for the transformation.
As an example, Fig. 5 shows an N-pattern that forbids attribute redefinition (operation $\text{ancestors}$ returns the set of ancestors of a given class). Similarly, we can also use patterns to specify properties that any output model of the transformation should fulfill.

In M2M transformation, we are interested in knowing whether a target model is a correct translation of a source model, or vice-versa. For this purpose we interpret patterns either source-to-target or target-to-source (forwards/backwards). In the former case, we check that each forward-enabled pattern is actually satisfied, and similarly for the backward case. If two models satisfy the patterns both forwards and backwards, we say that they are synchronized. We start by defining enabledness of a pattern for the forward case; the backward case is symmetrical.
**Def. 2** (Forward pre-condition). Given a pattern $P$, its forward positive pre-condition $F^+(P) = C +_{C|_S} Q|_S$ is given by the pushout of its positive pre-condition $C$ and the source restriction of the main constraint $Q$, while its set of forward negative pre-conditions is $F^-(P) = \{q_i^S: F^+(P) \rightarrow N_i^S, \text{ with } N_i^S = C +_{C|_S} N_i|_S\}_{i \in I}$.
**Example.** Fig. 6 shows the forward positive pre-condition of pattern `ClassTable`, $F^+(\text{ClassTable})$, which results from merging $C$ (objects $p$, $s$ and $m$) and $Q|S$ (objects $p$, $c$ and their link) through $C|S$ (object $p$). This is called a pushout in category theory. In our case, pushouts are made like in triple graphs, and then taking the conjunction of the formulae [9]. The pattern has one forward negative pre-condition $N^S_1$, depicted to the left.

A pattern \( P \) is forward-enabled in a constraint triple \( M \) (not necessarily ground) if an occurrence of its forward positive pre-condition \( F^+(P) \) is found in \( M \), and no occurrence of its negative forward pre-conditions is found. A P-pattern (N-pattern) is satisfied at an enabled match, if the match can be (cannot be) extended to the pattern’s main constraint \( Q \).
**Def. 3** (Forward enabledness). Given a pattern \( P \) and a constraint triple \( M \), \( P \) is forward-enabled at \( m^S: F^+(P) \rightarrow M \), written \( M \vdash_{m^S,F} P \), iff \( \forall i \in I \) there is no \( n^S_i: N^S_i \rightarrow M \) s.t. (1) commutes in the diagram below.
\[
\begin{array}{ccccc}
N_i^S & \xleftarrow{\;q_i^S\;} & F^+(P) & \longrightarrow & Q \\
 & {\scriptstyle n_i^S}\,(1)\,\searrow & \big\downarrow {\scriptstyle m^S} & \swarrow\,(2)\,{\scriptstyle m} & \\
 & & M & &
\end{array}
\]
**Def. 4** (Forward satisfaction). Given a pattern \( P \), a constraint triple \( M \) and \( M \vdash_{m^S,F} P \), \( P \) is forward-satisfied at \( m^S \), written \( M \models_{m^S,F} P \), iff \( \exists m: Q \rightarrow M \) s.t. (2) commutes in the diagram above if \( P \) is a P-pattern, or iff \( \nexists m: Q \rightarrow M \) if \( P \) is an N-pattern. \( P \) is forward-satisfied in \( M \), written \( M \models_F P \), iff \( \forall m^S \) s.t. \( M \vdash_{m^S,F} P \), we have \( M \models_{m^S,F} P \).
**Example.** The pattern in Fig. 6 is forward-enabled in one occurrence, the one identifying objects \( p, m, s \) and \( c \) in \( F^+(ClassTable) \) and \( M \), as the forward negative pre-condition \( N_1^S \) is not found in \( M \). The pattern is actually forward-satisfied by \( M \) because this occurrence can be extended to \( Q \).
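A naive checking procedure for Defs. 3 and 4 can be sketched as follows. This is an illustrative Python outline, not our implementation: `find_matches` stands in for graph pattern matching plus formula entailment, and `extends` checks that an occurrence of \( Q \) commutes with the given match.

```python
def forward_satisfied(pattern, M, find_matches, extends):
    """Naive check of Defs. 3-4 over a (ground) constraint triple M."""
    for m in find_matches(pattern.fwd_positive_precondition, M):
        # Def. 3: m is enabled only if no forward negative
        # pre-condition N_i^S occurs commuting with m (square (1)).
        if any(find_matches(n, M, base=m)
               for n in pattern.fwd_negative_preconditions):
            continue                     # not enabled at this match
        # Def. 4: an enabled match must (P-pattern) or must not
        # (N-pattern) extend to the main constraint Q (square (2)).
        extendable = any(extends(m, q) for q in find_matches(pattern.Q, M))
        if (pattern.kind == 'P') != extendable:
            return False
    return True                          # holds at every enabled match
```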
Specifications are conjunctions of patterns, hence \( M \) forward-satisfies a specification \( S \) (\( M \models_F S \)) if it forward-satisfies all its patterns. Two models are synchronized if each is a correct forward/backward translation of the other: \( M \models_F S \) and \( M \models_B S \).
### C. Traceless Patterns
Similar to QVT-R, patterns in the second style of specification do not make use of traces, but provide constructs to check whether other patterns in the specification are satisfied (a when clause, acting as pre-condition), or to demand the satisfaction of other patterns (a where clause, acting as post-condition). Therefore they need a way to express dependencies between patterns. As for trace-based specifications, we consider P- and N-patterns having the same structure, although for N-patterns we demand where = \( \emptyset \) (we cannot ask for additional conditions on a non-existing occurrence of \( Q \)). Unlike trace-based patterns, we distinguish between top and non-top patterns. The former must always be satisfied, the latter only when invoked from the where clause of other patterns. Recall that we use the same underlying structure as for trace-based patterns, but in this case the correspondence graph is not shown because it is empty.
**Def. 5** (Traceless pattern). A traceless pattern \( R = \langle Q, N_{pre} = \{n_i: Q \rightarrow N_i\}_{i \in I}, \text{when}, \text{where}, \text{top} \rangle \) is made of a main constraint triple \( Q \), a set \( N_{pre} \) of negative pre-conditions, two sets when and where of dependencies for \( Q \), and a boolean flag top.
**Example.** Fig. 7 depicts three traceless patterns for the specification of the class-to-relational transformation. Pattern ClassTable is top and demands a table for each class without parents (negative pre-condition \( N(Parent) \)). The when clause makes this necessary only if the class’ package and the table’s schema satisfy the PackageSchema pattern (shown in Fig. 2). Moreover, if this is the case, then both patterns AttributeColumn and ParentClassTable should be satisfied for the class and table. While the former demands pairs of attributes and columns in the given class and table, the latter descends recursively through the inheritance hierarchy demanding the satisfaction of AttributeColumn at each child class.
The use of when and where clauses creates dependencies between patterns. In particular, given the main constraints \( Q_1 \) and \( Q_2 \) of two patterns, a dependency is given by \( Q_1 \xleftarrow{d_1} D \xrightarrow{d_2} Q_2 \), where \( D \) contains the elements passed as parameter in a when or where clause relating them. Thus, similar to the forward pre-condition notion for trace-based patterns, we define forward dependencies for traceless patterns generalizing the pushout construction in Def. 2 to an arbitrary number of dependencies (not just one). Then we take the amalgamation of all of them.
**Def. 6** (Forward dependency). Given a traceless pattern \( R \) and a single dependency \( D \xrightarrow{d} Q \) on its main constraint, the forward positive dependency is given by \( F_d^+(R) = F^+(\langle d: D \rightarrow Q, N_{pre} \rangle) \), while the set of forward negative dependencies is given by \( F_d^-(R) = F^-(\langle d: D \rightarrow Q, N_{pre} \rangle) \), see Def. 2.
Given \( R \) and a dependency set \( DS = \{Q \xleftarrow{d_j} D_j\}_{j \in J} \), the forward positive dependency is given by \( F_{DS}^+(R) = W \) as shown to the left of Fig. 8, with \( I \) the limit of \( \{p_j\} \), \( W \) the colimit of \( \{i_j\} \), and \( u \) existing due to the universal property of the limit. The set of forward negative dependencies is \( F_{DS}^-(R) = \bigcup_{d_j \in DS} \{W \rightarrow N_i^W\}_i \), with \( N_i^W \) a pushout calculated as shown to the right of Fig. 8.
Figure 8. Forward dependency set (left: the colimit \( W \) of the forward positive dependencies \( F_{d_0}^+(R), \ldots, F_{d_k}^+(R) \); right: the pushout \( N_i^W \) and the commuting squares (1) and (2) used in Defs. 7 and 8).
Next we define the conditions for a traceless pattern \( R \) to be forward-enabled. As in the trace-based case, we have to find an occurrence of \( F_{when}^+(R) \) and no occurrence of \( F_{when}^-(R) \), but here there is no positive pre-condition but a set of when dependencies. Thus, for a traceless pattern to be forward-enabled, we build \( F_{when}^+(R) \) and demand each when dependency to be satisfied.
**Def. 7** (Forward enabled). \( R \) is forward-enabled in a constraint triple \( M \) at match \( m^S: W \rightarrow M \) with \( W = F_{when}^+(R) \), written \( M \vdash_{m^S} R \), iff there is no \( m_i: N_i^W \rightarrow M \) with (1) commuting to the right of Fig. 8, and for each when dependency \( Q_j \xleftarrow{f_j} D_j \xrightarrow{d_j} Q \), \( SAT^F(R_j, m^S \circ e_j \circ g_j, f_j) \) holds (see Def. 8).
**Example.** Fig. 9 shows a constraint \( M \) where the pattern `ClassTable` is enabled: there is an occurrence of \( W = F_{when}^+(\text{ClassTable}) \) (made of objects \( p, s \) and \( c \)) for which the pattern `PackageSchema` is satisfied for the commuting dependency. The forward positive dependency \( F_{d_1}^+(\text{PackageSchema}) \) is calculated taking \( D_1 \xrightarrow{d_1} Q_1 \). For simplicity we have omitted the negative pre-condition, which prevents the pattern from being enabled in class \( c2 \). We demand non-cyclic dependencies between patterns, as otherwise we may obtain an infinite loop when testing the when clause.
We define the forward satisfaction of traceless patterns using a predicate \( SAT^F \) with three parameters: (1) the pattern \( R \) to be checked, (2) a morphism \( D \to M \) with which its forward positive dependency \( F_{when}^+(R) \) has to commute, and (3) a dependency \( D \to Q \), which may come from a caller where section, and is actually treated as an additional pre-condition in the when clause. In this way, the predicate may demand the satisfaction of other patterns at certain matches that are passed as parameters from invoking when or where clauses, the former coming from Def. 7, and the latter from recursive calls in Def. 8.
**Def. 8** (Forward satisfaction). Given a P-pattern \( R \), the predicate \( SAT^F(R, m_D: D \rightarrow M, d: D \rightarrow Q) \) holds iff: \( \forall m^S \in \{m^S: W \rightarrow M \mid m_D = m^S \circ e, \text{ with } W = F_{when \cup d}^+(R), M \vdash_{m^S} R, D \xrightarrow{e} W\} \), \( \exists m: Q \rightarrow M \) s.t. (2) commutes to the right of Fig. 8, and for each where dependency \( Q_k \xleftarrow{f_k} D_k \xrightarrow{d_k} Q \), \( SAT^F(R_k, m \circ d_k, f_k) \) holds.

If \( R \) is an N-pattern, everything is the same, but we demand the non-existence of \( m: Q \rightarrow M \) s.t. (2) commutes to the right of Fig. 8 (and nothing else, as where \( = \emptyset \)).
**Example.** The forward-enabled occurrence of pattern `ClassTable` in Fig. 9 is satisfied because we find an occurrence of the pattern’s main constraint \( Q \) and the where dependencies are satisfied: (i) `AttributeColumn` is trivially satisfied as \( c \) has no attributes, and (ii) `ParentClassTable` is satisfied as we find one occurrence of it but the child class has no attributes.
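The recursive nature of \( SAT^F \) can be seen in the following Python sketch, written under the same assumptions as before (hypothetical `find_matches` helper and pattern fields; dependencies are acyclic, so the recursion terminates).

```python
def sat_forward(R, M, m_D, find_matches):
    """Sketch of SAT^F (Def. 8). m_D plays the role of the morphism
    D -> M passed by a calling when/where clause (empty at top level)."""
    W = R.fwd_positive_dependency          # F+_{when u d}(R)
    for m in find_matches(W, M, agree=m_D):
        # Def. 7: the match is enabled only if every when-dependency
        # holds recursively at the objects bound by m.
        if not all(sat_forward(dep.pattern, M, dep.restrict(m), find_matches)
                   for dep in R.when):
            continue
        occurrences = find_matches(R.Q, M, agree=m)
        if R.kind == 'N':
            if occurrences:                # forbidden occurrence of Q
                return False
            continue
        # P-pattern: some occurrence of Q must exist whose
        # where-dependencies hold recursively.
        if not any(all(sat_forward(dep.pattern, M, dep.restrict(q), find_matches)
                       for dep in R.where)
                   for q in occurrences):
            return False
    return True
```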
The satisfaction of a traceless specification demands the satisfaction of all its top patterns.
**Def. 9** (Specification forward satisfaction). Given a traceless specification \( S \) and a constraint triple \( M \), \( M \models_F S \) iff \( SAT^F(R, \emptyset \rightarrow M, \emptyset \rightarrow Q) \) holds for every top pattern \( R \in S \).
Satisfaction of traceless specifications can be tested on traced models (i.e. triple graphs where the correspondence graph is not empty). This makes such specifications more independent of the implementation mechanism, which may or may not be based on traces. On the contrary, trace-based specifications require traced models.
III. COMPILATION INTO OCL
In this section we provide a practical way of testing satisfaction of our patterns through their compilation into OCL [11]. Our aim is to generate invariants that automatically check the satisfaction of specifications by models, and which can be injected into transformation implementations for testing purposes. We choose OCL because it is an OMG standard and can be integrated in transformation languages of widespread use, such as QVT, ATL or ETL. We start by showing the compilation of traceless patterns, as the compilation of trace-based ones can be expressed in terms of the former.
For traceless patterns, we generate one set of operations from each (P- and N-) pattern, which only differ in their parameters. In particular, one operation is generated from each pattern call in a when or where clause, and one additional operation without parameters is generated for top patterns. We only show the compilation schema for the operation without parameters, since the others are built similarly (but omitting finding a match for the objects received as parameter). We assume just one pattern in the when and where clauses for readability reasons, and use the following notation:
- **p**: name of compiled pattern
- **when-p**: name of pattern in the when clause
- **where-p**: name of pattern in the where clause
- **when-p.param**, **where-p.param**: objects in the call to when-p and where-p, respectively
- **check-p(...)**: OCL expression that checks the graphical and attribute conditions imposed by p on the objects received as parameter
- **check-n(...)**: like check-p, but checks a negative pre-condition n instead of p
The scheme of the OCL code for checking the forward satisfaction of a traceless P-pattern is:
```java
operation sat_p () : Boolean {
return
  -- a) for each occurrence of objects a1,...,am
  --    in when-p.param
  a1.type.allInstances().forAll(a1 | ...
  am.type.allInstances().forAll(am |
    when-p(a1,...,am) implies
    -- b) for each occurrence of objects b1,...,bn
    --    in the source of p not in when-p.param
    b1.type.allInstances().forAll(b1 | ...
    bn.type.allInstances().forAll(bn |
      -- c) if it does not violate any negative pre-condition
      --    of p (being c1,...,co the objects in the negative
      --    pre-condition different from the a and b objects)
      not (c1.type.allInstances().exists(c1 | ...
           co.type.allInstances().exists(co |
             check-n(a1,...,am,b1,...,bn,c1,...,co) )...))
      -- d) then there must be an occurrence of p (being
      --    d1,...,dp the objects in the target of p which
      --    are not in when-p.param)
      implies
      d1.type.allInstances().exists(d1 | ...
      dp.type.allInstances().exists(dp |
        check-p(a1,...,am,b1,...,bn,d1,...,dp)
        -- e) and satisfies where-p for the objects e1,...,eq
        --    in where-p.param (already matched by a, b and d)
        and where-p(e1,...,eq) )...) )...) )...);
}
```
In the previous operation, fragments a), e) and c) are omitted if the pattern has an empty when clause, an empty where clause, or an empty set of negative pre-conditions, respectively. The compilation schema for N-patterns is similar to that for P-patterns, but the existential quantifier in fragment d) is preceded by not. Finally, the compilation for backward satisfaction simply swaps source and target.
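Mechanising this schema amounts to emitting nested `forAll`/`exists` quantifiers from a pattern description. The following Python sketch illustrates the idea for P-patterns; the `p` fields are hypothetical (our actual generator targets EOL, see Section IV):

```python
def compile_p_pattern(p):
    """Emit the nested forAll/exists skeleton of the compilation schema
    for a traceless P-pattern; `p` is a hypothetical description object."""
    parts, close = [], 0
    for o in p.when_params:                              # fragment a)
        parts.append(f'{o}.type.allInstances().forAll({o} | ')
        close += 1
    if p.when_call:
        parts.append(f'{p.when_call} implies ')
    for o in p.source_objects:                           # fragment b)
        parts.append(f'{o}.type.allInstances().forAll({o} | ')
        close += 1
    if p.neg_objects:                                    # fragment c)
        inner = ''.join(f'{o}.type.allInstances().exists({o} | '
                        for o in p.neg_objects)
        parts.append('not (' + inner + p.check_neg
                     + ')' * len(p.neg_objects) + ') implies ')
    for o in p.target_objects:                           # fragment d)
        parts.append(f'{o}.type.allInstances().exists({o} | ')
        close += 1
    parts.append(p.check_main)
    if p.where_call:                                     # fragment e)
        parts.append(f' and {p.where_call}')
    return ('operation sat_' + p.name + ' () : Boolean {\n  return '
            + ''.join(parts) + ')' * close + ';\n}')
```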
**Example.** The compiled code for the traceless pattern `ClassTable` is:
```java
operation sat_ClassTable () : Boolean {
return
  Package.allInstances().forAll(p |
  Schema.allInstances().forAll(s |
    PackageSchema(p,s) implies
    Class.allInstances().forAll(c |
      ((p.class.includes(c) and c.persistent=true)
       and not Class.allInstances().exists(pa |
             c.parent.includes(pa)))
      implies
      Table.allInstances().exists(t |
        s.table.includes(t) and t.name='T_'+c.name
        and AttributeColumn(c,t) ))));
}
```
The compilation schema of trace-based patterns is much simpler: (i) only one operation without parameters is generated from each pattern; (ii) fragments a) and b) are merged so that the resulting fragment looks for all matches of the pattern pre-condition (i.e. all elements in the positive pre-condition and the source of the main constraint which satisfy the graphical and attribute constraints); and (iii) no fragment e) is generated. Note that in this case the generated OCL conditions actually check that traces exist when they appear in a pattern, while for traceless patterns this is not the case, making the latter independent of the implementation mechanism.
**Example.** The operation derived from the trace-based pattern `ClassTable` is:
```java
operation sat_ClassTable () : Boolean {
return
  Package.allInstances().forAll(p |
  Schema.allInstances().forAll(s |
  Class.allInstances().forAll(c |
  P2S.allInstances().forAll(m |
    (p.class.includes(c) and c.persistent=true and
     m.source=p and m.target=s
     and not Class.allInstances().exists(pa |
           c.parent.includes(pa)))
    implies
    Table.allInstances().exists(t |
      s.table.includes(t) and t.name='T_'+c.name
      and C2T.allInstances().exists(n |
        n.source=c and n.target=t) ))))));
}
```
As stated previously, this OCL code can be used in many ways. The next section shows an application to automated testing of transformation implementations.
**IV. TOOL SUPPORT AND EXAMPLE**
We have built an Eclipse tool to define pattern specifications using a visual concrete syntax. It has been developed with GMF, and includes a code generator to synthesise EOL code [11] (an extension of OCL) for the chosen scenario (forwards/backwards, either for traceless or trace-based specifications). This code can be injected into ETL transformation implementations in two ways: (i) assertions coming from patterns expressing conditions on the source model, like the pattern in Fig. 5, are tested before executing the transformation; (ii) patterns expressing expected properties of the target model, as well as verification or functional properties of the transformation, are tested after executing the transformation. Hence, given an input model, it is first checked whether it qualifies for the transformation. If it does, the transformation is executed and the user is informed of the patterns that are or are not satisfied, and of the rules that should be revised.
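The overall testing workflow can be summarised by the following Python sketch; `run_transformation` and the pattern API are hypothetical stand-ins for the ETL engine and the generated EOL assertions.

```python
def test_transformation(source_model, run_transformation, spec):
    """Sketch of the tool's two-phase testing workflow."""
    # (i) source-side patterns act as guards before execution
    failed_guards = [p.name for p in spec.source_patterns
                     if not p.check(source_model)]
    if failed_guards:
        return {'qualified': False, 'failed': failed_guards}
    # (ii) the remaining patterns act as an oracle after execution
    target_model, traces = run_transformation(source_model)
    failed = [p.name for p in spec.target_and_verification_patterns
              if not p.check(source_model, target_model, traces)]
    # rules annotated with @patterns can be mapped back for feedback
    return {'qualified': True, 'failed': failed,
            'suspect_rules': spec.rules_for(failed)}
```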
Fig. 10 shows a verification traceless P-pattern defined in the tool. The pattern specifies how to handle multiple inheritance. In particular, it seeks two top-level persistent classes $c_1$ and $c_2$, ancestors of a third class $c$. The fact that $c_1$ and $c_2$ are top-level is checked by the negative pre-conditions $N(\text{Ancestor1})$ and $N(\text{Ancestor2})$, whereas the fact that $c_1$ and $c_2$ are ancestors of $c$ is checked by the operation $\text{ancestors}$ in the formula. Then, for each attribute $a$ of $c$, the pattern demands a matching column in both tables $t_1$ and $t_2$. This is checked in the where section by calling the functional requirement pattern $\text{AttributeColumn}$ for each table. Moreover, the when section checks that $t_1$ and $t_2$ are associated with $c_1$ and $c_2$ by calling $\text{ClassTable2}$ (equal to $\text{ClassTable}$ but without a where section). An additional verification pattern checks that if a top-level class does not have attributes with the same name (e.g. no redefined attributes in children), its associated table does not have columns with the same name.

After defining the patterns with the functional requirements and verification properties, we can generate EOL code to verify a particular transformation implementation. Fig. 11 shows part of the ETL code that implements the forward transformation. The implementation is a refinement of the functional specification, as in addition it creates primary and foreign keys and considers object references. This implementation is incorrect because it does not consider multiple inheritance: when an attribute is translated into a column, the column is placed in the table associated to the top-most class (line `c.table ::= a.owner.getTopClass()`). However, the operation `getTopClass` assumes single inheritance and returns a unique class (and the `::=` operator resolves its associated table). Therefore, this implementation fails when tested with models having multiple inheritance, which is detected by our patterns, as the pop-up window in Fig. 11 shows. The feedback mentions the rules to be revised because these are annotated with the patterns they address (line `@patterns=...`).

It is interesting to note that a specification expresses requirements, and is independent of how the implementation actually performs its job. In our example, the implementation does not use recursion on children classes (like pattern $\text{ParentClassTable}$ does), but a method to obtain the table of the top-most class. Second, we found it useful to classify patterns as functional or verification patterns, where the latter usually depend on the former. Third, functional patterns do not need to specify the behaviour of the complete transformation or cover all requirements, but only the most critical ones (in our example, we did not address primary or foreign keys, nor references). Moreover, we do not even have to use the same meta-model for specification and implementation: the meta-model of the implementation can be a refinement of the specification one. Finally, specifications are independent of the implementation language, and they can be used for testing implementations written in different languages. In particular, the approach is useful to test large textual implementations, and we used it for the run-time verification of a transformation of more than 1600 lines of code in the context of a European project.
V. RELATED WORK
Our traceless language is inspired by QVT-R [7], but enriched with N-patterns (i.e. non-constructive primitives),
graphical negative pre-conditions and bidirectional attribute computations. Whereas QVT-R implementations are able to execute parts of the standard [7], we are working on execution support for functional patterns, but there are some issues. First, our attribute computations are bidirectional, which means doing either algebraic manipulation of formulae or using constraint solving when the transformation is given a direction. Bidirectional conditions like \( X+Y=Z+V \), which involve variables of source and target elements, are not supported by existing QVT-R implementations (assignments are supported, but not general formulae). The non-constructive nature of N-patterns would also need constraint solving. Finally, specifications may be loose: a source model may have several correct target models. Implementations can refine this behaviour by deterministically choosing one solution.
The formal semantics of our traceless language is immediately applicable to QVT-R. There are few attempts to give formal semantics to QVT-R. In [12], the authors compile simplified QVT-R into TGGs. In [13], a game-theoretic semantics for check-only QVT-R is given, but the semantics is given in an abstract way, neglecting issues like bindings, pattern matching and parameter passing. There are a few QVT-R concepts we do not cover yet though, like having arbitrary formulae in when and where instead of sets.
Even though there are many languages to implement transformations, very few works propose higher-level notations for transformation design [14]. To our knowledge, no language has been proposed for the specification of implementation properties, as we do in this paper. Even though there are languages for expressing bi-directional transformations, they are unsuitable as formal specification languages. Some of them, like QVT-R, have no formal semantics. Others, like TGGs, are based on rules, and hence they are not suitable for testing, where a language based on constraints is more appropriate.
Finally, our work also contributes to the area of transformation testing by providing a language that simplifies the specification of oracles to automate the comparison of the actual and expected results of transformations, where current approaches require the manual specification of complex OCL constraints [15].
VI. CONCLUSIONS AND FUTURE WORK
In this paper we have presented a high-level M2M specification language, its formal semantics, its compilation into OCL, tool support, and its application for M2M transformation testing. Concerning the latter, we have shown the benefits of a visual specification language to guard the correctness of large, textual transformation implementations. Moreover, our traceless language has a formal algebraic semantics applicable to QVT-R.
We are currently working on executability of specifications by combining transformation languages with constraint solvers. However, in some scenarios, implementations coded by hand may be more efficient or scalable. We are also working in the analysis of specifications, studying the strengths and equivalence of both styles of specification, and on methods to derive test cases from specifications.
ACKNOWLEDGMENT
Work funded by the Spanish Ministry of Science and Innovation through project TIN2008-02081 and mobility grants JC2009-00015 and PR2009-0019; and by the R&D programme of the Madrid Community, project S2009/TIC-1650.
REFERENCES
RefNet: A Reference-Aware Network for Background Based Conversation
Chuan Meng,1 Pengjie Ren,2* Zhumin Chen,1* Christof Monz,2 Jun Ma,1 Maarten de Rijke2
1Shandong University, Qingdao, China, 2University of Amsterdam, Amsterdam, The Netherlands
Abstract
Existing conversational systems tend to generate generic responses. Recently, Background Based Conversations (BBCs) have been introduced to address this issue. Here, the generated responses are grounded in some background information. The proposed methods for BBCs are able to generate more informative responses, however, they either cannot generate natural responses or have difficulties in locating the right background information. In this paper, we propose a Reference-aware Network (RefNet) to address both issues. Unlike existing methods that generate responses token by token, RefNet incorporates a novel reference decoder that provides an alternative way to learn to directly select a semantic unit (e.g., a span containing complete semantic information) from the background. Experimental results show that RefNet significantly outperforms state-of-the-art methods in terms of both automatic and human evaluations, indicating that RefNet can generate more appropriate and human-like responses.
1 Introduction
Dialogue systems have attracted a lot of attention recently (Huang, Zhu, and Gao 2019). Sequence-to-sequence models (Sutskever, Vinyals, and Le 2014; Lei et al. 2018) are an effective framework that is commonly adopted in existing studies. However, a problem of sequence-to-sequence based methods is that they tend to generate generic and non-informative responses that provide little information (Gao et al. 2019).
Previous research has proposed various methods to alleviate the issue, such as adjusting objective functions (Li et al. 2016; Jiang et al. 2019), incorporating external knowledge (Ghazvininejad et al. 2018; Parthasarathi and Pineau 2018; Dinan et al. 2019), etc. Recently, Background Based Conversations (BBCs) have been proposed for generating more informative responses that are grounded in some background information (Zhou, Prabhumoye, and Black 2018; Moghe et al. 2018). As shown in Fig. 1, unlike previous conversational settings (Serban et al. 2016), in a BBC background material (e.g., a plot or review about a movie) is supplied to promote topic-specific conversations.
*Corresponding author
Existing methods for BBCs can be grouped into two categories, generation-based methods (e.g., GTTP (See, Liu, and Manning 2017)) and extraction-based methods (e.g., QANet (Yu et al. 2018)). Generation-based methods generate the response token by token, so they can generate natural and fluent responses, generally. However, generation-based methods suffer from two issues. First, they are relatively ineffective in leveraging background information. For example, for the case in Fig. 1, S2SA does not leverage background information at all. Second, they have difficulties locating the right semantic units in the background information. Here, a semantic unit is a span from the background information that expresses complete semantic meaning. For example, in Fig. 1, the background contains many semantic units, e.g., “mv movie + tv awards 2004 best cameo” and “scary movie 4.” GTTP uses the wrong semantic unit “scary movie 4” to answer the question by “human 2.” Moreover, because generation-based methods generate the response one token at a time, they risk breaking a complete semantic unit, e.g., “scary movie 4” is split by a comma in the response of GTTP in Fig. 1. The reason is that generation-based methods lack a global perspective, i.e., each decoding step only focuses on a single (current) token and does not consider the tokens to be generated in the following steps. Extraction-based methods extract a span from the background as their response and are relatively good at locating
the right semantic unit. But because of their extractive nature, they cannot generate natural conversational responses; see, e.g., the response of QANet in Fig. 1.

Figure 1: Background Based Conversation (BBC).

Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
We propose a Reference-aware Network (RefNet) to address above issues. RefNet consists of four modules: a background encoder, a context encoder, a decoding switcher, and a hybrid decoder. The background encoder and context encoder encode the background and conversational context into representations, respectively. Then, at each decoding step, the decoding switcher decides between reference decoding and generation decoding. Based on the decision made by the decoding switcher, the hybrid decoder either selects a semantic unit from the background (reference decoding) or generates a token otherwise (generation decoding).
In the latter case, the decoding switcher further determines whether the hybrid decoder should predict a token from the vocabulary or copy one from the background. Besides generating the response token by token, RefNet also provides an alternative way to learn to select a semantic unit from the background directly. Experiments on a BBC dataset show that RefNet significantly outperforms state-of-the-art methods in terms of both automatic and, especially, human evaluations.
Our contributions are as follows:
- We propose a novel architecture, RefNet, for BBCs by combining the advantages of extraction-based and generation-based methods. RefNet can generate more informative and appropriate responses while retaining fluency.
- We devise a decoding switcher and a hybrid decoder to adaptively coordinate between reference decoding and generation decoding.
- Experiments show that RefNet outperforms state-of-the-art models by a large margin in terms of both automatic and human evaluations.
2 Related work
We survey two types of related work on BBCs: generation-based and extraction-based methods.
2.1 Generation-based methods
Most effective generation-based models are based on sequence-to-sequence modeling (Sutskever, Vinyals, and Le 2014) and an attention mechanism (Bahdanau, Cho, and Bengio 2015). The proposed methods have achieved promising results on different conversational tasks (Serban et al. 2016). However, response informativeness is still an urgent challenge; these approaches prefer generating generic responses such as "I don’t know" and "thank you", which make conversations dull (Gao et al. 2019). Various methods have been proposed to improve response informativeness, such as adjusting objective functions (Li et al. 2016; Jiang et al. 2019), incorporating latent topic information (Xiong et al. 2017), leveraging outside knowledge bases (Liu et al. 2018; Zhou et al. 2018) and knowledge representation (Ghazvininejad et al. 2018; Parthasarathi and Pineau 2018; Lian et al. 2019), etc.
Recently, Background Based Conversations (BBCs) have been proposed for generating more informative responses by exploring related background information (Zhou, Prabhumoye, and Black 2018; Dinan et al. 2019). Moghe et al. (2018) build a dataset for BBC and conduct experiments with state-of-the-art generation-based methods. They show that generation-based methods can generate fluent, natural responses, but have difficulty in locating the right background information. Therefore, most recent studies try to address this issue (Li et al. 2019; Qin et al. 2019). Zhang, Ren, and de Rijke (2019) introduce a pre-selection process that uses dynamic bi-directional attention to improve background information selection. Liu et al. (2019) propose an augmented knowledge graph based chatting model that transforms background information into a knowledge graph. However, generation-based models still cannot solve inherent problems effectively, such as breaking complete semantic units and generating shorter responses.
2.2 Extraction-based methods
Extraction-based methods have originally been proposed for Reading Comprehension (RC) tasks (Rajpurkar et al. 2016), where each question can be answered by a right span in a given passage. Wang and Jiang (2017) combine match-LSTM and a pointer network (Vinyals, Fortunato, and Jaitly 2015) to predict the boundary of the answer. Seo et al. (2016) propose BiDAF, which uses a variant co-attention architecture (Xiong, Zhong, and Socher 2017) to enhance the extraction result. Wang et al. (2017) propose R-Net, which introduces a self-matching mechanism. Yu et al. (2018) propose QANet, which devises an encoder consisting exclusively of convolution and self-attention. For BBCs, Moghe et al. (2018) show that extraction-based methods are better at locating the right background information than generation-based methods. However, current extraction-based methods are specifically designed for RC tasks. They are not suitable for BBCs for two reasons: First, BBCs usually do not have standard factoid questions like those in RC tasks. Second, BBCs require that the responses are fluent and conversational, which cannot be met by rigid extraction; see Fig. 1.
Unlike the work summarized above, we propose RefNet to combine the advantages of generation-based and extraction-based methods while avoiding their shortcomings. The main challenge that RefNet addresses is how to design an effective neural architecture that is able to refer to the right background information at the right time in the right place of a conversation while minimizing the influence on response fluency.
3 Reference-aware Network
Given a background in the form of free text \( K = (k_1, k_2, \ldots, k_{L_K}) \) with \( L_K \) tokens and a current conversational context \( C_\tau = (\ldots, X_{\tau-3}, X_{\tau-2}, X_{\tau-1}) \), the task of BBC is to generate a response \( X_\tau \) at turn \( \tau \). Each \( X_\tau \) is a sequence of \( L_{X_\tau} \) units, i.e., \( X_\tau = (x^1, x^2, \ldots, x^t, \ldots, x^{L_{X_\tau}}) \), where the unit \( x^t \) at timestamp \( t \) is either a single token \( \{x^t_i\}_{i=1}^{1} \) or a semantic unit \( \{x^t_i\}_{i=1}^{n} \) containing \( n \) tokens.
RefNet consists of four modules: background encoder, context encoder, decoding switcher, and hybrid decoder; see Fig. 2.
Figure 2: Overview of RefNet: the context and background encoders feed a decoding switcher and a hybrid decoder, which either selects a span from the background (reference decoding) or emits tokens one at a time (generation decoding).
**Hybrid decoder.** If \( x^t \) is a semantic unit \( \{x^t_i\}_{i=1}^{n} \), it is generated in reference decoding mode with the probability modeled as:

\[
P(x^t \mid x^{<t}, C_\tau, K) = P(r)\,P(x^t \mid r), \qquad (4)
\]

where \( P(r) \) is the reference decoding probability (see §3.3) and \( P(x^t \mid r) \) is the probability of generating \( x^t \) under reference decoding \( r \). If \( x^t \) is a token \( \{x^t_i\}_{i=1}^{1} \), then \( x^t \) is generated in generation decoding mode with the probability modeled as:

\[
P(x^t \mid x^{<t}, C_\tau, K) = P(g_p)\,P(x^t \mid g_p) + P(g_c)\,P(x^t \mid g_c), \qquad (5)
\]

where \( P(g_p) \) is the predicting generation decoding probability and \( P(g_c) \) is the copying generation decoding probability (see §3.3); \( P(x^t \mid g_p) \) and \( P(x^t \mid g_c) \) are the probabilities of generating \( x^t \) under \( g_p \) and \( g_c \), respectively.
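As a minimal illustration (not the released implementation), the per-step mixture of Eqs. 4 and 5 can be written as follows; all names and shapes are our own:

```python
def hybrid_step_prob(p_switch, p_ref_span, p_vocab, p_copy, unit):
    """P(unit | history) at one decoding step, cf. Eqs. 4-5.
    p_switch = (P(r), P(g_p), P(g_c)) from the decoding switcher;
    p_ref_span maps (start, end) spans to P(span | r);
    p_vocab / p_copy map tokens to P(token | g_p) / P(token | g_c)."""
    p_r, p_gp, p_gc = p_switch
    if isinstance(unit, tuple):          # a semantic unit (background span)
        return p_r * p_ref_span.get(unit, 0.0)                           # Eq. 4
    return p_gp * p_vocab.get(unit, 0.0) + p_gc * p_copy.get(unit, 0.0)  # Eq. 5
```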
**Reference decoding.** Within *reference decoding*, the probability of generating the semantic unit \( \{x^t_i\}_{i=1}^{n} \) is evaluated as follows:

\[
P(x^t = \{x^t_i\}_{i=1}^{n} \mid r) = \alpha^{r_1}_{t,\mathrm{start}} \, \alpha^{r_2}_{t,\mathrm{end}}, \qquad (6)
\]

where \( \alpha^{r_1}_{t,\mathrm{start}} \) and \( \alpha^{r_2}_{t,\mathrm{end}} \) are the probabilities of the start and end tokens of \( \{x^t_i\}_{i=1}^{n} \) (from the background), respectively, which are estimated by two-hop pointers with respect to the context-aware background hidden state sequence \( H^m \). The start probability \( \alpha^{r_1}_{t,j} \) is calculated by the first-hop pointer, as shown in Eq. 7:

\[
o_t^1 = W_{o_1} [h_t^s; c_t^{sc}; c_t^{sm}] + b_{o_1}, \quad
s^{r_1}_{t,j} = v_r^\top \tanh(W_r h_j^m + U_r o_t^1 + b_r), \quad
\alpha^{r_1}_{t,j} = \frac{\exp(s^{r_1}_{t,j})}{\sum_{j'=1}^{L_K} \exp(s^{r_1}_{t,j'})}, \qquad (7)
\]
where \( W_{o_1} \), \( W_r \), \( U_r \), \( v_r \), \( b_{o_1} \), and \( b_r \) are parameters. \( h_t^s \) is the decoding hidden state vector, whose update scheme is detailed in §3.4. \( c_t^{sc} \) and \( c_t^{sm} \) are calculated in a similar way as Eq. 3, with \( h_t^s \) attentively reading \( H^c \) and \( H^m \), respectively. The end probability \( \alpha^{r_2}_{t,j} \) is calculated by the second-hop pointer, as shown in Eq. 8:
\[
c_t^r = \sum_{i=1}^{L_K} \alpha^{r_1}_{t,i} h_i^m, \quad
o_t^2 = W_{o_2} [o_t^1; c_t^r] + b_{o_2}, \quad
s^{r_2}_{t,j} = v_r^\top \tanh(W_r h_j^m + U_r o_t^2 + b_r), \quad
\alpha^{r_2}_{t,j} = \frac{\exp(s^{r_2}_{t,j})}{\sum_{j'=1}^{L_K} \exp(s^{r_2}_{t,j'})}, \qquad (8)
\]

where \( W_{o_2} \) and \( b_{o_2} \) are parameters. *Reference decoding* adopts soft pointers \( \alpha^{r_1} \) and \( \alpha^{r_2} \) to select semantic units, so it does not interfere with automatic differentiation during training.
**Generation decoding.** Within *predicting generation decoding*, the probability of predicting the token \( x^t \) from the vocabulary is estimated as follows:

\[
P(x^t = \{x^t_i\}_{i=1}^{1} \mid g_p) = \mathrm{softmax}(W_g \, o_t^1 + b_{g_p}), \qquad (9)
\]
where $W_g$ and $b_{g_p}$ are parameters and the vector $o_t^1$ is the same as in Eq. 7.
Within *copying generation decoding*, the probability of copying the token \( x^t \) from the background is estimated as follows:

\[
P(x^t = \{x^t_i\}_{i=1}^{1} \mid g_c) = \sum_{i:\, k_i = x^t} \alpha^{sm}_{t,i}, \qquad (10)
\]

where \( \alpha^{sm}_{t,i} \) is the attention probability distribution over \( H^m \), produced by the same attention process as \( c_t^{sm} \) in Eq. 7.
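The copy distribution of Eq. 10 simply aggregates attention mass over matching background positions, as in this small illustrative sketch:

```python
def copy_prob(token, background_tokens, attn_sm):
    """P(token | g_c), Eq. 10: sum alpha^{sm}_{t,i} over every background
    position i whose token k_i equals the target token."""
    return sum(a for k, a in zip(background_tokens, attn_sm) if k == token)
```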
### 3.3 Decoding switcher
The decoding switching probabilities $P(r)$, $P(g_p)$ and $P(g_c)$ are estimated as follows:
$$[P(r), P(g_p), P(g_c)] = \mathrm{softmax}(f_t), \qquad (11)$$
where $f_t$ is a fusion vector, which is computed through a linear transformation in Eq. 12:
$$f_t = W_f [h_t^s; c_t^{sc}; c_t^{sm}] + b_f, \qquad (12)$$
where $W_f$ and $b_f$ are parameters, and $h_t^s$ is the decoding state (see §3.4).
During testing, at each decoding step, we first compute $P(r)$ and $P(g) = P(g_p) + P(g_c)$. If $P(r) \geq P(g)$, we use Eq. 4 to generate a semantic unit, otherwise we use Eq. 5 to generate a token.
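The test-time rule can be summarised in a few lines of Python (illustrative names):

```python
def decode_step(p_switch, best_span, best_token):
    """Pick the decoding mode at test time: emit a background span via
    Eq. 4 if P(r) >= P(g) = P(g_p) + P(g_c), else a token via Eq. 5."""
    p_r, p_gp, p_gc = p_switch
    if p_r >= p_gp + p_gc:
        return ('reference', best_span)
    return ('generation', best_token)
```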
### 3.4 State updating
The decoding state update depends on whether the generated unit is a token or a semantic unit. If \( x^{t-1} \) is a token, then

\[
h_t^s = \mathrm{GRU}(h_{t-1}^s, [e(x^{t-1}); c_{t-1}^{sc}; c_{t-1}^{sm}]). \qquad (13)
\]

If \( x^{t-1} \) is a span containing \( n \) tokens, Eq. 13 is applied \( n \) times, consuming one token at a time, so that the last state encodes the full semantics of the span; see \( h_t^s \) to \( h_{t+n}^s \) in Fig. 2.

The decoding state is initialized using a linear layer with the last states of \( H^c \) and \( H^m \) as input:

\[
h_0^s = \mathrm{ReLU}(W_{hs} [h_{L_{C_\tau}}^{c}; h_{L_K}^{m}] + b_{hs}), \qquad (14)
\]

where \( W_{hs} \) and \( b_{hs} \) are parameters and \( \mathrm{ReLU} \) is the rectified linear activation.
### 3.5 Training
Our goal is to maximize the prediction probability of the target response given the context and background. We have three objectives, namely generation loss, reference loss and switcher loss.
The *generation loss* is defined as

\[
L_g(\theta) = -\frac{1}{M} \sum_{\tau=1}^{M} \sum_{t=1}^{L_{X_\tau}} \log P(x^t \mid x^{<t}, C_\tau, K), \qquad (15)
\]

where \( \theta \) are all the parameters of RefNet and \( M \) is the number of training samples for a given background \( K \). In \( L_g(\theta) \), each \( x^t \) is a token \( \{x^t_i\}_{i=1}^{1} \).

The *reference loss* is defined as

\[
L_r(\theta) = -\frac{1}{M} \sum_{\tau=1}^{M} \sum_{t=1}^{L_{X_\tau}} I(x^t) \log P(x^t \mid x^{<t}, C_\tau, K), \qquad (16)
\]

where \( I(x^t) \) is an indicator function that equals 1 if \( x^t = \{x^t_i\}_{i=1}^{n} \) is a semantic unit and 0 otherwise.

RefNet introduces a decoding switcher to decide between *reference decoding* and *generation decoding*. To better supervise this process we define the *switcher loss*

\[
L_s(\theta) = -\frac{1}{M} \sum_{\tau=1}^{M} \sum_{t=1}^{L_{X_\tau}} \big[ I(x^t) \log P(r) + (1 - I(x^t)) \log P(g) \big], \qquad (17)
\]

where \( I(x^t) \) is the same indicator function as in \( L_r(\theta) \) and \( P(g) = P(g_p) + P(g_c) \).

The *final loss* is a linear combination of the three loss functions just defined:

\[
L(\theta) = L_g(\theta) + L_r(\theta) + L_s(\theta). \qquad (18)
\]
All parameters of RefNet as well as word embeddings are learned in an end-to-end back-propagation training paradigm.
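A per-sample numpy sketch of the combined objective (Eq. 18) follows; the averaging over the \( M \) training samples is omitted and all names are illustrative:

```python
import numpy as np

def refnet_loss(logp_unit, logp_r, logp_g, is_span):
    """Eqs. 15-18 for one response: logp_unit[t] is log P(x^t | ...) from
    Eq. 4 (spans) or Eq. 5 (tokens); logp_r/logp_g are the switcher log
    probabilities; is_span[t] is the indicator I(x^t)."""
    is_span = np.asarray(is_span, dtype=float)
    L_g = -np.sum((1 - is_span) * logp_unit)                   # Eq. 15
    L_r = -np.sum(is_span * logp_unit)                         # Eq. 16
    L_s = -np.sum(is_span * logp_r + (1 - is_span) * logp_g)   # Eq. 17
    return L_g + L_r + L_s                                     # Eq. 18
```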
### 4 Experimental Setup
#### 4.1 Implementation details
We set the word embedding size and GRU hidden state size to 128 and 256, respectively. The vocabulary size is limited to 25,000. For fair comparison, all models use the same embedding size, hidden state size and vocabulary size. Following Moghe et al. (2018), we limit the context length of all models to 65. We train all models for 30 epochs, test on a validation set after each epoch, and select the best model based on the validation results according to the BLEU metric. We use gradient clipping with a maximum gradient norm of 2. We use the Adam optimizer with a mini-batch size of 32. The learning rate is 0.001. The code is available online.\footnote{https://github.com/ChuanMeng/RefNet}
### 4.2 Dataset
Recently, some datasets for BBCs have been released (Zhou, Prabhumoye, and Black 2018; Dinan et al. 2019). We choose the Holl-E dataset released by Moghe et al. (2018) because it contains boundary annotations of the background information used for each response. We did not use the other datasets because they do not have such annotations for training RefNet. Holl-E is built for movie chats in which each response is explicitly generated by copying and/or modifying sentences from the background. The background consists of plots, comments and reviews about movies collected from different websites. We use the mixed-short background which is truncated to 256 words, because it is more challenging according to Moghe et al. (2018). We follow the original data split for training, validation and test. There are also two versions of the test set: one with single golden reference (SR) and the other with multiple golden references (MR); see (Moghe et al. 2018).
### 4.3 Baselines
We compare with all available methods for this task.
- Extraction-based methods\footnote{For fair comparison, different from Moghe et al. (2018), we do not use pre-trained GloVe (Pennington, Socher, and Manning 2014), such that all models randomly initialize the word embeddings with the same vocabulary size.}: (i) BiDAF extracts a span from the background as response and uses a co-attention architecture to improve the span finding accuracy (Seo et al. 2016). (ii) R-Net proposes gated attention-based recurrent networks and a self-matching attention mechanism to encode the background (Wang et al. 2017). (iii) QANet uses an encoder consisting exclusively of convolution and self-attention to capture local and global interactions in the background (Yu et al. 2018).
- Generation-based methods: (i) S2S maps the context to the response with an encoder-decoder framework (Sutskever, Vinyals, and Le 2014). (ii) HRED encodes the context of the conversation with two hierarchical levels (Serban et al. 2016). S2S and HRED do not use any background information. (iii) S2SA adds an attention mechanism to the original S2S model to attend to the relevant background information (Bahdanau, Cho, and Bengio 2015). (iv) GTTP leverages background information with a copying mechanism to copy a token from the background at the appropriate decoding step (See, Liu, and Manning 2017). (v) CaKe is an improved version of GTTP, which introduces a pre-selection process that uses dynamic bi-directional attention to improve knowledge selection from the background (Zhang, Ren, and de Rijke 2019). (vi) AKGCM first transforms background information into a knowledge graph, and uses a policy network to select knowledge, with an additional GTTP to generate responses (Liu et al. 2019).
### 4.4 Evaluation metrics
Following the work of Moghe et al. (2018), we use BLEU-4, ROUGE-1, ROUGE-2 and ROUGE-L as automatic evaluation metrics. We also report the average length of responses outputted by each model. For extraction-based methods and RefNet, we further report F1 (Seo et al. 2016), which only evaluates the extracted spans not the whole responses. We also randomly sample 500 test samples to conduct human evaluations using Amazon Mechanical Turk. For each sample, we ask 3 workers to annotate whether the response is good in terms of four aspects: (1) Naturalness (N), i.e., whether the responses are conversational, natural and fluent; (2) Informativeness (I), i.e., whether the responses use some background information; (3) Appropriateness (A), i.e., whether the responses are appropriate/relevant to the given context; and (4) Humanness (H), i.e., whether the responses look like they are written by a human.
### 5 Results
#### 5.1 Automatic evaluation
We list the results of all methods on the mixed-short background setting in Table 1.
First, RefNet significantly outperforms all generation-based methods on all metrics, except for the BLEU score compared to AKGCM. In particular, RefNet significantly outperforms the recent and strong baseline CaKe by around 2%-4%. The improvements show that RefNet is much better at leveraging and locating the right background information than these generation-based methods. We believe RefNet benefits from reference decoding, which tends to produce complete semantic units, alleviating the inherent problems that pure generation-based methods face.
Second, RefNet outperforms extraction-based methods in most cases, including the strong baseline QANet. We think the reason is that extraction-based methods can only rigidly extract the relevant spans from the background, which ignores the conversational characteristics of responses. In contrast, RefNet also benefits from generation decoding to produce natural conversational words in responses, compensating for the shortcomings of pure extraction. RefNet is comparable in average length with extraction-based methods, which demonstrates that RefNet retains the advantages of extraction-based methods.
Third, the performance of the three extraction-based methods is comparable here, even though their performance differs greatly on the RC dataset SQuAD (Rajpurkar et al. 2016), e.g., QANet outperforms BiDAF by around 7% in F1 score there. Even with a stronger extraction-based model we would arrive at a similar conclusion: they cannot generate natural and fluent responses due to their extractive nature. This confirms that extraction-based methods are not suitable for this task. Besides, we could further enhance the reference decoding of RefNet by incorporating the various mechanisms used by extraction-based models, but that is beyond the scope of this paper.
Table 1: Automatic evaluation results. **Bold face** indicates leading results. Significant improvements over the best baseline results are marked with * (t-test, \( p < 0.05 \)). SR and MR refer to test sets with single and multiple references. The results of AKGCM are taken from the paper because the authors have not released their code and processed knowledge graph. Note that AKGCM uses GloVe and BERT (Devlin et al. 2019) to improve performance but none of other models do.
<table>
<thead>
<tr>
<th rowspan="2">Methods</th>
<th colspan="2">F1</th>
<th colspan="2">BLEU</th>
<th>ROUGE-1</th>
</tr>
<tr>
<th>SR</th><th>MR</th><th>SR</th><th>MR</th><th>SR</th>
</tr>
</thead>
<tbody>
<tr><td colspan="6"><em>no background</em></td></tr>
<tr><td>S2S</td><td>-</td><td>-</td><td>5.26</td><td>7.11</td><td>27.15</td></tr>
<tr><td>HRED</td><td>-</td><td>-</td><td>5.23</td><td>5.38</td><td>24.55</td></tr>
<tr><td colspan="6"><em>mixed-short background (256 words)</em></td></tr>
<tr><td>BiDAF</td><td>40.38</td><td>45.86</td><td>27.44</td><td>33.40</td><td>38.79</td></tr>
<tr><td>R-Net</td><td>40.92</td><td>46.84</td><td>27.54</td><td>33.18</td><td>39.78</td></tr>
<tr><td>QANet</td><td>41.65</td><td>47.32</td><td>28.21</td><td>33.91</td><td>40.66</td></tr>
<tr><td>S2SA</td><td>-</td><td>-</td><td>11.71</td><td>12.76</td><td>26.36</td></tr>
<tr><td>GTTP</td><td>-</td><td>-</td><td>13.65</td><td>19.49</td><td>30.77</td></tr>
<tr><td>CaKe</td><td>-</td><td>-</td><td>26.03</td><td>29.18</td><td>40.21</td></tr>
<tr><td>AKGCM</td><td>-</td><td>-</td><td>30.84</td><td>-</td><td>-</td></tr>
<tr><td>RefNet</td><td><strong>41.86</strong></td><td><strong>48.46</strong></td><td><strong>30.33</strong></td><td><strong>33.97</strong></td><td><strong>42.11</strong></td></tr>
</tbody>
</table>
Table 2: Human evaluation results on the mixed-short background version. \( n \) means that at least \( n \) MTurk workers think it is a good response w.r.t. **Naturalness (N)**, **Informativeness (I)**, **Appropriateness (A)** and **Humanness (H)**.
<table>
<thead>
<tr>
<th></th>
<th>CaKe</th>
<th>QANet</th>
<th>RefNet</th>
</tr>
</thead>
<tbody>
<tr>
<td>≥ 1</td>
<td>449</td>
<td>264</td>
<td>288</td>
</tr>
<tr>
<td>≥ 2</td>
<td>359</td>
<td>115</td>
<td>414</td>
</tr>
<tr>
<td>≥ 2</td>
<td>390</td>
<td>153</td>
<td>406</td>
</tr>
<tr>
<td>≥ 2</td>
<td>438</td>
<td>231</td>
<td>355</td>
</tr>
</tbody>
</table>
5.2 Human evaluation
We also conduct a human evaluation for RefNet, CaKe (the best generation-based baseline), and QANet (the best extraction-based baseline). The results are shown in Table 2. Generally, RefNet achieves the best performance in terms of all metrics. In particular, we find that RefNet is even better than CaKe in terms of **Naturalness** and **Humanness**. We believe this is because RefNet strikes a good trade-off between reference decoding and generation decoding, where the generated conversational words and the selected semantic units are synthesized in a natural and appropriate way. RefNet is also much better than CaKe in terms of **Informativeness** and **Appropriateness**, which shows that RefNet is better at locating the appropriate semantic units. The reason is that, with the ability to generate a full semantic unit at once, RefNet has a global perspective to locate the appropriate semantic units, reducing the risk of breaking a complete semantic unit. QANet achieves better scores on **Informativeness** and **Appropriateness** than CaKe, but gets the worst scores on **Naturalness** and **Humanness**. Although QANet is relatively good at locating the relevant semantic unit, its responses lack contextual explanations, which makes them hard for workers to understand. This further shows that merely extracting a span from the background is far from enough for BBCs, even if QANet were replaced with a stronger extraction-based model.
6 Analysis
6.1 Reference vs. generation decoding
To analyze the effectiveness of reference and generation decoding, we compare the results of RefNet with only reference decoding (**force reference**) and with only generation decoding (**force generation**) in Table 3. Note that **force generation** is better than GTTP³ because of two differences. First, we use a matching layer to obtain the context-aware background representation in Eq. 2, while GTTP only uses basic background representations without such a matching operation. Second, we use the hidden states of the background and context to jointly initialize the decoding states in Eq. 2.
³ We use the code released by Moghe et al. (2018): https://github.com/nikitacs16/Holl-E
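As an illustration of these two differences, the following is a minimal PyTorch sketch (our own, not the authors' released code; all shapes and layer choices are assumptions): a matching layer that attends from each background token to the context, and a joint initialization of the decoder state from the final background and context states.

```python
import torch
import torch.nn as nn

class MatchingInit(nn.Module):
    """Context-aware background matching and joint decoder-state init (a sketch)."""

    def __init__(self, hidden: int):
        super().__init__()
        self.fuse = nn.Linear(2 * hidden, hidden)       # fuses bg with attended ctx
        self.init_proj = nn.Linear(2 * hidden, hidden)  # joint s_0 projection

    def forward(self, bg: torch.Tensor, ctx: torch.Tensor):
        # bg:  (B, Lb, H) background token states; ctx: (B, Lc, H) context states.
        scores = torch.einsum("bih,bjh->bij", bg, ctx)   # (B, Lb, Lc) match scores
        ctx_aware = scores.softmax(dim=-1) @ ctx         # attend ctx for each bg token
        bg_matched = torch.tanh(self.fuse(torch.cat([bg, ctx_aware], dim=-1)))
        # Jointly initialize the decoder from the final bg and ctx states.
        s0 = torch.tanh(self.init_proj(torch.cat([bg_matched[:, -1], ctx[:, -1]], dim=-1)))
        return bg_matched, s0

# Smoke test with random tensors.
m = MatchingInit(hidden=8)
bg_matched, s0 = m(torch.randn(2, 10, 8), torch.randn(2, 5, 8))
print(bg_matched.shape, s0.shape)  # torch.Size([2, 10, 8]) torch.Size([2, 8])
```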
6.2 Switcher loss
To verify the effectiveness of the switcher loss $L_s(\theta)$ in Eq. 17, we compare RefNet trained with and without the switcher loss, as shown in Table 5. We find that the overall performance improves in terms of all metrics with the switcher loss, especially F1. This means that the switcher loss is an effective component: through an additional supervision signal, it better guides the model to choose between reference decoding and generation decoding at the right time and in the right place in a conversation. The clear increase in F1 further shows that citing a semantic unit at the right time can yield higher accuracy.
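The following is a hedged sketch of how such a switcher with an auxiliary loss could look (our illustration, not the paper's implementation; the gate shape, the construction of the gold labels from boundary annotations, and the masking are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Switcher(nn.Module):
    """Predicts p(reference) vs. p(generation) from the decoder state."""

    def __init__(self, hidden: int):
        super().__init__()
        self.gate = nn.Linear(hidden, 1)

    def forward(self, dec_states: torch.Tensor) -> torch.Tensor:
        # dec_states: (B, T, H) -> per-step switch probabilities (B, T).
        return torch.sigmoid(self.gate(dec_states)).squeeze(-1)

def switcher_loss(p_ref, gold_ref, mask):
    # gold_ref[b, t] = 1 where the gold response cites a background semantic
    # unit at step t (derived from boundary annotations), else 0.
    bce = F.binary_cross_entropy(p_ref, gold_ref.float(), reduction="none")
    return (bce * mask).sum() / mask.sum()

# Smoke test.
sw = Switcher(hidden=8)
p = sw(torch.randn(2, 6, 8))
loss = switcher_loss(p, torch.randint(0, 2, (2, 6)), torch.ones(2, 6))
print(loss.item())
```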
6.3 Case study
We select some examples from the test set to illustrate the performance of RefNet, CaKe, and QANet, as shown in Table 4. One can see that RefNet can select the right semantic unit from the background or generate fluent tokens at the appropriate time and position, resulting in more informative and appropriate responses. For instance, in Example 1, RefNet identifies the right semantic unit "$279,167,575" within the background, which is preceded by "it made" and followed by "at the box office" to form a more natural and conversational response. The second example indicates that RefNet can accurately locate longer semantic units. In contrast, the responses of QANet lack naturalness, and the responses of CaKe are relatively inconsistent and irrelevant. In the first example, CaKe breaks the complete semantic unit "if you like ben stiller" and retains only the fragment "if you like". There are also some cases where RefNet does not perform well. For example, we find that RefNet occasionally selects short or meaningless semantic units, such as "i" and "it." This indicates that we could further improve reference decoding by taking more factors (e.g., the length of semantic units) into consideration.
7 Conclusion and Future Work
In this paper, we propose RefNet for Background Based Conversations (BBCs). RefNet incorporates a novel reference decoding module to generate more informative responses while retaining their naturalness and fluency. Experiments show that RefNet outperforms state-of-the-art methods by a large margin in terms of both automatic and human evaluations.
A limitation of RefNet is that it needs boundary annotations of semantic units to enable supervised training. In future work, we hope to design a weakly supervised or unsupervised training scheme for RefNet in order to apply it to other datasets and tasks. In addition, we will consider more factors (e.g., the length or frequency of semantic units) to further improve the reference decoding module of RefNet.
Acknowledgments
We thank the anonymous reviewers for their helpful comments. This work is supported by the Natural Science Foundation of China (61972234, 61902219, 61672324, 61672322), the Natural Science Foundation of Shandong province (2016ZRE27468), the Tencent AI Lab Rhino-Bird Focused Research Program (JR201932), the Fundamental Research Funds of Shandong University, Ahold Delhaize, the Association of Universities in the Netherlands (VSNU), and the Innovation Center for Artificial Intelligence (ICAI).
Advancing Project Management Methodologies: An In-Depth Analysis of Jira in Managerial and Developmental Contexts
Ohoud AlHarbi¹, Reem AlMalki¹, Nouf AlYousef¹
¹ King Saud University, Saudi Arabia
ARTICLE INFO
Keywords:
Jira, Agile, Project Management Tool, Usability Test
Received: Aug, 26, 2023
Accepted: Sep, 19, 2023
Published: Dec, 22, 2023
ABSTRACT
A study was conducted to examine the satisfaction levels of project teams with the Jira mobile application, a leading project management tool, in Saudi Arabian companies. Through usability tests and surveys, the research addresses three key questions related to the satisfaction of project managers and developers with the Jira mobile application and how to improve their experiences. While most project managers found Jira to be an efficient and easy-to-use tool, some improvements were suggested, including the ability to edit, delete, and clone projects, as well as resource management capabilities. Similarly, developers reported that Jira has significantly improved task tracking and status monitoring, while also suggesting improved mobile functionality. Usability testing and surveys highlighted specific issues with Jira's mobile application and provided recommendations for enhancement. The study aims to empower project teams with effective management capabilities through Jira.
1. INTRODUCTION
Before the increased reliance on software tools, project management processes were conducted entirely using traditional methods, such as paper and analog tools, to plan, execute, and monitor projects (E.D.C. Carvalho, 2020). In recent years, with numerous enhancements in technology and significant evolution in the software industry, some organizations build, develop, and manage projects with completely geographically distributed teams that rely on technology for communication. This can add complexity to project management, and relying solely on manual processes is insufficient to handle the complications involved (S. Morrison-Smith and J. Ruiz, 2020). Most businesses are now moving toward Agile methodology in developing their software projects. Agile is described by Conboy and Fitzgerald as "The continual readiness of an entity to rapidly or inherently, proactively or reactively, embrace change, through high quality, simplistic, economical components and relationships with its environment" (S. Chopra and M. Chaudhary, 2022). Given the rapidly changing requirements in software projects, the Agile methodology accommodates these changes through its flexibility; it also requires active collaboration between the team and the clients to understand the domain, identify needs, and prioritize them. Therefore, organizations are increasingly using software tools to control project management processes (P. Marnada, et al., 2021). However, the assessment of a project's success or failure is subject to varying opinions. It is typically evaluated on three key points: time, budget, and product quality (M.D. Kadenic, et al., 2021).
Significant project delays and working beyond the scheduled project timeline are common and often unavoidable for many reasons, such as rapidly changing project requirements or design, unrealistic timelines or estimates, and a lack of resources (F. Hayat, et al., 2019). In addition to increasing project costs, reducing team productivity, and increasing defects in the developed software, these delays reduce shareholders' satisfaction and delay general market demand and benefits (R. Pellerin, et al., 2013). Because of their benefits, software management tools are highly recommended for software projects; they improve project date estimation and resource utilization, which ultimately increases the success rate of software projects. Various tools are now available on the market with different features, usability, and pricing plans. These tools are designed to assist organizations and teams in effectively managing various aspects of project management, and organizations are responsible for the evaluation and selection process based on their needs and preferences.
Accordingly, recognizing the significance of digital technology and its impact on society and the economy, the government of the Kingdom of Saudi Arabia began a plan, as a crucial part of Saudi Vision 2030, for completely digitalizing traditional processes and utilizing the most recent software technologies (M.E. Bogopa and C. Marnewick, 2022). To achieve the goal of completing the National Transformation Program (NTP) by 2030, projects, especially software projects, must be effectively managed and delivered on time. However, there has not been much research on project management techniques and software tool reviews targeted at Saudi Arabian companies; this study fills that gap in the literature and focuses on project teams' satisfaction with software project management tools. Organizations often have difficulty identifying and adopting appropriate tools that meet their specific needs and project objectives, which motivates investigating a widely adopted tool in terms of its main features and limitations in a usability context. Jira is currently used extensively in industry to manage projects, tasks, and workflows (H. Yogaantara and A.N. Fajar, 2020), which makes it a suitable subject for investigating its strengths and weaknesses, as well as the challenges facing project managers and developers when implementing and using it. Ultimately, this research aims to contribute to the improvement of project management practices and to enhance organizations' efficiency and effectiveness in project management execution.
The objective of this study is to conduct usability testing to qualitatively assess the satisfaction levels of six project managers and six developers working at a Saudi Arabian IT company with Jira Software's management tool for tracking project and task progress, respectively. This study aims to identify areas for improvement and assess the overall satisfaction levels with Jira Software's management tool. It captures feedback, suggestions, and recommendations from both groups of users.
This research is targeted mainly towards addressing the following three questions:
- How satisfied are project managers with the project management tool that they are currently using?
- How satisfied are developers with the project management tool that they are currently using?
- How to create a design that improves both project managers' and developers' experience?
Following is the structure of the remaining sections. The second section provides a literature review consisting of an overview of software project management tools. It also provides a brief introduction to agile methods and management. The third section presents the research methodology used in the study. The fourth section contains usability test results and survey responses. The authors discuss the presented results to address the research questions in the fifth section of the paper. The sixth section outlines the limitations and future work. The seventh section concludes this paper.
2. LITERATURE REVIEW
This section provides an overview of Agile methodology in relation to project management, discusses the advantages and disadvantages of using software tools in a project, and compares the available software tools.
2.1 Overview of Agile methodology
The purpose of this subsection is to describe what Agile methodology is, what the main methods are, and how Agile methodology is applied to the management process of a project.
2.1.1 Definitions
Agile project management and software development have been widely discussed and applied; therefore, several authors and experts in the field have provided detailed definitions and clarifications (A. Behrens, et al., 2021). Ciric and Lalic, in their article (H. Rahman, et al., 2018), outlined Agile as a set of management practices founded on iterative cycles and incremental development, focused on delivering products in small, incremental steps that can be completed in a short amount of time. These practices are founded on the collaborative effort of self-organized teams to evolve requirements and prioritize solutions based upon stakeholder collaboration, communication, and feedback. Regarding the Agile methodology, (M. Hamid, et al., 2019) explains that software development is continuously improved and iterated based on the actual experience of using the software. Every iteration includes phases for design, implementation, and testing, as well as an analysis of the requirements. (M. Kuutila, et al., 2020) emphasize the flexible nature of Agile software development and its ability to accommodate changing requirements during the development cycle. Agile comprises a set of principles: it focuses on people and interactions rather than processes, and it prefers working software over extensive documentation. (J.R. da Costa Filho, et al., 2022) concluded that Agile attempts to minimize the risks inherent in traditional waterfall processes, along with the extremely high risks associated with requirements changes and final integration. However, the transition to an Agile environment may require updated tools and technical environments, as well as changes to organizational guidelines and policies.
In summary, the Agile approach, as defined by multiple authors (D. Ghimire and S. Charters, 2022), encompasses a variety of management practices prioritizing collaboration, flexibility, and iterative development.
2.1.2 Methods
Considering that every Agile method has its own limitations, no single method can be used for all projects, particularly those that are complex, large, and extremely important, as mentioned by (Y. Chouhan, et al., 2022). The following is a brief description of the main methods used in Agile software development:
- Rational Unified Process is an incremental process in which new releases are delivered over time. It consists of four main phases: inception, elaboration, construction, and transition.
- Scrum utilizes short iterations, known as sprints, to deliver working software incrementally. It focuses on self-organizing teams, close collaboration between stakeholders, and continuous improvement; as an iterative and incremental model, it is suitable for both small and large projects (M.N. Mahdi et al., 2021).
- Extreme Programming emphasizes customer satisfaction, frequent releases, continuous testing, pair programming, and collective code ownership.
- Kanban has become increasingly popular in software development; team members can visualize their work and optimize flow efficiency through visual management methods (A. Puška, et al., 2020).
- Lean Software Development aims to reduce waste, optimize flow, and deliver value as quickly as possible.
- Feature-Driven Development, one of the popular Agile methods, follows eight best practices, such as domain object modeling and development by feature.
- Adaptive Software Development creates value by rapidly adjusting to internal and external events rather than using process optimization techniques; in general, the size and degree of uncertainty of the project determine the length of an iteration.
2.1.3 Agile project management
Agile project management (APM) is the process of managing projects within an Agile framework. As stated by (M. Mahmud, 2020), APM focuses on delivering small chunks of work quickly and iteratively, rather than all at once. It also emphasizes communication and collaboration and encourages teams to be flexible and responsive to customer needs. APM can help teams deliver high-quality products and projects on time (M. Talukder, 2020). The iterative and flexible nature of Agile methodologies allows the traditional phases of project management to be adapted to fit Agile.
2.2 Project management software tools
Using software management tools when adopting Agile methodologies can impact project management decisions. A wide variety of such tools are available, including Jira, Trello, and VersionOne. These tools help project managers improve communication and resource allocation, plan their work effectively, and achieve desired results (E. Ismagilova, et al., 2019).
2.2.1 Advantages and disadvantages of using project management tools.
Özkan and Mishra (A.-D. Salaou, et al., 2021) emphasize in their research that software tools enhance efficiency by automating repetitive tasks and simplifying workflows, leading to improved schedule management and progress tracking. Additionally, these tools promote collaboration through features such as shared project dashboards, continuous updates, and document sharing, enabling seamless teamwork regardless of geographical location. Software tools also provide a centralized repository in which project-related information, such as plans, requirements, documents, and communication records, can be stored. Similarly, (A. Mishra and Y. I. Alzoubi, 2023) noticed that, as a result of this centralization, information can be accessed and retrieved more easily, allowing everyone to stay up to date (A. Faudot, et al., 2022). Likewise, Chouhan argues that software tools are beneficial because they assist in resource allocation, tracking, and optimization: by assigning tasks, monitoring resource utilization, and identifying potential conflicts, project managers can manage projects efficiently. The tools can also increase throughput and eliminate bottlenecks by closely monitoring and managing progress while maintaining control over the project scope. On the other hand, the same author argues that using software tools for project management can pose challenges; for example, some tools have a steep learning curve, particularly for team members with no prior experience with the tool (E. Ismagilova, et al., 2019). This implies that it is critical to ensure that everyone can use a software tool effectively, so training and onboarding may be required. Furthermore, Chouhan points out that high-quality project management software tools often cost money, and it may be difficult and time-consuming to implement and customize these tools to meet the specific needs of a given project (D. Özkan and A. Mishra, et al., 2021). It can also be challenging to integrate project management software tools into existing workflows and systems; additional effort may be needed to ensure seamless data interchange due to compatibility issues with other software. Moreover, maintaining and providing technical support for software tools is a continuous process, and support quality and responsiveness vary from tool to tool; insufficient support can prevent a tool from being used effectively.
2.2.2 Tools comparison
According to (M.I. Lunesu, et al., 2020), Agile tools have key characteristics such as online accessibility, cost, and task boards for defining, monitoring, and reporting tasks. The most suitable tool can be selected based on factors such as task scheduling, resource management, time tracking, estimating, risk assessment, process management, and portfolio management. As part of the selection process, it is necessary to evaluate the features of different tools and choose the optimal combination of features for maximum utility across projects, taking into consideration factors such as cost and feature availability (D. Ghimire and S. Charters, 2022). The most common features include generating reports and dashboards, facilitating collaboration, managing requirements, budgets, or resources, and tracking time. Figure 1 below clearly illustrates a significant observation regarding the Jira management tool: it successfully meets the majority of the criteria outlined and has been demonstrated to provide a reliable solution for managing various aspects of projects and teams. Therefore, the authors conduct this research using this tool.
2.2.3 Overview of Jira project management software tool
Jira is a project management software tool that is widely used by professionals and institutional designers working in collaborative environments. It is known for its ability to track progress, track issues, manage tasks, and manage project backlogs. Jira's ticketing system was created by Atlassian Corporation in 2002 for monitoring and tracking bugs. Later, with Jira's advanced features, it came to be widely used to manage IT projects. Jira has a significant impact on how project team members conceptualize Agile projects: in many instances, the approach and understanding of the team is influenced by the data structures managed within the tool (A. Gupta, G. Poels, and P. Bera, 2022).
In addition to providing an effective work environment, Jira assists both co-located and distributed teams in anticipating, identifying, and resolving potential deadline issues. All members of the team have access to project milestones, updates, and reminders in one central location. A fundamental role of Jira is to manage the backlogs of the Development, Architecture, and QA teams, each with different workflows. Jira can be used to create Kanban and Scrum boards, schedule sprints, estimate completion times for work items, and easily create burndown charts and cumulative flow diagrams. In addition, fields and screens can be customized to ensure accurate tracking and recording of work items. The tool is used globally by large communities.
3. METHODOLOGY
The purpose of this study is to gather in-depth insights into participants' interactions with Jira's mobile application through a qualitative research design, specifically usability tests, which allow the authors to observe users as they interact with the targeted application and uncover any difficulties they encounter. The study involved 12 participants, selected based on criteria outlined by the authors. Each has an IT background and at least one year of experience working with Agile as a developer or project manager. All of them work in one organization; however, each participant has different previous experience from different organizations. They were chosen randomly from a variety of projects within the organization to ensure fairness and minimize bias. The test was conducted using the Lookback tool, which combines direct observation, screen recording, and participant feedback as methods of data collection. The collected data were analyzed to identify common themes, usability issues, and patterns of interaction based on the participants' interactions and their verbal and nonverbal cues during the test. An informed consent form was obtained from each participant using the Jotform tool prior to the test. The authors conducted the testing in a controlled lab equipped with devices running the Jira application. Participants were assured that their data would remain confidential, and they were encouraged to express their thoughts and suggestions while using the application. A series of predefined tasks was provided to participants during the testing process. To gather more information about the participants' experience, a survey was administered following completion of the test. Each participant was required to complete a questionnaire using Google Forms. The questionnaire is divided into three parts. The first section includes questions about personal job information, such as the number of years of experience with the software. The second section contains background questions about tools used previously. In the third section, participants answer scaled questions rated on one of two scales (Strongly disagree, Disagree, Neutral, Agree, Strongly agree; or Very Easy, Easy, Moderate, Difficult, Very Difficult) regarding the Jira software management tool, such as how user-friendly it is, the usability of the application, and whether the participant plans to use Jira in future projects. In addition, an open question is included to collect any desired enhancements or features for Jira. The first two sections were answered prior to the test, while the last section was answered after its completion. During the experiment, participants were divided into two groups, with six participants representing each role. Three tasks were assigned to project managers. The first task (T1) is to create a project; it begins when the "Create Project" button in the project tab is clicked. The project form must be completed with the necessary information, including the project name (in this case, "Volunteer system"), the project template (in this case, "Scrum"), and the project key (for example, "VS"). Once the form is complete, participants were asked to click the "Create" button to finalize the creation of the project. The second task (T2) is to create multiple user stories. The first two user stories should be assigned to developer AB and the third user story to developer CD. The user stories are as follows:
- User story 1: As an admin, I want to create a volunteer opportunity.
- User story 2: As an admin, I want to edit a volunteer opportunity name.
- User story 3: As an admin, I want to edit a volunteer opportunity timeline.
- User story 4: As an admin, I want to delete a volunteer opportunity.
Lastly, as their final task (T3), project managers were required to move all created user stories from the project backlog to the sprint backlog. Further, the developer with the least number of tasks should be assigned the last user story added to sprint 1, "User story 4". As for the second group, comprised of developers, the first task (T1) is based on the user story assigned to them in an existing project, "Volunteer system", which states: "As an admin, I want to create a volunteering opportunity." Two subtasks should be created and assigned to them:
- Create a table for opportunities in the database.
- Create a user interface for the opportunity creation form.
Next, in the second task (T2), the developers should change the status of the subtasks they created in the backlog from "To Do" to "In Progress", as well as the status of the user story itself from "To Do" to "In Progress". Finally, for the last task (T3), they were requested to search for the bug assigned to a specific user story and change its status to "Done" (a programmatic equivalent of some of these steps is sketched below).
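As a hedged illustration only: the user stories that participants created manually in the mobile UI could equally be created through Jira's REST API. The endpoint below is a real Jira Server API v2 route; the server URL, project key, and credentials are placeholders for this sketch.

```python
import requests

JIRA = "https://jira.example.com"      # hypothetical server
AUTH = ("manager", "password")         # hypothetical credentials

stories = [
    "As an admin, I want to create a volunteer opportunity.",
    "As an admin, I want to edit a volunteer opportunity name.",
    "As an admin, I want to edit a volunteer opportunity timeline.",
    "As an admin, I want to delete a volunteer opportunity.",
]

for summary in stories:
    # Create each user story as a Story issue in the "Volunteer system" project.
    resp = requests.post(
        f"{JIRA}/rest/api/2/issue",
        json={"fields": {
            "project": {"key": "VS"},
            "summary": summary,
            "issuetype": {"name": "Story"},
        }},
        auth=AUTH, timeout=30,
    )
    print(resp.status_code, resp.json().get("key"))
```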
4. RESULTS
To provide more information about the participants' backgrounds and the other tools they used previously: as shown in Figure 2, all the selected project manager participants currently use Jira in their projects, with varying years of experience, while 83% of the selected developer participants currently use Jira, also with varying years of experience; only one is new to the application.
As shown in Figure 4, 67% of the project manager participants used the Team Foundation Server (TFS) project management tool to manage their projects, while others used Trello and Excel sheets. Similarly, 83% of the developer participants used TFS, and others used Excel sheets. In Figures 5 and 6, 84% of project managers agreed that the tools they used previously were easy to learn and use, except Excel sheets, which 16% reported to be extremely difficult to use for project management. The developers had differing views regarding the ease of use and learning of the TFS tool they used previously.
In Figures 7 and 8, all project managers were asked to rate the project creation process in the tools they used previously: 67% of them found the process of creating a new project not easy, and it took them around 5 to 10 minutes or more to add only four user stories. Developers were likewise asked to rate task-related processes in the tools they used previously; 66% of the participants agreed that their previous tools were not easy to use for tracking tasks and their statuses, and that it took them around 1 to 10 minutes or more to track bugs assigned to a user story for which they were responsible.
After finalizing the test, participants were asked to answer the last section of the survey about their opinions of the Jira application. Figure 9 shows that 100% of the project managers and developers agreed or strongly agreed that the Jira mobile application is user-friendly. With respect to ease of learning and use, 100% of project managers found the Jira mobile application easy to learn and use, while only 84% of developers agreed.
Regarding project creation with Jira, Figure 11 shows that 84% of the project managers agreed that it is easy to access and create projects in the Jira application, with a maximum of 9 minutes to create four user stories. Regarding access and retrieval of tasks, only 17% of the developers found this difficult in the Jira application, with a maximum of 9 minutes to track the bugs assigned to a specific user story.
When asked whether they were willing to use the Jira mobile application for their project management, Figure 13 shows that 83% of both project managers and developers agreed, while only 17% disagreed or remained neutral. For the open-answer questions, the authors intended to collect the main strengths and weaknesses of the software tools based on project managers' and developers' opinions from real workplace experience prior to the test. Most project managers commented that the ability to manage tasks and resources directly through the tools is one of their strengths, while reported weaknesses included the absence of a mobile application version of the tool, the lack of reporting dashboards, and the need for complicated configuration. In contrast, developers commented on the ability to integrate with the integrated development environment (IDE), which they used to link code sections with the associated user stories, while reported weaknesses included the lack of a change history and the difficulty of learning the tool for first-time users. After finalizing the usability test, participants were again asked open-answer questions to gather as much feedback and as many suggestions and enhancements as possible from their experience with Jira. The project managers noted missing features such as creating a project from the information of an existing one, editing project information, and resource management activities, while the developers noted missing features such as handling tasks or user stories when the assignee changes, more extensive filters, and direct code-linking configuration. For the usability testing results, Table 1 provides an overview of the time duration of each task performed by the participants and the average time per task. During the usability test, participants also provided valuable suggestions for improving the project management system, including the ability to clone and delete a project, the introduction of a resource management tab, and delegation options for employees. These suggestions would enhance the application's functionality and user experience.
Table 1. Usability test summary for project managers.
Table 2 provides an overview of the time duration of each task performed by the participants and the average time per task. During the usability test, the authors also gathered valuable suggestions from participants for improving the application. In terms of user stories, participants recommended that the system automatically update a user story's status when the status of one of its sub-tasks changes. Moreover, participants expressed a desire to see their assigned user stories displayed by default in the backlog. Furthermore, they suggested adding priority fields to indicate the severity of bugs, to facilitate better prioritization.

### Figure 3: Participants' years of experience using Jira
<table>
<thead>
<tr>
<th>Experience Level</th>
<th>Project Manager</th>
<th>Developer</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 or less</td>
<td>50%</td>
<td>50%</td>
</tr>
<tr>
<td>2</td>
<td>16%</td>
<td>17%</td>
</tr>
<tr>
<td>3</td>
<td>17%</td>
<td>17%</td>
</tr>
<tr>
<td>4 or more</td>
<td>33%</td>
<td>17%</td>
</tr>
</tbody>
</table>
### Figure 4: Previous tools participants used
<table>
<thead>
<tr>
<th>Tool</th>
<th>Project Manager</th>
<th>Developer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Team Foundation Server (TFS)</td>
<td>67%</td>
<td>16%</td>
</tr>
<tr>
<td>Trello</td>
<td>17%</td>
<td>17%</td>
</tr>
<tr>
<td>Asana</td>
<td>17%</td>
<td>17%</td>
</tr>
<tr>
<td>Excel sheet</td>
<td>17%</td>
<td>17%</td>
</tr>
</tbody>
</table>
### Figure 5: User-friendliness ratings of previously used tools
<table>
<thead>
<tr>
<th>Rating</th>
<th>Project Manager</th>
<th>Developer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Strongly disagree</td>
<td>16%</td>
<td>16%</td>
</tr>
<tr>
<td>Disagree</td>
<td>17%</td>
<td>17%</td>
</tr>
<tr>
<td>Neutral</td>
<td>17%</td>
<td>17%</td>
</tr>
<tr>
<td>Agree</td>
<td>33%</td>
<td>33%</td>
</tr>
<tr>
<td>Strongly agree</td>
<td>67%</td>
<td>17%</td>
</tr>
</tbody>
</table>
### Figure 6: Ease-of-use and learning ratings of previously used tools
<table>
<thead>
<tr>
<th>Difficulty</th>
<th>Project Manager</th>
<th>Developer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Very Easy</td>
<td>33%</td>
<td>17%</td>
</tr>
<tr>
<td>Easy</td>
<td>50%</td>
<td>17%</td>
</tr>
<tr>
<td>Moderate</td>
<td>50%</td>
<td>17%</td>
</tr>
<tr>
<td>Difficult</td>
<td>33%</td>
<td>17%</td>
</tr>
<tr>
<td>Very Difficult</td>
<td>33%</td>
<td>17%</td>
</tr>
</tbody>
</table>
Figure 7. Ratings of previously used tools for project creation or task management, by role.
Figure 8. Average time for project creation or task management with previously used tools, by role.
Figure 9. User-friendliness ratings of the Jira application.
Figure 10. Ease-of-use and learning ratings of the Jira application.
5. DISCUSSION
Now that all the collected results have been presented, this section provides a comprehensive analysis with the primary objective of addressing the research questions at hand. It is intended to provide conclusive answers and insights by carefully examining the data gathered throughout the testing process.
5.1 Project managers’ satisfaction with current tools
The study seeks to understand how effective and efficient the tool is in facilitating the project creation process. The findings reveal that all the selected participants currently use Jira in their projects, indicating its popularity among project managers with varying levels of experience, as previously reported in the Özkan study (R. Imran and T. R. Soomro, 2022). However, when it comes to project creation with software tools, 67% of project managers said that creating a new project with TFS, Trello, or Excel sheets was not easy and took them approximately 5 to 10 minutes or more to add only four user stories. With Jira, a significant majority (84%) of the participants agreed that it was easy to access and create projects, and the average time taken to create four user stories was reported to be a maximum of 9 minutes. This time was also measured during the usability testing the authors conducted with the same four user stories and was found to be only about 3 minutes. This suggests that project managers can create user stories significantly faster and more efficiently with Jira than with other tools. However, large projects can contain hundreds of user stories, so Jira must be enhanced to accommodate them. Based on the outcomes obtained, 100% of the participants agreed or strongly agreed that the Jira mobile application is user-friendly, and participants generally expressed satisfaction with Jira Software's management tool for project creation, especially compared with TFS, which was used by 67% of project managers prior to Jira and which, as reported in the survey, does not have a mobile version. In addition, 83% of the participants agreed that they are willing to use the Jira mobile application for their project management; only 17% disagreed, which they reported was because of its learning curve, one of the challenges facing project management tools as mentioned by (D. Ciric Lalic, et al., 2022).
5.2 Developers’ satisfaction with current tools
The goal is to evaluate how satisfied developers are with the tool's functionality for monitoring and tracking their tasks, including any bugs that arise. The findings indicate that Jira is currently used by 83% of the chosen participants in their projects, regardless of their varying years of experience. Furthermore, 66% of the participants agreed that they had difficulty with previously used tools in terms of task tracking and status monitoring; on average, it took them between 1 and 10 minutes, or even longer, to track bugs assigned to a user story for which they were responsible. When it came to accessing and retrieving tasks with Jira, 83% of the participants noted its ease of use, with a maximum of 9 minutes required to track the bugs assigned to specific user stories. The activities of creating tasks, tracking bugs, and updating statuses took an average of approximately 5 minutes in the usability testing sessions, indicating a significant improvement with Jira.
The outcomes also revealed that 100% of the participants either agreed or strongly agreed that the Jira mobile application is user-friendly. Additionally, 84% of the participants reported that it is easy to learn and use.
5.3 Improvements to enhance project management experience
Based on the survey and the suggestions made during the testing sessions, Jira must offer some currently missing features to achieve a higher satisfaction rate. Firstly, there is a need to take advantage of the task automation capabilities of software project management suggested by Özkan: an efficient, automatic way to manage task assignees in situations such as employee vacation or retirement, which currently requires manual updates to individual user stories, tasks, and bugs. In addition, to ensure the same experience in the mobile version, it should maintain a user interface compatible with the web version, as compatibility is an important feature (M. Younas, D.N.A. Jawawi, et al., 2022); for example, displaying all user stories is not recommended for developers using the mobile version, as it takes time to search through them, and it is preferable to filter the view to show only the user stories assigned to them, as in the web version (an illustrative filter sketch follows this paragraph). Secondly, to enable efficient project management, the project manager should be able to edit, delete, or clone projects, saving time by avoiding the need to create new projects from scratch. Cloning enables the creation of a new project based on an existing one within the application; if implemented, this would be unique to Jira among the tools considered. Thirdly, a quick and simple method for creating user stories should be implemented, enabling the project team to generate user stories efficiently and ensuring clear communication and understanding of project requirements, especially given the rapid changes in Agile projects. Finally, to manage project resources, a dedicated resource management tab should be available to the project manager, as resource management is one of the main advantages of software tools. This tab would allow the addition or removal of project members as required and should include a comprehensive list of employees assigned to the project, along with their respective tasks, facilitating better coordination and allocation of responsibilities within the project team. Although Jira's mobile application and project creation capabilities were perceived as beneficial by participants, some challenges were observed with the Jira tool, and other enhancements are needed; these are addressed in the mock-up section below.
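As a hedged illustration of the suggested mobile default, the following Python snippet expresses the "only my user stories in the open sprint" view as standard JQL and fetches it through Jira's REST search endpoint. The endpoint and JQL functions are real Jira features; the server URL, credentials, and field selection are placeholders for this sketch.

```python
import requests

JIRA = "https://example.atlassian.net"   # hypothetical instance
JQL = "assignee = currentUser() AND issuetype = Story AND sprint in openSprints()"

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,status"},
    auth=("user@example.com", "api-token"),  # hypothetical credentials
    timeout=30,
)
# Print only the current user's open-sprint stories, as the suggested default view.
for issue in resp.json().get("issues", []):
    print(issue["key"], issue["fields"]["summary"], issue["fields"]["status"]["name"])
```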
5.4 Design implications
Based on the feedback and suggestions gathered from participants through usability testing and the survey, we developed a mock-up of the Jira application. Only basic functionalities and screens are visualized, including the new functionality.
5.4.1 Create a Project by Uploading a File
Table 2. Create project mock-up screen summary.
Once the manager has completed the Software Requirements Specification (SRS) file, which includes sections for functional requirements, they can upload the file to the application. The application will read it and generate user stories based on its contents. This significantly improves the process of generating user stories, saving valuable time that would otherwise be spent creating them manually (a small parsing sketch is given at the end of this subsection). Table 2 shows each mock-up screen with its description.
<table>
<thead>
<tr>
<th>Mock-up screen</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><img src="image1.png" alt="Figure 14. Project list" /> <img src="image2.png" alt="Figure 15. Select file step one" /></td>
<td>Figure 14 shows the project tab, which displays all the manager's projects. To create a project, the manager clicks the "plus" icon. A manager can also create a project using the "upload file" icon; if a file is uploaded, it is automatically extracted, as shown in Figures 15, 16, and 17.</td>
</tr>
<tr>
<td><img src="image3.png" alt="Figure 16. Select file step two" /> <img src="image4.png" alt="Figure 17. Select file step three" /></td>
<td></td>
</tr>
</tbody>
</table>
In Figure 18, the application automatically extracts the data and fills in all fields except the project template. Figure 19 shows the list of templates that appears when the manager clicks on the project template field.
When the manager clicks "Next" in Figure 18, the application shows the "Epics" tab in Figure 20, which contains a list extracted from the file. A manager can delete, edit, or expand an epic to display the user stories related to it, as shown in Figure 21.
In Figure 22, the application displays a success message after creation; the new project is then displayed in the list, as in Figure 23.
Figure 24 shows the available options for each project. Figure 25 shows a clone form for managers that copies all the information from a selected project.
Figure 26 shows an edit form for managers to modify project information.
Figure 27 shows the confirmation message that appears when the manager deletes a project.
When the manager clicks a project, the application shows its related tabs: Figure 28 shows epics, and bugs with priority are shown in the backlog tab in Figure 29.
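The parsing step behind the proposed upload feature could look like the following minimal sketch (our illustration, not the paper's implementation): it reads the functional-requirements section of a plain-text SRS and turns each requirement line into a user-story summary that could then be posted to Jira. The "FR-n:" line format is an assumption made for the example.

```python
import re

def extract_user_stories(srs_text: str) -> list[str]:
    # Assume requirements appear under a "Functional Requirements" heading,
    # one per line, e.g. "FR-3: Delete a volunteer opportunity."
    parts = re.split(r"(?im)^functional requirements\s*$", srs_text, maxsplit=1)
    if len(parts) < 2:
        return []
    stories = []
    for line in parts[1].splitlines():
        m = re.match(r"\s*FR-\d+:\s*(.+)", line)
        if m:
            stories.append(f"As a user, I want to {m.group(1).rstrip('.').lower()}")
    return stories

srs = """Functional Requirements
FR-1: Create a volunteer opportunity.
FR-2: Edit a volunteer opportunity name.
"""
print(extract_user_stories(srs))
```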
5.4.2 Resource Management Mock-Up
These functions include adding, deleting, and editing employee information, as well as delegating an employee, which facilitates handover to a newly appointed employee: all user stories, bugs, and tasks are automatically assigned to the new employee (a hypothetical automation sketch follows the table below). Table 3 shows each mock-up screen with its description.
Table 3. Resource management mock-up screen summary
<table>
<thead>
<tr>
<th>Mock-up screen</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>[Mock-up Screen Image]</td>
<td>Figure 30 shows a list of the resources currently assigned to a project. It also allows the manager to add new resources.</td>
</tr>
<tr>
<td>[Add New Resource Screen Image]</td>
<td>Figure 31 shows the form for adding a new resource; the information about the resource is then displayed in the resource tab of the project, as in Figure 32.</td>
</tr>
</tbody>
</table>
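The delegation feature described above could be realized programmatically as in the following hedged sketch: it bulk-reassigns all open issues from one employee to another via Jira's REST API. The endpoints are real Jira Server API v2 routes; the server URL, user names, credentials, and JQL filter are placeholders for this illustration.

```python
import requests

JIRA = "https://jira.example.com"   # hypothetical server
AUTH = ("admin", "password")        # hypothetical credentials

def delegate(old_user: str, new_user: str) -> None:
    # Find every unresolved issue currently assigned to the departing employee.
    jql = f'assignee = "{old_user}" AND statusCategory != Done'
    search = requests.get(f"{JIRA}/rest/api/2/search",
                          params={"jql": jql, "fields": "key"},
                          auth=AUTH, timeout=30).json()
    # Reassign each one to the newly appointed employee.
    for issue in search.get("issues", []):
        requests.put(f"{JIRA}/rest/api/2/issue/{issue['key']}/assignee",
                     json={"name": new_user}, auth=AUTH, timeout=30)

delegate("employee.ab", "employee.cd")
```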
Figure 33 shows the available options for each resource, and Figure 34 shows a form for editing an employee's role and name. Figure 35 shows a delegation form for setting the start and end dates of a delegation period for an employee, and Figure 36 displays a confirmation message indicating the deletion of the selected resource.
6. LIMITATIONS AND FUTURE WORK
The focus of this study is to explore areas for improvement in the Jira tool and to assess the overall satisfaction level. However, this study did not explore the extent to which the use of Jira affects a project's success, such as whether the project is delivered on time, within budget, and on schedule. The authors administered the usability test to a total of six project managers and six developers, achieving the initial target of six developers and six project managers. However, it would be beneficial to increase the sample size to include project managers and developers from different Saudi Arabian companies with varying levels of experience to ensure the validity of the results. An extensive study is required to determine the characteristics and specific needs of each role. In addition, comparing the effectiveness of two popular software management tools, TFS and Jira, could provide valuable insights, since the authors noted from the survey that TFS is also widely used. Future usability testing should include new roles, such as DevOps engineers and software quality assurance testers, to assess their satisfaction with the Jira software management tool and better understand how different users perceive it. For further examination, it is necessary to study the Jira project management tool in conjunction with Kanban, for example, or any other Agile method currently supported by Jira.
7. CONCLUSION
In conclusion, the increasing use of software tools has transformed the way projects are planned, executed, and monitored. Digital solutions have replaced traditional manual processes and analog tools as a result of technological advances, especially those associated with Agile methodologies, which have emerged as a popular approach to accommodating rapid changes in requirements through flexibility and collaboration; Jira is the leading application in this area. In this study, three key research questions were addressed to evaluate the satisfaction of project managers and developers with Jira Software's project management tool and to identify improvements that may enhance the tool. According to the findings, most project managers and developers found Jira an easy and efficient tool to use, although improvements were suggested, such as the ability to edit, delete, or clone projects, a dedicated resource management system, and automatic task reassignment.
Through the proposed enhancements, the authors intend to provide project teams with effective management capabilities. However, further research is required to explore whether Jira has a direct impact on project success, to establish a larger sample size to validate the results, and to resolve any remaining issues.
Inferring static non-monotonically sized types through testing
Ron van Kesteren, Olha Shkaravska, Marko van Eekelen
{R.vanKesteren, O.Shkaravska, M.vanEekelen}@cs.ru.nl
Institute for Computing and Information Sciences
Radboud University Nijmegen
Abstract. We propose a size analysis algorithm that combines testing and type checking to automatically obtain static output-on-input size dependencies for first-order functions. Attention is restricted to functions for which the size of the result is strictly polynomial, not necessarily monotonic, in the sizes of the arguments.
To infer a size dependency, the algorithm generates hypotheses for increasing degrees of polynomials. For each degree, a polynomial is defined by a finite number of points. The function is evaluated with a large enough set of appropriate measurement data to get these points and determine the coefficients of the polynomial. The resulting hypothesis is then checked using an existing type checking procedure.
The algorithm is not tied to the current sized type checker. The sized type of a function will be inferred if it exists and if it is accepted by the sized type checker. For terminating functions, our sized type inference algorithm is complete with respect to type checking. Hence, using a more complete sized type checker yields a more complete sized type inference algorithm.
Keywords: Memory complexity analysis, type checking, testing, Lagrange interpolation
1 Introduction
Embedded systems or server applications often have limited resources available. Therefore, it can be important to know in advance how much time or memory a computation is going to take, for instance to determine the minimum amount of memory that must be put in a system to enable all desired operations. Economically, the developer does not want to include too much memory, but the costs of the application failing will be much higher.
Such decisions can only reliably be based on formally verified upper bounds of the resource consumption. However, an advanced detailed analysis of these bounds requires knowledge of the sizes of the data structures used throughout the program [ESvK+07]. Trivially, the time it takes to iterate over a list depends on the size of that list. In this paper we focus on the task of automatically deriving the exact output-on-input size dependencies of functions.
Size dependencies can be represented in function types. We focus on shapely functions, where shapely means that the size relations are exactly polynomial (not necessarily monotonic). As an example, consider the function that computes the Cartesian product of two lists. It generates all pairs of elements, one taken from the first list, the other from the second.
```haskell
pairs x []     = []
pairs x (y:ys) = [x,y] : pairs x ys

cprod []     ys = []
cprod (x:xs) ys = pairs x ys ++ cprod xs ys
```
The size of a list is the number of nodes it consists of (its length). Given lists of size 3 and 2, the output is a list of size \(3 \times 2 = 6\) whose elements are pairs, i.e., lists of size 2.
\[
\text{cprod } [1,2,3] \; [4,5] = [[1,4],[1,5],[2,4],[2,5],[3,4],[3,5]]
\]
The sized type of the \text{cprod} function expresses the general relation between argument and result sizes. When the two input lists have size \( s_1 \) and \( s_2 \) respectively, the output is a list of lists, where the outer list has size \( s_1 \times s_2 \) and the inner lists all have size 2.
\[
\text{cprod} : [\text{Int}]^{s_1} \rightarrow [\text{Int}]^{s_2} \rightarrow [[\text{Int}]^2]^{s_1 \ast s_2}
\]
In general, all lists at the input side, before the arrow, have an associated size variable. After the arrow, at the output side, all lists have an associated polynomial that determines the size of the output list. These polynomials are defined in terms of the input size variables. The current presentation is limited to a language over lists for reasons of simplicity; sized types are straightforwardly generalized to general data structures and other programming languages.
Recently, we have developed a sized type checking procedure to formally verify polynomially sized types (section 2) [SvKvE07]. Given a sized type, the procedure automatically checks if the function definition satisfies that type. Unfortunately, inferring such types is a lot more challenging than type checking and the type system approach does not straightforwardly extend (section 2.3). Therefore,

we have suggested an alternative method of inferring sized types [SvKvE07]. This paper develops this method into a practical type inference algorithm.
The method is based on the observation that it is relatively easy to generate hypotheses for a size dependency by testing. Because a polynomial of a given degree is determined by a finite number of values, its coefficients can be computed from the output sizes of run-time tests (figure 1). If the size expression is indeed a polynomial of that degree, it can be only that polynomial. This theory is used to create a practical algorithm that yields hypotheses for sized types (section 3).
Combining hypothesis generation and type checking yields an algorithm that can infer the sized type of a function (section 4). The algorithm generates hypotheses for an increasing degree. For each degree, hypotheses for all polynomial size expressions in the output type are determined. The resulting sized type is checked using the sized type checking procedure. Thus:
1. Infer the underlying type (without sizes) using standard type inference
2. Annotate the underlying type with size variables
3. Assume the degree of the polynomial
4. For every output size:
- Determine which tests are needed
- Do the required series of test runs
- Compute the polynomial coefficients based on the test results
5. Annotate the type with the size expressions found
6. Check the annotated type
7. If checking fails, repeat from step 4 assuming a higher degree
In practice, an upper limit on the degree can be used as a stopping criterion. Note that the algorithm can also work with any other procedure that automatically checks polynomially sized types. Indeed, for terminating programs the algorithm is only guaranteed to find the sized type if one exists that is accepted by the type checker.
The main contribution of this paper is developing the method suggested in [SvKvE07] into a practical sized type inference algorithm. Specifically, this means dealing with cases where the function definition only partially defines the output size polynomial: when the output type is a nested list and the output value is the empty list, there is no information on the sizes of the inner lists.
2 Sized type checking
Essentially, our approach to sized type inference for shapely functions is based on reducing inference to sized type checking. This section briefly describes the existing strict size-aware type system for a functional language and accompanying type checking procedure [SvKvE07] that we use in the inference algorithm. This also motivates our approach to type inference.
2.1 Sized Types
The zero-order types we consider are integers, strictly sized lists of integers, strictly sized lists of strictly sized lists, etc. For lists of lists the element lists have to be of the same size, and in fact it would be more precise to speak about matrix-like structures. For instance, the type \([[\text{Int}]^3]^2\) is given to a list whose two elements are both lists of exactly three integers, such as \([[2,5,3], [7,1,6]]\).
\[
\text{Types } \tau ::= \text{Int} | \alpha | [\tau]^p \quad \alpha \in \text{TypeVar}
\]
The \(p\) in this definition denotes a size expression. Size expressions are polynomials in size variables.
\[
\text{SizeExpr } p ::= n \mid s \mid p + p \mid p - p \mid p * p \qquad n \in \mathbb{N},\; s \in \text{SizeVar}
\]
For instance, type \([\alpha]^4\) represents a list containing four elements of some type \(\alpha\), and \([\text{Int}]^{(s_1 - s_2)^2}\) represents a list of integers of size \((s_1 - s_2)^2\), where \(s_1\) and \(s_2\) are size variables. Size expressions are subject to the standard associativity, commutativity and distributivity laws for addition and multiplication. Types with negative sizes have no meaning.
Because the current system does not support Currying, first-order types are functions from tuples of zero-order types to zero-order types.
\[
\text{FTypes } \tau^f ::= \tau_1 \ldots \tau_n \rightarrow \tau_{n+1}
\]
For example, the type of \(\text{cprod}\), \([\text{Int}]^{s_1} \rightarrow [\text{Int}]^{s_2} \rightarrow [[\text{Int}]^2]^{s_1 \ast s_2}\), is a first-order type. In well-formed first-order types, the argument types are annotated only by size variables and the result type is annotated by size expressions in these variables. Type and size variables occurring in the result type should also occur in at least one of the argument types. Thus, the type of \(\text{cprod}\) is a well-formed type, whereas \([\alpha]^{s_1 + s_2} \rightarrow [\alpha]^{2 \ast s_1}\) is not.
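As an illustration only (our Haskell rendering, not part of the paper or its checker), the grammars above translate directly into data types:

```haskell
-- Zero-order types: Int, type variables, and sized lists.
data Ty = TInt
        | TVar String
        | TList Ty SizeExpr

-- Size expressions: natural-number constants, size variables, and the
-- ring operations.
data SizeExpr = Nat Integer
              | SVar String
              | Add SizeExpr SizeExpr
              | Sub SizeExpr SizeExpr
              | Mul SizeExpr SizeExpr

-- First-order types: from a tuple of zero-order types to a zero-order type.
data FTy = FTy [Ty] Ty
```

In this rendering, well-formedness says that every size annotation inside the argument types of an `FTy` is a plain `SVar` and that every variable occurring in the result type also occurs in some argument type.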
2.2 Typing system
Previously, we have developed a sound size-aware type system and a type checking procedure for a first-order functional language with call-by-value semantics [SvKvE07]. The language supports lists and integers and standard constructs for pattern matching, if-then-else branching, and let-binding.
The typing rules follow the intuition on how sizes are used and changed during function evaluation. The construction of a list results in a list that is one element longer than the tail. The \(\text{then}\) and \(\text{else}\) parts of the if-statement are required to yield the same size. The same goes for the \(\text{nil}\) and \(\text{cons}\) branch of pattern matching, but that rule also takes into account that the matched list is known to be empty in the \(\text{nil}\) branch: when matching a list of size \(s\), if the \(\text{cons}\) branch has size \(s \ast 4\), the \(\text{nil}\) branch can have size 0 because, there, \(s = 0\) and thus \(0 = s \ast 4\).
In the formal rules, a context \(\Gamma\) is a mapping from zero-order program variables to zero-order types, a signature \(\Sigma\) is a mapping from function names to
first-order types, and \( D \) is a set of Diophantine equations that keeps track of which lists are empty. A typing judgment is a relation of the form \( D; \Gamma \vdash_{\Sigma} e : \tau \), which means that if the free program variables of the expression \( e \) have the types defined by \( \Gamma \), and the functions called have the types defined by \( \Sigma \), and the size constraints \( D \) are satisfied, then \( e \) will be evaluated to a value of type \( \tau \), if it terminates. For example:
\[
\frac{D \vdash p = p' + 1 \qquad \Gamma(hd) = \tau \qquad \Gamma(tl) = [\tau]^{p'}}{D;\, \Gamma \vdash_{\Sigma} \text{cons}(hd, tl) : [\tau]^{p}} \;\text{(Cons)}
\]
\[
\frac{\Gamma(x) = \text{Int} \qquad D;\, \Gamma \vdash_{\Sigma} e_t : \tau \qquad D;\, \Gamma \vdash_{\Sigma} e_f : \tau}{D;\, \Gamma \vdash_{\Sigma} \text{if } x \text{ then } e_t \text{ else } e_f : \tau} \;\text{(If)}
\]
\[
\frac{D, p = 0;\; \Gamma, x : [\tau']^{p} \vdash_{\Sigma} e_{\text{nil}} : \tau \qquad D;\; \Gamma, x : [\tau']^{p},\, hd : \tau',\, tl : [\tau']^{p-1} \vdash_{\Sigma} e_{\text{cons}} : \tau}{D;\; \Gamma, x : [\tau']^{p} \vdash_{\Sigma} \text{match } x \text{ with } \{\text{nil} \Rightarrow e_{\text{nil}} \mid \text{cons}(hd, tl) \Rightarrow e_{\text{cons}}\} : \tau} \;\text{(Match)}
\]
Sized type checking eventually amounts to checking entailments of the form \( D \vdash p = p' \), which means that \( p = p' \) is derivable from \( D \) in the axiomatics of the ring of integers. Because \( p \) and \( p' \) are known polynomials of universally quantified size variables, comparing them is straightforward. For instance, for the \( \text{cprod} \) function we obtain \( s_1 = 0 \vdash s_1 \times s_2 = 0 \) (in the \text{nil} branch) and \( \vdash s_1 \times s_2 = s_2 + (s_1 - 1) \times s_2 \) (in the \text{cons} branch). A syntactical condition that prohibits let-bindings before pattern matching was shown to be necessary and sufficient to make type checking decidable for this system [SvKvE07].
2.3 Motivation
Type inference in this type system is not straightforward. Applying the typing rules to types with unknown size expressions leads to sets of non-linear equations [SvKvE07] for which we know that there is no algorithm that solves them all. Of course, it is possible to write an algorithm that solves a subset of these cases, but then it is hard to determine to which subset of function definitions this corresponds and, consequently, if type inference is complete. It is also hard, and not desirable, to restrict the type system so that we can be sure that only solvable equations are generated (as Mycroft [Myc84] did for the Milner calculus [Mil78]). Both approaches most likely add unwanted restrictions, whereas we want our type inference algorithm to be as complete as possible.
The testing approach presented in this paper does not use the type system directly. Hypotheses for types are constructed based on the observed behavior of the function. This avoids solving non-linear systems of equations. To validate the hypotheses we use the existing, decidable, type checking algorithm. However, in practice any type checker can be used. The algorithm ensures that, for terminating programs, type inference is complete with respect to the type checker that is used.
3 Generating size hypotheses
This section develops a procedure that uses run-time tests to automatically obtain a hypothesis for an output size polynomial, given its maximum degree. This hypothesis is correct if the output size is in fact a polynomial of the same or lower degree. In section 4, this is combined with the type checker from section 2 to obtain a sized type inference algorithm.
The essence of the problem is giving the conditions under which a set of data points has a unique polynomial interpolation and constructing an algorithm to find points satisfying these conditions. This is complicated by the fact that for nested lists the size function is only partially defined by the function definition (section 3.3).
3.1 Interpolating a polynomial
Looking at the sizes of the arguments and results of some tests of the $\text{cprod}$ function gives the impression that the size of the outer list in the output is always the product of the sizes of the arguments. More specifically, if $p_1(s_1, s_2)$ is the size of the outer list given arguments of size $s_1$ and $s_2$, tests yielding $p_1(1, 3) = 3$, $p_1(4, 6) = 24$, and $p_1(3, 5) = 15$ may be interpolated to $p_1(s_1, s_2) = s_1 \cdot s_2$. Such a hypothesis can also be derived automatically by fitting a polynomial to the size data. We are looking for the polynomial that best approaches the data, i.e., the Lagrange interpolation. The Lagrange interpolation is unique under some conditions on the data, which are explored in polynomial interpolation theory [CL87,Lor92]. If the true size expression is polynomial and the degree of the unique Lagrange interpolation is high enough, the interpolating polynomial coincides with the true size expression.
We seek a condition under which the interpolation is unique. In the well-known univariate case this is simple. A polynomial $p(x)$ of degree $m$ with coefficients $a_1, \ldots, a_{m+1}$ can be written as follows:
$$a_1 + a_2 x + \ldots + a_{m+1} x^m = p(x)$$
The values of the polynomial function in any $m+1$ points determine a system of linear equations w.r.t. the polynomial coefficients. More specifically, given the set $(x_i, p(x_i))$ of pairs of numbers, where $1 \leq i \leq m+1$, and coefficients $a_1, \ldots, a_{m+1}$, the set of equations can be represented in the following matrix form, where only the $a_i$ are unknown:
$$\begin{bmatrix}
1 & x_1 & \cdots & x_1^{m-1} & x_1^m \\
1 & x_2 & \cdots & x_2^{m-1} & x_2^m \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
1 & x_m & \cdots & x_m^{m-1} & x_m^m \\
1 & x_{m+1} & \cdots & x_{m+1}^{m-1} & x_{m+1}^m
\end{bmatrix}
\begin{bmatrix}
a_1 \\
a_2 \\
\vdots \\
a_m \\
a_{m+1}
\end{bmatrix}
=
\begin{bmatrix}
p(x_1) \\
p(x_2) \\
\vdots \\
p(x_m) \\
p(x_{m+1})
\end{bmatrix}$$
The determinant of the left matrix, which contains the measurement points, is called the Vandermonde determinant. For pairwise different points $x_1, \ldots, x_{m+1}$ it is
non-zero. This means that, as long as the output size is measured for \( m + 1 \) different input sizes, there exists a unique solution for the system of equations and, thus, a unique interpolating polynomial.
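As a minimal sketch of this univariate case (ours, not the paper's implementation), the unique interpolating polynomial can also be evaluated directly from the \(m+1\) measurements with the Lagrange basis, avoiding an explicit solve of the Vandermonde system:

```haskell
-- Evaluate, at x, the unique polynomial of degree <= m through the m+1
-- measurement points (xs !! i, ys !! i).  The xs must be pairwise
-- distinct -- exactly the condition that makes the Vandermonde
-- determinant non-zero.
lagrange :: [Rational] -> [Rational] -> Rational -> Rational
lagrange xs ys x =
  sum [ yi * product [ (x - xj) / (xi - xj) | xj <- xs, xj /= xi ]
      | (xi, yi) <- zip xs ys ]
```

For instance, measurements \(p(0) = 0\), \(p(1) = 1\), \(p(2) = 2\) give `lagrange [0,1,2] [0,1,2] s == s` for every `s`, recovering the identity polynomial.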
The conditions under which there exists a unique polynomial that interpolates multivariate data are not so trivial. A polynomial of degree \( m \) and dimension \( n \) (the number of variables) has \( N_m^n = \binom{m+n}{n} \) coefficients. The condition under which a set of data uniquely determines a polynomial interpolation is stated as a condition on a set of nodes \( W = \{\bar{w}_i : i = 1, \ldots, N_m^n\} \), the input sizes for which a measurement is done, such that for every set of associated measurement data \( \{f_i : i = 1, \ldots, N_m^n\} \), there is a unique polynomial \( p(\bar{w}) = \sum_{0 \leq |j| \leq m} a_j \bar{w}^j \) with total degree \( m \) which interpolates the given data at the nodes [CL87]. That is, \( p(\bar{w}_i) = f_i \), where \( 1 \leq i \leq N_m^n \). Here \( \bar{w}^j = w_1^{j_1} \cdots w_n^{j_n}, |j| = j_1 + \cdots + j_n \) is the usual multivariate notation. In the next subsections, node configurations that satisfy this condition are defined, starting with bivariate polynomials and ending with the general case.
3.2 Measuring bivariate polynomials
For a two-dimensional polynomial of degree \( m \), the condition on the nodes that guarantees a unique polynomial interpolation is as follows. In the input space, there are \( m + 1 \) lines, each containing \( m + 1, \ldots, 1 \) of the nodes, respectively, and the nodes do not lie on the intersections of the lines. Such a configuration is depicted for parallel lines in figure 2a. This corresponds to the NCA configuration studied, for instance, by Chui [CL87].
**Definition 1 (Two-dimensional node configuration).** There exist lines in the input space, \( \gamma_1, \ldots, \gamma_{m+1} \), such that \( m + 1 \) nodes of \( W \) lie on \( \gamma_{m+1} \), \( m \) nodes of \( W \) lie on \( \gamma_m \setminus \gamma_{m+1} \), \( \ldots \), and 1 node of \( W \) lies on \( \gamma_1 \setminus (\gamma_2 \cup \cdots \cup \gamma_{m+1}) \).
Assuming the function terminates on all inputs, such points can be found algorithmically, at least for outermost lists, using a triangle of points on parallel lines (figure 2b).
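A sketch of this triangle in Haskell (`triangleNodes` is our hypothetical helper, not from the paper):

```haskell
-- The (m+1)(m+2)/2 nodes of figure 2b: the lines y = 0 .. m carry
-- m+1, m, ..., 1 nodes respectively -- as many points as a bivariate
-- polynomial of degree m has coefficients.
triangleNodes :: Int -> [(Int, Int)]
triangleNodes m = [ (x, y) | y <- [0 .. m], x <- [0 .. m - y] ]
```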
An example of the two-dimensional case is the `cprod` function from the introduction. Standard type inference and annotation gives the following type:
\[
\text{cprod} : [\alpha]^{s_1} \rightarrow [\alpha]^{s_2} \rightarrow [[\alpha]^{p_2(s_1,s_2)}]^{p_1(s_1,s_2)}
\]
We derive that \( p_1(s_1, s_2) = s_1 \ast s_2 \) assuming \( p_1 \) is a quadratic polynomial:
\[
p_1(s_1, s_2) = a_{0,0} + a_{0,1}s_1 + a_{1,0}s_2 + a_{1,1}s_1s_2 + a_{0,2}s_1^2 + a_{2,0}s_2^2
\]
Running the function at the six nodes from figure 2b gives the following results:
| \(s_1\) | \(s_2\) | \(x\) | \(y\) | \(\text{cprod } x\ y\) | \(p_1(s_1, s_2)\) | \(p_2(s_1, s_2)\) |
|---|---|---|---|---|---|---|
| 0 | 0 | [] | [] | [] | 0 | - |
| 1 | 0 | [0] | [] | [] | 0 | - |
| 0 | 1 | [] | [0] | [] | 0 | - |
| 1 | 1 | [0] | [1] | [[0,1]] | 1 | 2 |
| 2 | 1 | [0,1] | [2] | [[0,2],[1,2]] | 2 | 2 |
| 1 | 2 | [0] | [1,2] | [[0,1],[0,2]] | 2 | 2 |
This defines the following linear system of equations for the coefficients of $p_1$:
\begin{align*}
a_{0,0} &= 0 \\
a_{0,0} + a_{0,1} + a_{0,2} &= 0 \\
a_{0,0} + a_{1,0} + a_{2,0} &= 0 \\
a_{0,0} + a_{0,1} + a_{1,0} + a_{0,2} + a_{1,1} + a_{2,0} &= 1 \\
a_{0,0} + 2a_{0,1} + a_{1,0} + 4a_{0,2} + 2a_{1,1} + a_{2,0} &= 2 \\
a_{0,0} + a_{0,1} + 2a_{1,0} + a_{0,2} + 2a_{1,1} + 4a_{2,0} &= 2
\end{align*}
The unique solution is $a_{1,1} = 1$ with the rest of the coefficients zero. Thus, we obtain the correct $p_1(s_1, s_2)$ equal to $s_1 \ast s_2$.
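As a quick check (ours, not in the paper): substituting \(a_{1,1} = 1\) and all other coefficients zero, the first three equations reduce to \(0 = 0\), the fourth to \(1 = 1\), and the last two to \(2 = 2\), so the solution is consistent with all six measurements.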
This procedure is relatively straightforward. However, there is a problem in repeating it for $p_2$. There are cases in which nodes have no corresponding output size (the dashes in the table). cprod only partially defines $p_2$, because the size of the inner lists can only be determined when there is at least one such list, that is, when the outer list is non-empty. As can be seen in figure 2d, for cprod the outer list is empty whenever one of the two input lists is empty. In the next section, we show that, despite this, it is still possible to always find enough measurements, and we give an upper bound on the number of nodes that have to be searched.
3.3 Handling partial definedness
From the example in the previous section, it is clear that care should be taken when searching for hypotheses for output types with nested lists. In general, for \([\ldots[\alpha]^{p_k}\ldots]^{p_1}\) we will not find a value for \(p_j\) at a node if one of the outer polynomials, \(p_1\) to \(p_{j-1}\), is zero at that node. Thus, the nodes where \(p_1\) to \(p_{j-1}\) are zero should be excluded from the testing process. Here, we show that, despite this, it is always possible to find enough nodes, so that it becomes possible to construct an algorithm to find them.
First note that we do not consider nested lists with the size of the outer list a constant zero, like $[[\tau]^q]^0$, because it is not a principal type. Also, remember that we are searching parallel lines \( y = i \) for the node configuration. Then, for
any non-zero polynomial there is a finite number of lines \( y = i \), which we will call root lines, where \( p(x, i) = 0 \) (see lemma 1). There are infinitely many other lines.
**Lemma 1.** A polynomial \( p(x, y) \) of degree \( m \) that is not constant 0 has at most \( m \) root lines \( y = i \), such that \( p(x, i) = 0 \).
**Proof.** Suppose there are more than \( m \) root lines. Then, it is easy to pick \( 1, \ldots, m + 1 \) nodes on \( m + 1 \) root lines. With these nodes, at which \( p(x, y) = 0 \), the system of linear equations for the coefficients of \( p \) will have the zero-solution, that is, all the coefficients of \( p \) will be zeros. This contradicts the assumption that \( p \) is not constant 0.
Because of this property, diagonal search can always find as many nodes \((x, y)\) as desired, such that \( p(x, y) \neq 0 \) (see figure 2c, where roots are marked with crosses). In fact, without requiring diagonal search, we can give a limit on the number of parallel lines \( y = i \) and nodes on them that have to be searched at most. Essentially, we just try to find the triangle shape (as in figure 2b) while skipping all crosses. First, we show that for a nested list type \( [[\alpha]^q]^p \) with bivariate polynomial sizes \( q \) and \( p \), only the nodes in \([0, \ldots, m_1 + m_2] \times [0, \ldots, m_1 + m_2]\) have to be searched to determine \( q \), where \( m_1 \) and \( m_2 \) are the degrees of \( p \) and \( q \) respectively.
Say one needs to find the coefficients of an output type \( [[\alpha]^q]^p \); let \( n = 2 \) be the number of variables, \( m_1 \) the degree of \( p(x, y) \), and \( m_2 \) the degree of \( q(x, y) \). One looks for test points for \( q \) that determine a unique polynomial interpolation at places where \( p(x, y) \neq 0 \). We restrict ourselves to lines parallel to the \( x \)-axis and look for \((m_2 + 1)(m_2 + 2)/2\) data points satisfying the condition from definition 1.
**Lemma 2.** When looking for test points for a polynomial \( q(x, y) \) that determine a unique polynomial interpolation at places where another polynomial \( p(x, y) \neq 0 \), it is sufficient to search the lines \( y = 0, \ldots, y = m_1 + m_2 \) in the square \([0, \ldots, m_1 + m_2] \times [0, \ldots, m_1 + m_2]\).
**Proof.** For the configuration it is sufficient to have \( m_2 + 1 \) lines with at least \( m_2 + 1 \) points where \( p(x, y) \neq 0 \). Due to lemma 1 there are at most \( m_1 \) lines \( y = i \) such that \( p(x, i) = 0 \), so at least \( m_2 + 1 \) are not root lines for \( p \). The polynomial \( p(x, j) \), with \( y = j \) not a root line, has at most degree \( m_1 \), thus \( y = j \) contains at most \( m_1 \) nodes \((x, j)\), such that \( p(x, j) = 0 \). Otherwise, it would have been constant zero, and thus a root line. Hence, this leaves at least \( m_2 + 1 \) points on these lines for which \( p \) is not zero.
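A Haskell sketch of this bounded search (our illustration; `outerSize` is a hypothetical stand-in for running the program at the given input sizes and measuring the outer output size):

```haskell
-- Search the square [0..side] x [0..side] line by line, skipping nodes
-- where the outer size p is zero (the crosses of figure 2c), and keep
-- the first m2+1 lines that each yield m2+1 usable nodes.  By lemma 2,
-- side = m1 + m2 always suffices.
usableNodes :: Int -> Int -> ((Int, Int) -> Integer) -> [[(Int, Int)]]
usableNodes side m2 outerSize =
  take (m2 + 1)
    [ pts
    | y <- [0 .. side]
    , let pts = take (m2 + 1) [ (x, y) | x <- [0 .. side]
                                       , outerSize (x, y) /= 0 ]
    , length pts == m2 + 1 ]
```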
This straightforwardly generalizes to all nested types with polynomials in two variables, say \([\ldots[\alpha]^{p_k}\ldots]^{p_1}\). If we want to derive the coefficients of \(p_i\), searching the square of input values \([0, \ldots, \sum_{j=1}^{k} m_j] \times [0, \ldots, \sum_{j=1}^{k} m_j]\) suffices, where \(m_j\) is the degree of \(p_j\). Each \(p_j\) has at most \(m_j\) root lines, so there are at most \(\sum_{j=1}^{i-1} m_j\) root lines to avoid. Also, each of the \(p_j\) can have at most \(m_j\) zeros on a non-root line. Hence, when the side length is \(\sum_{j=1}^{k} m_j + 1\), there are always \(m_i + 1\) values known.
For \( \text{cprod} \) there are two size expressions to derive, \( p_1 \) for the outer list and \( p_2 \) for the inner lists. Deriving that \( p_1(s_1, s_2) = s_1 * s_2 \) is no problem. Because \( p_1 \) has roots for \( s_1 = 0 \) and for \( s_2 = 0 \), these nodes should be skipped when measuring \( p_2 \) (see figure 2d).
3.4 Generalizing to \( n \)-dimensional polynomials
The generalization of the condition on nodes for a unique polynomial interpolation to polynomials in \( n \) variables is a straightforward inductive generalization of the two-dimensional case. In a hyperspace there have to be hyperplanes, on each of which nodes lie that satisfy the condition for the \( n - 1 \) dimensional case. A hyperplane \( K_j^n \) may be viewed as a set in which test points for a polynomial of \( n - 1 \) variables of degree \( j \) lie. There must be \( N_j^{n-1} = N_j^n - N_{j-1}^n \) such points. The condition on the nodes is defined by:
**Definition 2 (\( n \)-dimensional node configuration).** The NCA configuration for \( n \) variables (\( n \)-dimensional space) is defined inductively on \( n \) [CL87]. Let \( \{x_1, \ldots, x_{N_m^n}\} \) be a set of distinct points in \( \mathbb{R}^n \) such that there exist \( m + 1 \) hyperplanes \( K_j^n \), \( 0 \leq j \leq m \), with
\[
\begin{align*}
x_{N_{m-1}^{n} + 1}, \ldots, x_{N_m^n} & \in K_m^n \\
x_{N_{j-1}^{n} + 1}, \ldots, x_{N_j^n} & \in K_j^n \setminus (K_{j+1}^n \cup \ldots \cup K_m^n), \quad \text{for } 0 \leq j \leq m - 1
\end{align*}
\]
(with \( N_{-1}^n = 0 \)), and each set of points \( x_{N_{j-1}^{n} + 1}, \ldots, x_{N_j^n} \), \( 0 \leq j \leq m \), considered as points in \( \mathbb{R}^{n-1} \), satisfies NCA in \( \mathbb{R}^{n-1} \).
Thus, similarly to lines in a square in the two-dimensional case, parallel hyperplanes in a hyperspace have to be searched. Using reasoning similar to the two-dimensional case, one can show that it is always sufficient to search a hypercube with sides \([0, \ldots, \sum_{i=1}^{k} m_i]\). The proof also generalizes straightforwardly.
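For instance (our sketch, with the simplex points standing in for one such configuration), the \(N_m^n\) integer points of the \(n\)-dimensional simplex generalize the triangle of figure 2b and can be enumerated as:

```haskell
-- All points (w_1, ..., w_n) with w_i >= 0 and w_1 + ... + w_n <= m:
-- exactly N^n_m = binomial(m+n, n) of them, lying on the m+1 parallel
-- hyperplanes w_1 = 0, ..., w_1 = m.
simplexNodes :: Int -> Int -> [[Int]]
simplexNodes 0 _ = [[]]
simplexNodes n m =
  [ w : ws | w <- [0 .. m], ws <- simplexNodes (n - 1) (m - w) ]
```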
4 Automatically inferring sized types
The type checking procedure from section 2 and the size hypothesis generation from section 3 are combined into a type inference algorithm by generating and checking hypotheses for an increasing degree. The algorithm is semi-decidable: it only terminates when the function is well-typable in the type system of the type checker used.
4.1 The algorithm
For any shapely program, the underlying type (the type without size annotations) can be derived by a standard type inference algorithm [Mil78]. After
Function: TryIncreasingDegrees
Input: the function definition
Output: the sized type of that function

```
TryIncreasingDegrees(m, f) =
  let type  = InferUnderlyingType(f)
      atype = AnnotateWithSizeVariables(type)
      vs    = GetOutputSizeVariables(atype)
      stype = GetSizedType(m, f, atype, vs, [])
  in if CheckSizedType(stype, f) then stype
     else TryIncreasingDegrees(m+1, f)
```

Function: GetSizedType
Input: a degree, the function definition with its annotated type, the variables to derive, and the polynomials already derived
Output: the sized type of that function if the degree is high enough

```
GetSizedType(m, f, atype, [], ps) =
  AnnotateWithSizeExpressions(atype, ps)
GetSizedType(m, f, atype, v:vs, ps) =
  let nodes   = GetNodeConf(m, atype, ps)
      results = RunTests(f, nodes)
      p       = DerivePolynomial(m, v, atype, results)
  in GetSizedType(m, f, atype, vs, p:ps)
```

Fig. 3. The weak type inference algorithm in pseudo-code
straightforwardly annotating input sizes with size variables and output sizes with size expression variables, we have for example
\[
\text{cprod} : [\alpha]^{s_1} \rightarrow [\alpha]^{s_2} \rightarrow [[\alpha]^{p_2(s_1, s_2)}]^{p_1(s_1, s_2)}
\]
To derive the size expressions on the right-hand side we use the following procedure. First, the maximum degree of the occurring size expressions is assumed, starting with zero. Then, a hypothesis is generated for each size expression. This is done from the outside in, because of the problems with partial definedness noted in section 3.3. After hypotheses have been obtained for all size expressions, they are added to the type and this hypothesis type is checked using the type checking algorithm. If it is accepted, the type is returned. If not, the procedure is repeated for a higher degree.
Figure 3 shows the algorithm in pseudo-code. Note that if the assumed degree is lower than the true degree, the derived polynomials may be wrong. In that case, the places where the size function is undefined can also not be determined correctly. The node configuration might then include points where the size expression is undefined, so that the test results do not provide enough information to uniquely infer the polynomial. In that case, by convention, the zero polynomial is returned.
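To make the loop concrete, here is a self-contained toy version (ours) for functions from one list to one flat list, reusing the `lagrange` sketch from section 3.1; re-testing the hypothesis on a few extra points stands in for the formal type checker, which we do not model here:

```haskell
-- Infer the output-size polynomial of f, if its degree is at most
-- maxDegree: test f on lists [1..s], interpolate the measured sizes,
-- and accept the hypothesis once it survives the stand-in "check".
sizePoly :: ([Int] -> [Int]) -> Int -> Maybe (Int -> Rational)
sizePoly f maxDegree = go 0
  where
    run s = fromIntegral (length (f [1 .. s]))   -- measured output size
    hyp m = lagrange (map fromIntegral [0 .. m]) (map run [0 .. m])
    ok p  = and [ p (fromIntegral s) == run s
                | s <- [maxDegree + 1 .. maxDegree + 4] ]
    go m | m > maxDegree = Nothing                -- stopping criterion
         | ok (hyp m)    = Just (hyp m . fromIntegral)
         | otherwise     = go (m + 1)
```

For example, `sizePoly (\xs -> xs ++ xs) 2` accepts the hypothesis at degree 1 and returns the polynomial \(p(s) = 2s\).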
If a type is rejected, this can mean two things. First, the assumed degree was too low and one of the size expressions has a higher degree. That is why the procedure continues with a higher degree.
Table 1. Type construction for four functions (n is the number of input variables, k the number of output polynomials). For each iteration of the algorithm, the degree (m) and the number of tests required (with the theoretical maximum \((1 + km)^n\) in parentheses) to get a hypothesis are given, assuming the space was searched using diagonal search.
| function | m | nr. of tests | type suggested | type checker |
|---|---|---|---|---|
| cprod | 0 | 1 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^0]^0\) | reject |
| (n = 2, k = 2) | 1 | 8 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^2]^{s_1+s_2-1}\) | reject |
| | 2 | 14 (25) | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^2]^{s_1 \ast s_2}\) | accept |
| append | 0 | 1 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [\alpha]^0\) | reject |
| (n = 2, k = 1) | 1 | 3 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [\alpha]^{s_1+s_2}\) | accept |
| competition | 0 | 1 | \([\alpha]^{s_1} \to [[\alpha]^0]^0\) | reject |
| (n = 1, k = 2) | 1 | 3 | \([\alpha]^{s_1} \to [[\alpha]^0]^0\) | reject |
| | 2 | 5 (5) | \([\alpha]^{s_1} \to [[\alpha]^2]^{s_1^2 - s_1}\) | accept |
| sqdiff | 0 | 1 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^0]^0\) | reject |
| (n = 2, k = 2) | 1 | 3 | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^0]^0\) | reject |
| | 2 | 8 (25) | \([\alpha]^{s_1} \to [\alpha]^{s_2} \to [[\alpha]^2]^{(s_1-s_2)^2}\) | accept |
Another possibility is that one of the size expressions is not a polynomial (the function definition is not shapely), or that the type cannot be checked due to incompleteness. In that case the algorithm will not terminate. Fortunately, in practice a suitable stopping criterion may be known. If the function is well-typable, the procedure will eventually find the correct sized type and terminate.
4.2 Examples
The algorithm is illustrated by four functions: cprod (Cartesian product), append (standard list concatenation), competition (generates a competition in which every team plays a home and away match against every other team), and sqdiff (illustration of non-monotonicity).
```haskell
competition xs = randomize_order (competition' xs [])

competition' []     ys = []
competition' (x:xs) ys = pairs x (xs ++ ys) ++ competition' xs (x:ys)

sqdiff []     ys     = cprod ys ys
sqdiff xs     []     = cprod xs xs
sqdiff (x:xs) (y:ys) = sqdiff xs ys
```
For each function, table 1 gives the hypotheses generated for each iteration of the algorithm until the correct type has been found. As can be seen, in practice the number of tests is much lower than the theoretical maximum.
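For instance, for cprod at degree \(m = 2\) with \(n = 2\) input sizes and \(k = 2\) output polynomials, the theoretical maximum is \((1 + km)^n = (1 + 2 \cdot 2)^2 = 25\), whereas the diagonal search needed only 14 tests.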
5 Discussion and Future Work
The algorithm currently has three apparent limitations. First, the algorithm has two possible sources of non-termination. Second, it only works for exact sizes and not for upper bounds. Third, it is developed for a first-order functional language with lists as the only supported data structures. Here, these issues are discussed and improvements are suggested.
5.1 Sources of Nontermination
Because the algorithm uses run-time tests, it does not terminate when one of these tests does not terminate. In practice, however, this is not an important problem, because the analysis will typically be run on a stable product where non-termination should be rare. As a safeguard, a termination analysis can be done first, or the algorithm may be adapted to look for replacement tests if evaluation of a test takes too long and non-termination is suspected. In general, this problem is closely related to test-case construction, which is an active field of research.
The second source of nontermination is the iteration over increasing degrees of polynomials. If none of the generated types is accepted by the checker, either because the function definition is not shapely or due to incompleteness, the algorithm in principle does not stop. In practice, often an upper bound can be put on the degree because only size expressions of low degree are desired.
5.2 Shapely programs
The current hypothesis generation algorithm relies on the limitation to shapely programs; output sizes need to be exactly polynomial in the input size. In practice many programs are not shapely, but still have a polynomial upper bound. For instance, inserting an element in a set only increases the set by one if the element was not in it yet. Its upper bound would be:
\[ \text{insert} : [\alpha]^s \rightarrow \alpha \rightarrow [\alpha]^{s+1} \]
To extend our approach to such upper bounds, we have begun studying program transformations that transform an unshapely function into a shapely function with the strict size dependency corresponding to an upper bound of the size dependency of the original function. For instance, the \text{insert} function would be transformed into a shapely function that always inserts the element. We believe that in many practical cases the testing approach combined with program transformations will succeed in providing good upper bounds.
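A Haskell sketch of the idea (ours, with hypothetical definitions): the unshapely `insert` is replaced by a shapely over-approximation that always inserts, and whose exact output size \(s + 1\) is therefore an upper bound for the original:

```haskell
-- Unshapely: the output has size s or s+1, depending on the values.
insert :: Eq a => [a] -> a -> [a]
insert ys x = if x `elem` ys then ys else x : ys

-- Shapely over-approximation: always size s+1, so the exact sized type
-- [a]^s -> a -> [a]^(s+1) inferred for it is an upper bound for insert.
insertAlways :: [a] -> a -> [a]
insertAlways ys x = x : ys
```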
5.3 Wider applicability
In this paper, the work has been presented for a simple functional language over lists. We plan to extend and implement the algorithm for an existing language with more general data structures. Good candidates are XML transformation languages [Wad00,Fri06] because such transformations are very likely to
be shapely. For these applications, the general type inference algorithm will stay the same. The only requirement is that a type checker exists, or is developed, that supports the language.
6 Conclusion
We have developed an algorithm that infers static non-monotonically sized types through interpolating data from run-time tests. Because the dynamically generated types are only accepted after checking them by a formal type checking algorithm, the types are static: the size expressions hold for every possible future run of the program.
The key idea in this approach is the use of a dynamic testing procedure to generate hypotheses for the sized types. This replaces a formal type inference procedure that would otherwise be infeasible to define, and essentially reduces type inference to type checking. As a consequence, type inference is complete with respect to type checking.
6.1 Related work
Some interesting initial work on inferring size relations within the output of XML transformations has been done by Su and Wassermann [SW04]. Although this work does not yield output-on-input dependencies, it is able to infer size relations within the output type, for instance if two branches have the same number of elements.
Herrmann and Lengauer have presented a size analysis for functional programs over nested lists [HL01]. However, they do not solve recurrence equations in their size expressions, as this is not important for their goal of program parallelization.
Other work on size analysis has been restricted to monotonic dependencies. Research by Pareto has yielded an algorithm to automatically check linear sized types where size expressions are upper bounds [Par98]. Construction of non-linear upper bounds using a traditional type system approach has been presented by Hammond and Vasconcellos [VK04], but this work leaves recurrence equations unsolved and is limited to monotonic dependencies. The work on quasi-interpretations by Amadio [Ama03] also requires monotonic dependencies.
References
Theory Interpretations in PVS
Sam Owre and Natarajan Shankar
SRI International, Menlo Park, California
July 2001
Abstract
This is the final report for SRI Project 6464, Task 16, NASA Langley contract NAS1-20334. The purpose of this task is to provide a mechanism for theory interpretations in PVS so that it is possible to demonstrate the consistency of a theory by exhibiting an interpretation that validates the axioms. The mechanization makes it possible to show that one collection of theories is correctly interpreted by another collection of theories under a user-specified interpretation for the uninterpreted types and constants. A theory instance is generated and imported, while the axiom instances are generated as proof obligations to ensure that the interpretation is valid. Interpretations can be used to show that an implementation is a correct refinement of a specification, that an axiomatically defined specification is consistent, or that an axiomatically defined specification captures its intended models.
In addition, the theory parameter mechanism has been extended with a notion of theory as parameter so that a theory instance can be given as an actual parameter to an imported theory. Theory interpretations can thus be used to refine an abstract specification or to demonstrate the consistency of an axiomatic theory. In this report we describe the mechanism in detail. This extension is a part of PVS version 3.0, which will be publicly released in mid-2001.
Contents

1 Introduction
2 Mappings
3 Theory Declarations
4 Prettyprinting Theory Instances
5 Comparison with Other Systems
6 Future Work
7 Conclusion
Bibliography
Chapter 1
Introduction
Theory interpretations have a long history in first-order logic [Sho67, End72, Mon76]. They are used to show that the language of a given source theory $S$ can be interpreted within a target theory $T$ such that the corresponding interpretation of axioms of $S$ become theorems of $T$. This demonstrates the consistency of $S$ relative to $T$, and also the decidability of $S$ modulo the decidability of $T$. Theories and theory interpretations have also become important in higher-order logic and type theory with languages such as EHDM [EHD93], IMPS [Far92], HOL [Win92], Maude [CDE+99], Extended ML [ST97], and SPECWARE [SJ95]. In these languages, theories are used as structuring mechanisms for large specifications so that abstract theories can be refined into more concrete ones through interpretation. In this report, we describe a theory interpretation mechanism for the PVS specification language.
Specification languages and programming languages usually have some mechanism for packaging groups of definitions into modules. Lisp and Ada have packages. Standard ML has a module system consisting of signatures, structures corresponding to a signature, and functors that map between structures. Packages can be made generic by allowing certain declarations to serve as parameters that can be instantiated when the package is imported. Ada has generic packages that allow parameters. SML functors can be used to construct parametric modules. C++ allows templates.
In specification languages, a theory groups together related declarations of constants, types, axioms, definitions, and theorems. One way of demonstrating the consistency of such a theory is by providing an interpretation for the uninterpreted types and constants under which the axioms are valid. The definitions and theorems corresponding to a valid interpretation can then be taken as valid without further proof as long as they have been verified in the source theory. The technique of interpreting one axiomatic theory in another has many uses. It can be used to demonstrate the consistency or decidability of the former theory with respect to the latter theory. It can also be used to refine an abstract theory down to an executable implementation.
Interpretations are also useful in showing that the axioms capture the intended models. For example, a clock synchronization algorithm was developed in EHDM and was later shown to be consistent using the mappings, but it turned out that in one place \(<\) was used instead of \(\leq\), and because of this a set of perfectly synchronized clocks was actually disallowed by the model. Using interpretations in this way is similar to testing in allowing for the exploration of the space of models for the theory.
Parametric theories in PVS share some of the features of theory interpretations. Such theories can be defined with formal parameters ranging over types and individuals, for example,\(^1\)
```
group[G: TYPE, +: [G, G -> G], 0: G, -: [G -> G]]: THEORY
BEGIN
  ...
END group
```
An instance of the theory group can be imported by supplying actual parameters, the type \(\text{int}\) of integers, integer addition \(+\), zero \(0\), and integer negation \(-\), corresponding to the formal parameters, as in group[int, +, 0, -]. A theory can include assumptions about the parameters that have to be discharged when the actual parameters are supplied. For example, the group axioms can be given as assumptions in the group theory above. However, there are some crucial differences between parametric theories and theory interpretations. In particular, if axioms are always specified as assumptions, then the theory can be imported only by discharging these assumptions. It is necessary to have separate mechanisms for importing a theory with the axioms, and for interpreting a theory by supplying a valid interpretation, that is, one that satisfies its axioms.
The PVS theory interpretation mechanism is quite similar to that for theory parameterization. The axiomatic specification of groups could alternately be given in a theory
```
group: THEORY
BEGIN
  G: TYPE+
  +: [G, G -> G]
  0: G
  -: [G -> G]
  ...
END group
```
The group axioms are declared in the body of the theory. Such a theory can be interpreted by writing group{{G := int, + := +, 0 := 0, - := -}}. Here the left-hand sides refer to the uninterpreted types and constants of theory group, and the right-hand sides are the interpretations. This notation resembles that of theory parameterization and is used in contexts where a theory is imported. The corresponding instances of the group axioms are generated as proof obligations at the point where the theory is imported. The result is a theory that consists of the corresponding mapping of the remaining declarations in the theory group. This allows the theory group to be used in other theories, such as rings and fields, and also allows the theory group to be suitably instantiated by group structures.

\(^1\)This exploits a new feature of PVS version 3.0, in which numbers may be overloaded as names.
Theory interpretations largely subsume parametric theories in the sense that the theory parameters and the corresponding assumings can instead be presented as uninterpreted types, constants, and axioms, so that the actual parameters are given by means of an interpretation. However, a parametric theory with both assumings and axioms involving the parameters is not equivalent to any interpreted theory, as the parameters may be instantiated without the need to prove the axioms. It is also useful to have parametric theories as a convenient way of grouping together all the parameters that must be provided whenever the theory is used. For example, typical theory parameters such as the size of an array, or the element type of an aggregate datatype such as an array, list, or tree, are required as inputs whenever the corresponding theories are used. While this kind of parameterization can be captured by theory interpretations, it would not capture the intent that these parameters are required inputs wherever the theory is used. Furthermore, when an operation from a parametric theory is used, PVS attempts to figure out the actual parameters based on the context of its use. It can do this because the formal parameters are precisely delimited. The corresponding inference is harder for theory interpretations since there might be many possible interpretations that are compatible with the context of the operation's use.
In addition to the uninterpreted types and constants in a source theory $S$, the PVS theory
interpretation mechanism can also be used to interpret any theories that are imported into
$S$ by means of the THEORY declaration. The interpretation of a theory declaration for $S'$
imported within $S$ must itself be a theory interpretation of $S'$. Two distinct importations
of a theory $S'$ within $S$ can be given distinct interpretations. A typical situation is when
two theories $R_1$ and $R_2$ both import a theory $S$ as $S_1$ and $S_2$, respectively. A theory $T$
importing both $R_1$ and $R_2$ might wish to identify $S_1$ and $S_2$ since, otherwise, these would
be regarded as distinct within $T$. This can be done by importing an instance $S'$ of $S$ into
$T$ and importing $R_1$ with $S_1$ interpreted by $S'$ and $R_2$ with $S_2$ interpreted as $S'$. With
theory interpretations, we have also extended parametric theories in PVS to take theories
as parameters. For example, we might have a theory group_homomorphism of group
homomorphisms that takes two groups $G_1$ and $G_2$ as parameters as in the declaration
```
group_homomorphism[G1, G2: THEORY group]: THEORY ...
```
The actual parameters for these theory formals must be interpretations $G_1'$ and $G_2'$ of the
theory group.
Another typical requirement in a theory interpretation mechanism is the ability to map a
source type to some quotient with respect to an equivalence relation over a target type. For
example, rational numbers can be interpreted by means of a pair of integers corresponding to the numerator and denominator, but the same rational number can have multiple such representations. We show how it is possible to define quotient types in PVS and use these types to capture interpretations where the equality over a source type is mapped to an equivalence relation over a target type.
The implementation of theory interpretation in PVS is described in the following chapters. This report assumes the reader is already familiar with the PVS language; for details see the PVS Language Manual [OSRSC99]. Chapter 2 deals with mappings, explaining the basic concepts and introduces the grammar. Chapter 3 introduces theory declarations and theories as parameters which allow any valid interpretation of the formal parameter theory as an actual parameter. Chapter 4 describes a new command for viewing theory instances. Chapter 5 compares PVS interpretations with other systems, Chapter 6 describes future work, and we conclude with Chapter 7.
Chapter 2
Mappings
Theory interpretations in PVS provide mappings for uninterpreted types and constants of the source theory into the current (interpreting) theory. Applying a mapping to a source theory yields an interpretation (or target) theory. A mapping is specified by means of the mapping construct, which associates uninterpreted entities of the source theory with expressions of the target theory. The mapping construct is an extension to the PVS notion of “name”. The changes to the existing grammar are given in Figure 2.1.
The mapping construct defines the basic translation, but to be a theory interpretation the mapping must be consistent: if type T is mapped to the type expression E, then a constant t of type T must be mapped to an expression e of type E. In addition, all axioms and theorems of the source theory must be shown to hold in the target theory under the mapping. Since the theorems are provable from the axioms, it is enough to show that the translation of the axioms hold. Axioms whose translations do not involve any uninterpreted types or constants of the source theory are converted to proof obligations. Otherwise they remain axioms.
Theory interpretation may be viewed as an extension of theory parameterization. Given a theory named T, the instance T[a_1, ..., a_n]{{c_1 := e_1, ..., c_m := e_m}} is the same as the original theory, with the actuals a_i substituted for the corresponding formal
```
TheoryName  ::= [Id @] Id [Actuals] [Mappings]
Name        ::= [Id @] IdOp [Actuals] [Mappings] [. IdOp]
Mappings    ::= {{ Mapping++"," }}
Mapping     ::= MappingLhs MappingRhs
MappingLhs  ::= IdOp Bindings* [: {TYPE | TypeExpr}]
MappingRhs  ::= := {Expr | TypeExpr | TheoryName}
```
Figure 2.1: Grammar for Names with Mappings
parameters, and e_i substituted for c_i, which must be an uninterpreted type or constant declaration. Declarations that appear in the target of a substitution in the mapping are not visible in the importing theory. Some axioms are translated to proof obligations. The substituted forms of any remaining axioms, definitions, and lemmas are available for use, and are considered proved if they are proved in the uninterpreted theory.
The following simple example illustrates the basic concepts.
```
th1[T: TYPE, e: T]: THEORY
BEGIN
  t: TYPE+
  c: t
  f: [t -> T]
  ax: AXIOM EXISTS (x, y: t): f(x) /= f(y)
  lem1: LEMMA EXISTS (x: T): x /= e
END th1
```
```
th2: THEORY
BEGIN
  IMPORTING th1[int, 0]
    {{
      t := bool,
      c := true,
      f(x: bool) := IF x THEN 1 ELSE 0 ENDIF
    }}
  lem2: LEMMA EXISTS (x: int): x /= 0
END th2
```
Here theory th1 has both actual parameters and uninterpreted types and constants, as well as an axiom and a lemma. Theory th2 imports th1, making the following substitutions:
\[
\begin{align*}
T &\leftarrow \text{int} \\
e &\leftarrow 0 \\
t &\leftarrow \text{bool} \\
c &\leftarrow \text{true} \\
f &\leftarrow \lambda (x: \text{bool}): \text{IF } x \text{ THEN } 1 \text{ ELSE } 0 \text{ ENDIF}
\end{align*}
\]
Note that the mapping for \(f\) uses an abbreviated form of substitution. Typechecking this leads to the following proof obligation.
```
IMP_th1_TCC1: OBLIGATION
  EXISTS (x, y: bool):
    IF x THEN 1 ELSE 0 ENDIF /= IF y THEN 1 ELSE 0 ENDIF;
```
This is simply the interpretation of the ax axiom and is easily proved. The lemma lem1 can be proved from the axiom, and may be used directly in proving lem2 using the proof command (LEMMA "lem1").
Note that once the TCC has been proved, we know that \texttt{th1} is consistent. If we had left out the mapping for \texttt{f}, then the TCC would not be generated, and the translation of theory \texttt{th1} would still contain an axiom and not necessarily be consistent.
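For example, the following import (a sketch) leaves f uninterpreted:

```
th3: THEORY
BEGIN
  IMPORTING th1[int, 0]{{t := bool, c := true}}
  % No TCC is generated for ax: its translation still mentions the
  % uninterpreted f, so it remains an axiom in the resulting theory.
END th3
```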
One advantage to using mappings instead of parameters is that not all uninterpreted entities need be mapped, whereas for parameters either all or none must be given. For example, consider the following theory.
\begin{verbatim}
example1[T: TYPE, c: T]: THEORY
BEGIN
f(x: T): int = IF x = c THEN 0 ELSE 1 ENDIF
END example1
\end{verbatim}
It may be desirable to import this where \texttt{T} is always \texttt{real}, and \texttt{c} is left as a parameter, but there is currently no mechanism for this. One could envision partial importings such as \texttt{IMPORTING example1[real, _]}, but it is not clear that this is actually practical—in particular, the syntax for providing the missing parameters is not obvious. With mappings, on the other hand, we can define \texttt{example1} as follows.
\begin{verbatim}
example1: THEORY
BEGIN
T: TYPE
c: T
f(x: T): int = IF x = c THEN 0 ELSE 1 ENDIF
END example1
\end{verbatim}
Then we can refer to this theory from another theory as in the following.
\begin{verbatim}
example2: THEORY
BEGIN
th: THEORY = example1{{T := real}}
frm: FORMULA f{{c := 3}} = f
END example2
\end{verbatim}
The \texttt{th} theory declaration just instantiates \texttt{T}, leaving \texttt{c} uninterpreted. The first reference to \texttt{f} maps \texttt{c} to 3, whereas the second reference leaves it uninterpreted though it is still a \texttt{real}. Note that formula \texttt{frm} is unprovable, since the uninterpreted \texttt{c} from the second reference may or may not be equal to 3.
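By contrast, a formula whose reference to f carries the full interpretation is provable simply by expanding the definition of f; for example (illustrative):

```
frm2: FORMULA f{{c := 3}}(3) = 0
```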
As described in the introduction, an important aspect of mappings is the support for quotient types. In \texttt{EHDM} this was done by interpreting equality, but in \texttt{PVS} we instead define a theory of equivalence classes, and allow the user to map constants to equivalence classes under congruences. For example, the \texttt{stacks} datatype might be implemented using an array as follows.
The `equivalence_class` theory defines the quotient type of `cstack` with respect to the equivalence relation `ce`. It is defined as follows.
```plaintext
equivalence_class[T:TYPE, ==: (equivalence?[T])] : THEORY
BEGIN
x, y: VAR T
equiv_class(x): setof[T] = {y | x == y}
E: TYPE = {A: setof[T] | EXISTS x: A = equiv_class(x)}
rep(A: E): (A) = epsilon(A)
CONVERSION equiv_class, rep
equiv_class_covers: LEMMA FORALL x: EXISTS (A: E): member(x, A)
equiv_class_separates: LEMMA
NOT (x == y)
IMPLIES disjoint?(equiv_class(x), equiv_class(y))
END equivalence_class
```
Note that it introduces `equiv_class` and `rep` as conversions. The type of the `==` parameter ensures that only equivalence relations are used in generating equivalence classes. The type `E` is the type of equivalence classes.
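As an illustration, the following sketch builds the integers modulo 3 as equivalence classes, reusing the divides relation that also appears in Chapter 6 (assumed to be available from a suitable library):

```
mod3: THEORY
BEGIN
  ==(x, y: int): bool = divides(3, x - y)
  % Supplying == for the (equivalence?[int]) parameter generates a TCC
  % requiring == to be an equivalence relation.
  IMPORTING equivalence_class[int, ==]
  Z3: TYPE = E
END mod3
```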
The `lifteq` and `lifteqs` theories allow functions on concrete stacks to be lifted to functions on equivalence classes, so long as they are congruences, that is, they satisfy the `preserves` relation.
```plaintext
lifteq[D, R: TYPE, deq: (equivalence?[D])]: THEORY
BEGIN
IMPORTING equivalence_class
lift(f: (preserves[D, R] (deq, =[R]))) (A:E[D,deq]) : R
= f(rep(A))
CONVERSION lift
END lifteq
lifteqs[D, R: TYPE, deq: (equivalence?[D]), req: (equivalence?[R])]: THEORY
BEGIN
IMPORTING equivalence_class
lift(f: (preserves[D, R] (deq, req))) (A:E[D,deq]) : E[R,req]
= equiv_class[R,req](f(rep(A)))
CONVERSION lift
END lifteqs
```
For `lifteqs`, \( f \) satisfies the `preserves` relation if the following holds
```plaintext
FORALL (x1, x2: D): deq(x1,x2) IMPLIES req(f(x1),f(x2))
```
The reader might notice that the `lifteq` theory is not really necessary, as `lifteq[D, R, deq]` is semantically equivalent to `lifteqs[D, R, deq, =[R]]`. However, in practice the `lift` conversion of `lifteqs` is not applied without explicitly importing the correct instances. In addition, terms such as `rep[int,=[int]](equiv_class[int,=[int]](13))` end up being constructed, and it takes some work to reduce this to 13.
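As a sketch of how lifting is used in practice (the theory and constant names are ours):

```
lift_example: THEORY
BEGIN
  ==(x, y: int): bool = divides(3, x - y)
  IMPORTING lifteqs[int, int, ==, ==]
  double(x: int): int = x + x
  % lift(double) typechecks only if double satisfies preserves(==, ==);
  % the corresponding proof obligation is generated automatically.
  dbl(A: E[int, ==]): E[int, ==] = lift(double)(A)
END lift_example
```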
With these theories imported, we can finish the specification of `cstack` as follows.
```plaintext
...
estack: TYPE = E
IMPORTING stack[t]{{
  stack := estack,
  empty? := cempty?,
  nonempty? := cnonempty?,
  empty := cempty,
  push(x: t, s: estack) := cpush(x)(s),
  top := ctop,
  pop := cpop }}
END cstack
```
Here the source type stack is mapped to the equivalence class E defined by the concrete equality ce, by means of the equiv_class conversion. The constant empty is then mapped to its equivalence class. The mapping for push is more involved; cpush must first be lifted in order to apply it to the abstract stack s. This is applied automatically by the conversion mechanism of PVS. The application of lift generates the proof obligation that cpush preserves the equivalences, that is, it is a congruence. This mapping generates a large number of proof obligations, because the stack datatype generates a stacks.adt theory with a large number of axioms, for example, extensionality, well-foundedness, and induction.
The PVS interpretations mechanism is much simpler to implement than the one in EHDM—equality is not a special case, but simply an aspect of mapping a type to an equivalence class. The technique of mapping types to equivalence classes is quite useful, and captures the notion of behavioral equivalence outlined in [ST97]. In fact it is more general, in that it works for any equivalence relation, not just those based on observable sorts.
Chapter 3
Theory Declarations
With the mapping mechanism, it is easy to specify a general theory and have it stand for any number of instances. For example, groups, rings, and fields are all structures that can be given axiomatically in terms of uninterpreted types and constants. This works well when considering one such structure at a time, but it is difficult to specify theories that involve more than one structure, for example, group homomorphisms. Importing the original theory twice is the same as importing it once, and an attempted definition of a homomorphism would turn into an automorphism. In this case what is needed is a way to specify multiple different “copies” of the original theory. This is accomplished with theory declarations, which may appear in either the theory parameters or the body of a theory. A theory declaration in the formal parameters is referred to as a theory as parameter.\(^1\) Theory declarations allow theories to be encapsulated, and instantiated copies of the implicitly imported theory are generated.
For example, an (additive) group is normally thought of as a 4-tuple consisting of a set \(G\), a binary operator \(+\), an identity element \(0\), and an inverse operator \(-\) that satisfies the usual group axioms. Using theory interpretations, we simply define this as follows:
\(^1\)The term theory parameter refers to a parameter of a theory, so we use the term theory as parameter instead.
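```
group: THEORY
BEGIN
  G: TYPE+
  +: [G, G -> G]
  0: G
  -: [G -> G]
  x, y, z: VAR G
  associative_ax: AXIOM FORALL x, y, z: x + (y + z) = (x + y) + z
  identity_ax: AXIOM FORALL x: x + 0 = x
  inverse_ax: AXIOM FORALL x: x + -x = 0 AND -x + x = 0
  idempotent_is_identity: LEMMA x + x = x => x = 0
END group
```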
As described in Chapter 2, we can use mappings to create specific instances of groups. For example,
group{{G := int, + := +, 0 := 0, - := -}}
is the additive group of integers, whereas
group{{G := nzreal, + := *, 0 := 1, - := LAMBDA (r: nzreal): 1/r}}
is the multiplicative group of nonzero reals.
This works nicely, until we try to define the notion of a group homomorphism. At this point we need two groups, both individually instantiable. We could simply duplicate the group specification, but this is obviously inelegant and error prone. Using theories as parameters, we may define group homomorphisms as follows.
group_homomorphism[G1, G2: THEORY group]: THEORY
BEGIN
x, y: VAR G1.G
homomorphism?(f): bool = FORALL x, y: f(x + y) = f(x) + f(y)
hom_exists: LEMMA EXISTS f: homomorphism?(f)
END group_homomorphism
Here G1 and G2 are theories as parameters to a generic homomorphism theory that may be instantiated with two different groups. Hence we may import group_homomorphism, for example, as
IMPORTING group_homomorphism[{{G := int, + := +, 0 := 0, - := -}},
                             {{G := nzreal, + := *, 0 := 1, - := LAMBDA (x: nzreal): 1/x}}]
There is a subtlety here that needs emphasizing; G1 and G2 are two distinct versions of theory group. For example, consider the addition of the following lemma to group_homomorphism.
oops: LEMMA G1.0 = G2.0
If G1 and G2 are treated as the same group theory, this is a provable lemma. But then after the importing given above we would be able to show that 0 = 1. Even worse, the two different instances of groups may not even be type compatible, so the oops lemma should not even typecheck.
We have solved this in PVS by making new theories G1 and G2 that are copies of the original group theory. Declarations within these copies are distinct from each other and from the original. Thus the oops lemma generates a type error, as G1.G and G2.G are incompatible types.
This introduces new possibilities. When creating copies of a theory the mappings are substituted and the original declarations disappear. However, it may be preferable to create definitions rather than substitutions. In addition, it is sometimes useful to simply rename the types or constants of a theory. For example, consider the following group instance
G1: THEORY = group{{G := int, + := +, 0 := 0, - := -}}
which generates the following theory.
G1: THEORY
BEGIN
x, y, z: VAR int
idempotent_is_identity: LEMMA x + x = x => x = 0
END G1
To create definitions, use = instead of :=, as in the following.
G2: THEORY = group{{G = int, + = +, 0 = 0, - = -}}
Now we get the following theory.
G2: THEORY
BEGIN
  G: TYPE+ = int
  +: [G, G -> G] = +
  0: G = 0
  -: [G -> G] = -
  x, y, z: VAR G
  idempotent_is_identity: LEMMA x + x = x => x = 0
END G2
Finally, to simply rename the uninterpreted types and constants, use ::= as in the following.
G3: THEORY = group{{G ::= MG, + ::= *, 0 ::= 1, - ::= inv}}
The generated theory instance specifies multiplicative groups as follows.
G3: THEORY
BEGIN
  MG: TYPE+
  *: [MG, MG -> MG]
  1: MG
  inv: [MG -> MG]
  x, y, z: VAR MG
  associative_ax: AXIOM FORALL x, y, z: x * (y * z) = (x * y) * z
  identity_ax: AXIOM FORALL x: x * 1 = x
  inverse_ax: AXIOM FORALL x: x * inv(x) = 1 AND inv(x) * x = 1
  idempotent_is_identity: LEMMA x * x = x => x = 1
END G3
The right-hand side of a renaming mapping must be an identifier, operator, or number, and must not create ambiguities within the generated theory. Note that renamed declarations are still uninterpreted, and may themselves be given interpretations, as in
G3i: THEORY = G3{{MG := nzreal, * := *, 1 := one}}
Finally, we can mix the different forms of mapping, to give a partial mapping.
G4: THEORY = group{{G = nzreal, + ::= *, 0 ::= one}}
This generates the following theory instance.
```
G4: THEORY
BEGIN
G: TYPE+ = nzreal;
one: nzreal;
-: [nzreal -> nzreal]
x, y, z: VAR nzreal
identity_ax: AXIOM FORALL (x: nzreal): x * one = x
inverse_ax: AXIOM FORALL (x: nzreal):
x * -x = one AND -x * x = one
idempotent_is_identity: LEMMA x * x = x => x = one
END G4
```
Note that associative_ax has disappeared—it has become a TCC of the importing theory—whereas the other axioms are not so transformed because they still reference uninterpreted types or constants.
With theories as parameters we have another situation in which mappings are more convenient than theory parameters. Many times the same set of parameters is passed through an entire theory hierarchy. If there are assumings, then these must be copied. For example, consider the following theory.
```
th[T: TYPE, a, b: T]: THEORY
BEGIN
ASSUMING
A: ASSUMPTION a /= b
ENDASSUMING
...
END th
```
To import this theory, you simply provide a type and two different elements of that type. But suppose you wish to import this theory from a theory that has the same parameters. In this case the assumption must also be copied, as there is otherwise no way to prove the resulting obligation. This can (and frequently does) lead to a tower of theories, all with the same parameters and copies of the same assumptions, as well as proofs of the same obligations.
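For example, a theory with the same parameters that imports th must replicate the assumption in order to discharge the resulting obligation (a sketch; the theory name is illustrative):

```
th_user[T: TYPE, a, b: T]: THEORY
BEGIN
  ASSUMING
    A: ASSUMPTION a /= b  % copied verbatim from th
  ENDASSUMING
  IMPORTING th[T, a, b]   % the obligation a /= b is discharged using A
END th_user
```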
There are ways around this, of course. Most assumptions may be turned into type constraints, as in the following.
```
th[T: TYPE, a: T, b: [x: T | a /= x]]: THEORY
```
But this introduces an asymmetry in that a and b now belong to different types, and the type predicate still must be provided up the entire hierarchy.
Using a theory as a parameter, we may instead define `th` as follows.
th: THEORY
BEGIN
T: TYPE
a, b: T
A: AXIOM a /= b
...
END th
We then parameterize using this theory (which is implicitly imported):
th_1[t: THEORY th]: THEORY ...
We have encapsulated the uninterpreted types and constants into a theory, and this is now represented as a single parameter. Axiom A is visible within theory th_1, and no proof obligations are generated since no mapping was given for th. Now we can continue defining new theories as follows.
th_2[t: THEORY th]: THEORY IMPORTING th_1[t] ...
th_3[t: THEORY th]: THEORY IMPORTING th_2[t] ...
None of these generate proof obligations, as no mappings are provided.
We may now instantiate th_n, for example, with the following.
IMPORTING th_n[th{{T := int, a := 0, b := 1}}]
Now the substituted form of the axiom becomes a proof obligation which, when proved, provides evidence that the theory th is consistent.
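The generated obligation then looks roughly like this (the TCC name is notional):

```
IMP_th_TCC1: OBLIGATION 0 /= 1
```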
With the introduction of theories as parameters, it is natural to allow theory declarations that may be mapped, in the same way that instances may be provided for theories as parameters. Thus the group_homomorphism may be rewritten as follows:
group_homomorphism: THEORY
BEGIN
G1, G2: THEORY group
x, y: VAR G1.G
homomorphism?(f): bool = FORALL x, y: f(x + y) = f(x) + f(y)
hom_exists: LEMMA EXISTS f: homomorphism?(f)
END group_homomorphism
Again, the choice between using theories as parameters or theory declarations is really a question of taste, as they are largely interchangeable.
As with theories as parameters, copies must be made for G1 and G2. Note that this means that there is a difference between theory abbreviations and theory declarations, as
the former do not involve any copying. We decided to use the old form of theory abbreviation to define theory declarations, and to extend the IMPORTING expressions to allow abbreviations, as shown in Figure 3.2. Thus instead of
```plaintext
funset: THEORY = sets[[int -> int]]
```
which creates a copy of sets, use
```plaintext
IMPORTING sets[[int -> int]] AS funset
```
which imports `sets[[int -> int]]` and abbreviates it as `funset`.
```
Importing     ::= IMPORTING ImportingItem++","
ImportingItem ::= TheoryName [AS Id]
```
Figure 3.2: Grammar for Importings
Chapter 4
Prettyprinting Theory Instances
Mappings can get fairly complex, especially if actual parameters are involved, and it may be desirable to see the specified theory instance displayed with all the substitutions performed. To support this, we have provided a new PVS command: prettyprint-theory-instance (M-x ppti). This takes two arguments: a theory instance, which in general is a theory name with actual parameters and/or mappings, and a context theory, in which the theory instance may be typechecked. The simplest way to use this command is to put the cursor on the theory name as it appears in a theory as parameter, theory declaration, or importing—when the command is issued it then defaults to the theory instance under the cursor and the current theory is the default context theory. For example, putting the cursor on group_homomorphism in the following and typing M-x ppti followed by two carriage returns\textsuperscript{1} generates a buffer named group_homomorphism.ppi. All instances of a given theory generate the same buffer name.
IMPORTING group_homomorphism[
    {{G := int, + := +, 0 := 0, - := -}},
    {{G := nzreal, + := *, 0 := 1, - := LAMBDA (x: nzreal): 1/x}}]
This buffer has the following contents.
\textsuperscript{1}The first uses the theory name instance at the cursor, and the second uses the current theory as the context.
% Theory instance for
%   group_homomorphism[group{{G := int, + := +, - := -, 0 := 0}},
%                      group{{G := nzreal, + := *,
%                             - := (LAMBDA (x: nzreal): 1 / x), 0 := 1}}]
group_homomorphism_instance: THEORY
BEGIN
  IMPORTING group{{G := int, + := +, - := -, 0 := 0}}
  IMPORTING group{{G := nzreal, + := *,
                   - := (LAMBDA (x: nzreal): 1 / x), 0 := 1}}
  x, y: VAR int
  f: VAR [int -> nzreal]
  homomorphism?(f): bool =
    FORALL (x: int), (y: int): f(x + y) = f(x) * f(y)
  hom_exists: LEMMA EXISTS (f: [int -> nzreal]): homomorphism?(f)
END group_homomorphism_instance
The group instances shown on pages 13–15 provide more examples of the output produced by prettyprint-theory-instance.
Chapter 5
Comparison with Other Systems
In this chapter we compare PVS theory interpretations to existing programming and specification mechanisms of other systems. The EHDM system [EHD90] has a notion of a mapping module that maps a source module to a target module. When a mapping module is typechecked, a new module is automatically created that represents the substitution of the interpretations for the body of the source theory. Equality is allowed to be mapped in EHDM, in which case it must be mapped to an equivalence relation. In PVS, mappings are provided as a syntactic component of names, and are essentially an extension of theory parameters. Equality is not treated specially, but is handled by mapping a given type to a quotient type.
IMPS [FGT90,Far94] also supports theory interpretations. It is similar to EHDM in that it has a special def-translation form that takes a source theory, target theory, sort association list, and constant association list, and generates a theory translation. Obligations may be generated that ensure that every axiom of the source theory is a theorem of the target theory. If these are proved the translation is treated as an interpretation. There is no mechanism for mapping equality. As with both PVS and EHDM, defined sorts and constants of the source theory are automatically translated. A more detailed comparison between IMPS and an earlier version of PVS appears in an unpublished report by Kammüller [Kam96].
In Maude [CDE+99] and its precursor OBJ [GW88] it is possible to define modules that represent transition systems of a rewrite theory whose states are equivalence classes of ground terms and whose transitions are inference rules in rewriting logic. A given module may import another module, either protecting it, which means that the importing module adds no junk or confusion, or including it, which imposes no such restrictions. In addition to modules, Maude has theories, which are used to declare module interfaces. These may appear as module parameters, as in $M[X_1 :: T_1, \ldots, X_n :: T_n]$, where the $X_i$ are labels and the $T_i$ are names of theories. These theory parameters (source theories) may be instantiated by target theories or modules using views, which indicate how each sort,
function, class, and message of the source theory is mapped to the target theory. However, Maude currently does not support the generation of proof obligations from source theory axioms, so views are simply theory translations, not interpretations.
The programming language Standard ML [MTH90] has a module system where modules are given by structures with a given signature, and parametric modules are functors mapping structures of a given signature to structures. The PVS mechanism of using theories as parameters resembles SML functors but for a specification language rather than a programming language. Sannella and Tarlecki [ST97] describe a version of the ML module system in which there are specifications containing sorts, operations, and axioms. For example, the signature of stacks is the following:
```plaintext
STACK = sorts stack
empty : stack
push : int x stack -> stack
pop : stack -> stack
top : stack -> int
is_empty : stack -> bool
axioms is_empty(empty) = true
\forall s : stack. \forall n : int. is_empty(push(n, s)) = false
\forall s : stack. \forall n : int. top(push(n, s)) = n
\forall s : stack. \forall n : int. pop(push(n, s)) = s
```
The following algebra is a realization of the above specification that corresponds to that of cstack on page 8.
```plaintext
structure S2 : STACK =
struct
  type stack = (int -> int) * int
  val empty = ((fn k => 0), 0)
  fun push (n, (f, i))
      = ((fn k => if k = i then n else f k), i+1)
  fun pop (f, i) = if i = 0 then (f, 0) else (f, i-1)
  fun top (f, i) = if i = 0 then 0 else f(i-1)
  fun is_empty (f, i) = (i = 0)
end
```
Note, however, that the stacks `empty` and `pop(push(6,empty))` are not equal. Thus they distinguish the observable sorts, in this case `int` and `bool`, which are the only data directly visible to the user. The above two terms are not observable computations, so it does not matter that they are different. In general, two different algebras are behaviorally equivalent if all observable computations yield the same results. Note that choosing observable values based on sorts is a bit coarse: for example, there may be two `int`-valued variables, one of which is observable and one that represents an internal pointer. Mapping to equivalence classes is more general, as it is easy to capture behavioral equivalence.
The induction theorem prover Nqthm [BM88, BGKM91] has a feature called `FUNCTIONALLY-INSTANTIATE` that can be used to derive an instance of a theorem
by supplying an interpretation for some of the function symbols used in defining the theorem. The corresponding instances of any axioms concerning these function symbols must be discharged. Such axioms can be introduced as conservative extensions as definitions with the `DEFUN` declaration or through witnessed constraints using the `CONSTRAIN` declaration, or they can be introduced nonconservatively through an `ADD-AXIOM` declaration. While the functional instantiation mechanism is similar in flavor to PVS theory interpretations, the underlying logic of Nqthm is a fragment of first-order logic whose expressive power is more limited than the higher-order logic of PVS. In addition, Nqthm lacks types and structuring mechanisms such as parametric theories.
The SPECWARE language [SJ95] employs theory interpretations as a mechanism for the stepwise refinement of specifications into executable code. SPECWARE has constructs for composing specifications while identifying the common components, and for compositionally refining specifications so that the refinement of a specification can be composed from the refinement of its components. Unlike PVS, SPECWARE has the ability to incorporate multiple logics and translate specifications between these logics. A theory is an independent unit of specification in PVS and hence there is no support for composing theories from other theories. However, the operations in SPECWARE can largely be simulated by means of theories and theory interpretations in PVS.
In summary, theory interpretation has been a standard tool in specification languages since the early work on HDM [RLS79] and Clear [BG81]. PVS implements theory interpretations as a simple extension of the mechanism for importing parametric theories. PVS theory interpretations subsume the corresponding capabilities available in other specification frameworks.
Chapter 6
Future Work
A number of interesting extensions may be contemplated for the future.
Mapping of interpreted types and constants— There are two aspects: one is simply a convenience where, for example, we might have a tuple type declaration T: TYPE = [T1, T2, T3] and want to map it to position: TYPE = [real, real, real] by simply giving the map {{T := position}}.
The second aspect is where the mapping is between two different kinds, for example mapping a record type to a function type. This requires determining the corresponding components as well as making explicit the underlying axioms. For example, record types satisfy extensionality, and if they are mapped to a different type the implicit extensionality axiom must be translated to a proof obligation.
Rewriting with congruences— In theory substitution, if a type is mapped to a quotient type then equality over this type is mapped to equality over the quotient type. If $T$ is an uninterpreted type, $\equiv$ an equivalence relation over $T'$, and $T'/\equiv$ the quotient type, then $=[T]$ is mapped to $=[T'/\equiv]$, which is equivalent to $\equiv$. An equational formula thus still has the form of a rewrite. However, to apply such a rewrite one generally needs to do some lifting. The following is a simple example.
th: THEORY
BEGIN
T: TYPE
a, b: T
f, g: [T -> T]
... Some axioms involving f, g, a, and b
lem: LEMMA f(a) = g(b)
END th
th2: THEORY
BEGIN
  ==(x, y: int): bool = divides(3, x - y)
  IMPORTING th{{T := E[int, ==],
                a := ..., b := ...,
                f := ..., g := ...}}
END th2
To rewrite with lem, a must first be lifted to its equivalence class, then the rewrite is applied
and the result is then projected back using rep. To do this requires some modification to
the rewriting mechanism of the prover.
Consistency Analysis— With a single independent theory such as groups, it is easy to
generate a mapping in which all axioms become proof obligations, and see directly that the
theory is consistent. On the other hand, if many theories are involved in which compositions
of mappings are involved, this may become quite difficult. What is needed is a tool that
analyzes a mapped theory to see if it is consistent, and reports on any remaining axioms
and uninterpreted declarations. This is similar in spirit to proof chain analysis, but works at
the theory level rather than for individual formulas.
Semantics of Mappings— The semantics of theory interpretations needs to be formalized and added to the PVS semantics report [OS97].
Chapter 7
Conclusion
Theory interpretations are used to embed an interpretation of an abstract theory in a more concrete one. In this way, they allow an abstract development to be reused at the more concrete level. Theory interpretations can be used to refine a specification down to code. Theory interpretations can also be used to demonstrate the consistency of an axiomatic theory relative to another theory.
Parametric theories in PVS provide some but not all of the functionality of theory interpretations. In particular, they do not allow an abstract theory to be imported with only a partial parameterization. Theory interpretations have been implemented in PVS version 3.0, which will be released in mid-2001. The current implementation allows the interpretation of uninterpreted types and constants in a theory, as well as theory declarations. PVS has also been extended so that a theory may appear as a formal parameter of another theory. This allows related sets of parameters to be packaged as a theory. Quotient types have been defined within PVS and used to admit interpretations of types where the equality on a source type is treated as an equivalence relation on a target type.
Theory interpretations have been implemented in PVS as an extension of the theory parameter mechanism. This way, theory interpretations are an extension of an already familiar concept in PVS and can be used in place of theory parameters where there is a need for greater flexibility in the instantiation. The proof obligations generated by theory interpretations are similar to those for parametric theories with assumptions.
A number of extensions related to theory interpretations remain to be implemented. First, we plan to extend theory interpretations to the case of interpreted types and constants. This poses some challenges since there are implicit operations and axioms associated with certain type constructors. Second, the rewriting mechanisms of the PVS prover need to be extended to rewrite relative to a congruence. This means that if we are only interested in \( f(a) \) up to some equivalence that is preserved by \( f \), then we could rewrite \( a \) up to equivalence rather than equality. Third, the PVS semantics have to be extended to incorporate
theory interpretations. Finally, the PVS ground evaluator has to be extended to handle theory interpretations. Currently, the ground evaluator generates code corresponding to a parametric theory and this code is reused with the actual parameters used as arguments to the operations. Theory interpretations cannot be treated as arguments in this manner since there is no fixed set of parameters; parameters can vary according to the interpretation. Also, non-executable operations can become executable as a result of the interpretation.
In summary, we believe that theory interpretations are a significant extension to the PVS specification language. Our implementation of this in PVS3.0 is simple yet powerful. We expect theory interpretations to be a widely used feature of PVS.
The purpose of this task was to provide a mechanism for theory interpretations in PVS so that it is possible to demonstrate the consistency of a theory by exhibiting an interpretation that validates the axioms. The mechanization makes it possible to show that one collection of theories is correctly interpreted by another collection of theories under a user-specified interpretation for the uninterpreted types and constants. A theory instance is generated and imported, while the axiom instances are generated as proof obligations to ensure that the interpretation is valid. Interpretations can be used to show that an implementation is a correct refinement of a specification, that an axiomatically defined specification is consistent, or that an axiomatically defined specification captures its intended models. In addition, the theory parameter mechanism has been extended with a notion of theory as parameter so that a theory instance can be given as an actual parameter to an imported theory. Theory interpretations can thus be used to refine an abstract specification or to demonstrate the consistency of an axiomatic theory. In this report we describe the mechanism in detail. This extension is a part of PVS version 3.0, which will be publicly released in mid-2001.
SYSTEMATIC SYNTHESIS OF λ-TERMS
PIETER KOOPMAN AND RINUS PLASMEIJER
Institute for Computing and Information Sciences, Radboud University Nijmegen, The Netherlands
e-mail address: pieter@cs.ru.nl
Institute for Computing and Information Sciences, Radboud University Nijmegen, The Netherlands
e-mail address: rinus@cs.ru.nl
ABSTRACT. In this paper we show how to generate terms in the λ-calculus that match a given number of function argument result pairs. It appears that the number of λ-terms is too large to find terms reasonably fast based on the grammar of λ-calculus alone. By adding knowledge such as the desired number of arguments it is possible to synthesize λ-terms effectively for some interesting examples. This yields surprising terms that are unlikely to be found by a human.
An interesting subproblem is the determination of suitability of candidate terms based on equivalence of terms. We used an approximation of equivalence by a finite number of reduction steps. This implies that the test for equivalence can also yield the value undefined. Fortunately the test system used is able to handle undefined test results.
For Henk Barendregt on his sixtieth birthday
1. INTRODUCTION
In computer science one often looks for reducts of λ-expressions (the λ-expression is seen as a functional program representing the desired value), or general properties of λ-calculus (like the famous Church-Rosser property). The construction of λ-terms possessing some desirable property is commonly done manually. In this paper we describe a technique to synthesize such λ-terms automatically. Typical examples are: find a term Y such that ∀f. Y f = f(Y f), or find a term s such that ∀n ≥ 0. s n = \sum_{i=0}^{n} i. This technique can be used to find rather complicated terms, or terms that are not very intuitive, although the (mathematical) examples shown in this paper are merely a proof of concept. There also exist serious applications of this kind of synthesis technique, such as the generalization of behavior in adaptive systems. In this paper we concentrate on untyped λ-terms as described by Barendregt in [1] extended with numbers and some operations on numbers. See [3] for an introduction to typed λ-calculus.
2000 ACM Subject Classification: D.1.2, I.2.2.
Key words and phrases: Program Synthesis, Automatic Programming, λ-Terms, programming by example.
The approach to generate λ-terms obeying some property is inspired by our previous work [9]. In that paper we described how one can synthesize functions matching a property such as $f \ 4 = 5 \land f \ 5 = 8$. In general there are uncountably many functions matching a given set of input-output pairs. In program synthesis it is our goal to find small functions that generalize the given behavior. We prefer a nonrecursive function over a recursive function, and a recursive function is preferable to a sequence of conditionals that exactly coincides with the given argument result pairs (like $f \ x = \text{if} \ (x = 4) \ 5 \ (\text{if} \ (x = 5) \ 8 \ 0)$).
The approach in [9] is to define a data type that represents the grammar of the candidate functions; for our example, primitive recursive functions of type $\text{Int} \rightarrow \text{Int}$ will do. Using the generic generation of instances of this type, abstract syntax trees of candidate functions are generated [8]. Such a syntax tree is turned into the equivalent function. The suitability of these functions is determined by the automatic test system G∀st [7]. The test system is used to find functions $f$ matching the desired property, by stating that such a function does not exist, e.g. $\forall f. \neg(f \ 4 = 5 \land f \ 5 = 8)$. The counterexamples found by G∀st are exactly the functions wanted. For $f \ 4 = 5 \land f \ 5 = 8$ the test system finds the function $f \ x = \text{if} \ (x < 1) \ 1 \ (f(x-2)+f(x-1))$, the well-known Fibonacci function.
In this paper we do not define a specific grammar for candidate functions, since we want to find ordinary $\lambda$-terms matching the given property. For the synthesis of $\lambda$-terms we start with the same approach. First we define a data type representing $\lambda$-terms and synthesize instances of this type from small to large as candidates. Then we check with the test system if these instances of the data type represent a function ($\lambda$-term) obeying the given constraints. However, there are some significant differences compared to the generation of the functions mentioned above. These differences correspond to problems that need to be tackled in order to make the systematic synthesis of $\lambda$-terms work. First, the termination of computations needed to determine the suitability of candidate $\lambda$-terms is an issue. In the generation of ordinary functions, we constructed the functions such that termination of reduction is guaranteed. In our new approach it is not possible to guarantee termination of reduction without overly severe restrictions on the terms considered. Second, in order to obtain interesting $\lambda$-terms (corresponding to recursive functions) it is essential to have higher order functions. The use of higher order and potentially nonterminating expressions makes the equivalence of $\lambda$-terms an issue. Theoretically it is known that the equivalence of $\lambda$-terms is in general undecidable. Third, the $\lambda$-terms corresponding to recursive functions like the Fibonacci function mentioned above are relatively large as well. This is caused by the fine-grained computations in $\lambda$-calculus. Experience shows that the number of candidate terms to be considered becomes impractically large. Hence we need some guidance in the generation of candidate terms. A fourth difference with previous work is that we will also use logical properties like $\forall x, y. k \ x \ y = x$ instead of only input-output pairs like $k \ 1 \ 2 = 1$.
Problems one and two are covered by using normal order (left most) single step reduction in the comparison of equations. If equivalence is not found within a given number of steps, the equivalence is decided to be undefined. The test system G∀st is well equipped to handle these undefined test results. The third problem is handled by a simple yet effective and flexible generation algorithm for candidate terms. Although the data type used allows more terms, the synthesis algorithm used generates only instances of a grammar that allows a smaller number of terms. Since we still handle terms prescribing potentially infinite computations, transforming the terms to ordinary expressions and evaluating these is unsafe. For the reduction algorithm, however, it is crucial that all terms used are described by a small and simple data type.
In the remainder of this paper we first give a quick overview of testing logical properties using the test tool G∀st. In section 3 we introduce the data structure that will be used to represent λ-terms. The equivalence of λ-terms is treated in section 4. Next we use the generic algorithm from G∀st to generate λ-terms in section 5 and look at the effectiveness of this approach with some examples. In section 6 we introduce more effective algorithms to generate λ-terms matching a given condition. Finally, there is a conclusion and a discussion of related work in section 7.
2. Testing logical properties with the test tool G∀st
The test system G∀st checks properties in first order logic by evaluating the property for a (large) number of arguments. G∀st is implemented as a library for the functional programming language Clean [10]. This library provides operators corresponding to the logical operators as well as a class of functions called test to check properties.
The test system treats every ordinary function argument as a universally quantified variable if the function is used as a logical property. Consider the function \( pAbs \ n = \text{abs} \ n \geq 0 \), where \text{abs} is a function from the standard library that computes the absolute value of integer arguments. This function is interpreted by G∀st as the property \( \forall n \in \text{Int}. pAbs \ n \). By unfolding the function definition this is \( \forall n \in \text{Int}. \text{abs} \ n \geq 0 \). This property can be tested by executing \text{Start} = \text{test} pAbs. From the type \text{Int} \to \text{Bool} of the function \text{pAbs} to be tested, the test system determines that the property should be evaluated for a large number of integers. These test values are synthesized systematically (from small to large for recursive types) by the generic function \text{ggen}. If G∀st finds a counterexample within the first \text{maxTest} tests, the test result is \text{CE}. Apart from this test result, the test system also gives information about the counterexample found (like the number of tests done and the values used to find this counterexample). \text{maxTest} is the maximum number of tests, by default 1000. It is easy to change this number in general or in a specific test. Otherwise, the result is \text{OK} if G∀st detects that the number of test values is less than \text{maxTest} and the property holds for all these test values. If no counterexample is found in the first \text{maxTest} tests and there are more than \text{maxTest} values in the list generated by \text{ggen}, the test result is \text{Pass}. The test result \text{Pass} is weaker than \text{OK}: doing additional tests might show a counterexample if the result is \text{Pass}, while \text{OK} indicates proof by exhaustive testing.
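Concretely, such a test is written as follows (a sketch; the module name for the G∀st library is an assumption):

```
module absTest

import StdEnv
import gast  // assumed module name for the G∀st library

pAbs :: Int -> Bool
pAbs n = abs n >= 0

Start = test pAbs  // reports the counterexample n = minint
```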
In the example \( pAbs \ n = \text{abs} \ n \geq 0 \) the test system finds a counterexample corresponding to the minimum integer value in the domain of \text{pAbs}. The instance of the generic test suite generator \text{ggen} for integers places values like 0, 1, -1, maxint, and minint, which are known to often cause issues, near the beginning of each test suite for integers.
With the operator \( \exists \) we can test existentially quantified expressions like \( \exists x. f \ x = \ x \). The operator \( \exists \) takes a function as argument. The type of the argument determines the type of test arguments generated by \text{ggen} in G∀st. In the example \text{pFix} below, we use a nameless function (\( \lambda \)-expression) as argument for the operator \( \exists \). Compared with the ordinary logical notation, we have to write only an additional \( \lambda \) between \( \exists \) and the variable. Of course one can use any function as argument for the operator \( \exists \). Clean’s type inference system detects in this example that \( x \) must be an integer. Hence, G∀st will generate integer values.
The test system G∀st is able to handle undefined values. For any function \( f :: \text{Int} \to \text{Int} \) we can test whether the function has a fixed point (\( \exists x. f \ x = \ x \)) by defining the property:
```
pFix :: (Int -> Int) -> Property
pFix f = Exists \x -> f x == x  // the type of x is determined by the context, here Int
```
We can test if the function \( g(x) = x + 1 \) has a fixed point by executing \( \text{Start} = \text{test} \ (\text{pFix} \ g) \). The test system generates a fixed number of integer values (by default 500) and checks if one of these values makes \( g(x) = x \) true. If such a value does not occur the test system can neither decide that the property holds, nor that there is a counterexample. The test system uses the value \text{Undef} to indicate that a positive test result has not been encountered within the tests to be done, but such a value might exist. Hence, the possible test results are:
```
:: Result = Pass | OK | CE | Undef
```
The difference between this test system and a model checker is that the test system evaluates properties using the ordinary evaluation mechanism. A model checker uses an abstraction of the system (the model) as the basis for reasoning rather than the actual code. A model checker also uses abstract evaluation steps to check the validity of the model (e.g. \( \forall x. \neg \neg x = x \)). This implies that a model checker is able to prove properties that can only be tested partially by a test system. Advantages of a test system are that no separate model is needed and that the actual code is used rather than a model of this code.
3. A data type to represent \( \lambda \)-terms
The first step is to construct a data type to represent \( \lambda \)-terms. Apart from variables, abstraction, and application we introduce numerical constants and constructors \text{Plus} and \text{If} for a primitive addition and conditional in the terms treated. We use the functional programming language \text{Clean} for the algorithms in this paper.
```
:: Expr = Var V | Abs V Expr | Ap Expr Expr | Const C | Plus | If  // λ-expression
:: V    = V Int                                                    // variable
:: C    = C Int                                                    // constant
```
The additional types \( V \) and \( C \) are superfluous for the syntax trees describing \( \lambda \)-terms, but are convenient to control the generation of variables and constants.
Although it is known that the numerical constants and the constants Plus and If are theoretically superfluous [1], it is convenient to introduce them. The use of these constants makes computations much more efficient. Moreover, this representation is much more compact than the representation of constants by Church numbers.
By using tailor-made instances of the generic show function, instances of these types can be printed in the usual λ-calculus notation. The term Abs (V 0) (Ap (Var (V 0)) (Ap (Var (V 1)) (Var (V 1)))) will be printed as λa.a (b b), which is more compact and more readable.
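A printer along these lines could be sketched as follows; this is a simplified stand-in for the paper's actual instance (it renders V 0, V 1, … as a, b, … and does not minimise parentheses):

showExpr :: Expr → String
showExpr (Var (V n)) = toString (toChar (toInt 'a' + n)) // V 0 becomes a, V 1 becomes b, …
showExpr (Abs v e) = "λ" +++ showExpr (Var v) +++ "." +++ showExpr e
showExpr (Ap f x) = "(" +++ showExpr f +++ " " +++ showExpr x +++ ")"
showExpr (Const (C i)) = toString i
showExpr Plus = "Plus"
showExpr If = "If"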
4. Equivalence of \( \lambda \)-terms and reduction
A key step in the search for λ-terms is determining the equivalence of terms. Looking for some term I such that I x = x, the system needs to be able to determine that (λa.a) x = x and x = x are equivalent. If we write N = M we mean equality modulo reduction: N =β M. The terms M and N are β-convertible if they are equal, if one reduces to the other (i.e. M →β N or N →β M), or if there is a common reduct L of M and N (i.e. ∃L ∈ Λ. M →β L ∧ N →β L). In general, checking whether N =β M is undecidable [1].
The undecidability of convertibility does not imply that it is impossible to look for equivalent terms. It just says that there are terms for which the convertibility is unknown. For many terms we can determine whether they are convertible by reducing them a finite number of steps. We will use the normal order (leftmost) reduction strategy for these reductions, since it is known to find a normal form if one exists [1]. If we find a common reduct (modulo α-conversion) within this finite number of reduction steps, the terms are clearly convertible. If we obtain unequal normal forms, the terms are obviously not convertible. If one of the terms shows a cyclic reduction (like ω ω with ω = λx.x x, which has the property that ω ω →β ω ω) and the other is not a redex, the terms are also unconvertible. In all other situations the convertibility is considered to be undefined.
To reduce the space consumption in this paper we will only list the rules for the traditional
\(\lambda\)-calculus here and ignore the constants. Adding constants is straightforward: two
constants are convertible if they are syntactically equal, otherwise they are unconvertible. A
constant is unconvertible to any other term in normal form. The convertibility of a constant
to a redex is undefined. The system will do a finite number of reduction steps to determine
if it is possible to determine convertibility. We use the constructor \(\text{OK}\) from the type \text{Result}
to represent convertibility and \(\text{CE}\) to indicate inconvertibility.
A single reduction step on an expression is done by the function hnf1 :: Expr → (Bool, Expr). The Boolean in the resulting tuple indicates whether a reduction step has been done.
hnf1 :: Expr → (Bool, Expr)
hnf1 (Ap (Abs v e) a) = (True, sub e v a) // β-reduction
hnf1 (Ap f a) // normal order: try to reduce the function part first
  # (reduced, f) = hnf1 f
  = (reduced, Ap f a)
hnf1 e = (False, e)
// the symbol # introduces a let definition in Clean
The notion of substitution, e[v := a], from the λ-calculus is implemented as sub e v a. The function sub :: Expr V Expr → Expr replaces each free occurrence of the second argument in the first argument by the third argument.
sub :: Expr V Expr → Expr
sub m=:(Var v) x n = if (v == x) n m
sub m=:(Abs y e) x n
  | y == x = m // x is bound here: e contains no free occurrence of x
  | isMember y (freeVars n) = Abs z (sub (sub e y (Var z)) x n) // rename y to a fresh z to avoid capture
  | otherwise = Abs y (sub e x n)
  where z = newVar startVal (freeVars n ++ freeVars e)
sub (Ap f a) x n = Ap (sub f x n) (sub a x n)
sub e x n = e // constants are left unchanged
The function freeVars yields a list containing the free variables of the given expression. The expression newVar n l yields the first variable, starting at V n, that does not occur in the list of variables l. This is used to prevent undesirable binding of variables in examples such as (λa.a b)[b := a c]. By renaming the bound variable this is transformed to (λd.d b)[b := a c], which correctly yields λd.d (a c).
The complete function hnf1 also contains alternatives for the constants Plus and If. When both arguments of the Plus are constants, the expression is replaced by a new constant; otherwise hnf1 tries to evaluate the arguments of the addition. For the conditional expression, numbers are interpreted as Booleans: positive numbers are interpreted as the Boolean value True, all other values as False. If the subterm c in a term Ap (Ap (Ap If c) t) e is not a constant, the function hnf1 tries to reduce it.
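A sketch of these additional alternatives, reconstructed from the description above (not the paper's literal code):

hnf1 (Ap (Ap Plus (Const (C x))) (Const (C y))) = (True, Const (C (x + y))) // primitive addition
hnf1 (Ap (Ap (Ap If (Const (C c))) t) e) = (True, if (c > 0) t e) // a positive condition selects t

When an argument of Plus or the condition of If is not yet a constant, hnf1 first tries to reduce that subterm.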
As a first step to determine convertibility we define α-equality. Two expressions are α-equal (result OK) if they can be made identical by α-conversion of the variables introduced by abstractions within the expressions. If the expressions are not α-equal and are not in normal form, the result is undefined (Undef). Otherwise the expressions are clearly not α-equal, and the result is CE.
alphaEQ :: Expr Expr → Result
alphaEQ (Var x) (Var y) = if (x==y) OK CE
alphaEQ (Var x) (Abs v e) = CE
alphaEQ (Abs v e) (Var y) = CE
alphaEQ (Abs x e1) (Abs y e2)
  | x == y = alphaEQ e1 e2
  | not (isMember x (freeVars e2)) = alphaEQ e1 (sub e2 y (Var x)) // α-conversion: e2[y:=x]
  | not (isMember y (freeVars e1)) = alphaEQ (sub e1 x (Var y)) e2 // α-conversion: e1[x:=y]
  | otherwise = alphaEQ (sub e1 x (Var v)) (sub e2 y (Var v)) // α-conversion: e1[x:=v] and e2[y:=v]
  where v = newVar startVal (freeVars e1 ++ freeVars e2) // a fresh variable
alphaEQ e1=:(Ap f x) e2=:(Ap g y)
  = case alphaEQ f g of
      OK = case alphaEQ x y of
             CE = if (isRedex e1 || isRedex e2) Undef CE
             r = r
      CE = if (isRedex e1 || isRedex e2) Undef CE
      Undef = Undef
alphaEQ e1 e2 = Undef
The equivalence (convertibility) of expressions is determined by the infix operator ≜.
(≜) infix 4 :: Expr Expr → Result
(≜) x y = redEQ maxReductions [x] [y]
The constant maxReductions determines the maximum number of reductions done on the
expressions. This is a trade-off between speed and the ability to determine the convertibility
of expressions. In the tests reported in this paper the value 500 was used to full satisfaction.
If we use examples where more reductions are needed to determine equality, the constant
maxReductions should be increased. For most examples a value of 50 is more than enough.
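As a usage sketch, comparing (λa.a) 1 with the constant 1 yields OK after a single reduction step, while comparing the cyclic term ω ω with a constant yields CE once the reduction budget is exhausted:

Start = Ap (Abs (V 0) (Var (V 0))) (Const (C 1)) ≜ Const (C 1) // evaluates to OK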
The real work to determine convertibility is done by redEQ.
The function redEQ gets the number of reduction steps to be done and two stacks of
expressions as arguments. If the number of steps to be done is zero, the result of determining
equality is Undef unless one of the terms shows cyclic reduction and the other is in head
normal form (in that case the result is CE). If the number of steps to be done is not zero, the
function redEQ checks whether the α-equality of the top of one stack (the most recent expression) and one of the elements of the other stack is unequal to Undef. Any result unequal to Undef determines the result of redEQ. If all comparisons for α-equality yield Undef, we try to reduce
the most recent expressions one single step. If such a reduction is possible for at least one
of the terms, the function redEQ continues recursively with the new expressions. Otherwise
we decompose an application or abstraction if it occurs in both expressions to be compared,
and continue with the fragments of the expressions to be compared. In all other situations
the given expressions are unequal under reduction: the result is CE.
redEQ :: Int [Expr] [Expr] → Result
redEQ n lx=:[x:xs] ly=:[y:ys]
  | n == 0 = if ((isMem x xs == OK && ¬(isRedex y)) || (isMem y ys == OK && ¬(isRedex x)))
                CE Undef
  // the remaining alternatives compare the stacks for α-equality, reduce the top
  // expressions one step with hnf1, or decompose equal-shaped applications and abstractions
The function `isMem` looks for a result unequal to `Undef` in the list of results. If such a value exists, the result of the application is the value of the first list element unequal to `Undef`, otherwise the result is `Undef`.
This is sufficient to compare λ-terms. In a number of examples the result of comparison might be undefined, but for each property the test system will generate a lot of test arguments. Usually some of these arguments will show whether the property holds or not. Some improvements of the algorithm to compare expressions are possible. For instance, expressions with a different type, such as `λa.a` and `a`, will always be unequal.
5. Generic generation of λ-terms
In order to find λ-terms matching some property, the test system needs to generate candidate expressions. Since G∀st contains a generic algorithm to generate the members of a type, we can completely derive the generation of candidates. In order to limit the search space (and hence speed up the finding of matching λ-terms) we limit the number of variables to 3 and use only the frequently occurring constants -1, 1 and 2.
The generation of expressions is done by the generic algorithm. We derive an instance of this algorithm for this type by:
derive ggen Expr
If we had used the generic algorithm to derive the generation of values for the types V and C, values like V minint and C maxint would have been generated. We considered this undesirable in this situation and hence used tailor-made instances of ggen for these types. This is all we need to get started.
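Such tailor-made instances could be sketched as follows; the argument list of ggen is elided here because its exact signature differs between G∀st versions, the essential point being the small, fixed set of generated values:

ggen{|V|} … = [V v \\ v <- [0..2]] // only the variables a, b and c
ggen{|C|} … = [C i \\ i <- [-1,1,2]] // only the constants -1, 1 and 2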
5.1. Some examples.
Let's start with a very simple example: a λ-term i with the property ∀x. i x =β x. We state that such a term does not exist: ∀i ∈ Λ. ¬∀x ∈ Λ. i x =β x. The test system will try to find counterexamples to this property. The counterexamples found are exactly the λ-terms obeying i x =β x. The property expressed in G∀st reads:
pI :: Expr → Property
pI i = ¬(∀ \x. Ap i x ≜ x)
G∀st uses the generic algorithm to generate candidate expressions for i and x. Among the first ten identity functions found by testing this property are λa.a, λb.b, λc.c, (λa.a) (λa.a), λa.((λb.a) Plus), λb.((λa.b) Plus), λa.((λb.a) (λa.a)) and λa.((λc.a) Plus).
For these ten matching λ-terms the system had to generate only 464 candidate expressions. Note that we use here a property with a universal quantifier rather than some input-output pairs (like i 1 ≜ 1, i (λa.a) ≜ λa.a, and i ((λa.a a) (λa.a a)) ≜ (λa.a a) (λa.a a)).
After the preparations described above, G∀st is well capable of testing this kind of property. In our opinion the property shown above is clearer and more elegant than explicit input-output pairs for the function i. Of course it is still possible to search for functions using input-output pairs.
In the same spirit we can look for terms representing the \( K \)-combinator by:
pK :: Expr → Property
pK k = ¬(∀ \x y. Ap (Ap k x) y ≜ x)
As expected the system produces terms like λa.λb.a and λb.λa.b within a similar number of candidates. The system also finds some less obvious terms like λa.λb.((λa.a) a) and If 1.
For functions that only have to work on arguments of a specific type, e.g. numeric constants of the form \( \text{Const} \ (C \ i) \), the \( \forall \) operator will generate undesired arguments if the type of arguments is \( \text{Expr} \). It is not relevant to know what a plus operator does on free variables or arguments like \( \lambda a. a \), hence we should exclude them from the property and the tests. This problem can easily be tackled by using a quantification over type \( \text{Int} \) and the needed type conversion in the property.
pPlus :: Expr → Property
pPlus p = ¬(∀ \a b. Ap (Ap p (Const (C a))) (Const (C b)) ≜ Const (C (a+b)))
This will produce correct λ-terms for p like Plus, (λa.a) Plus, λa.Plus a, (λa.Plus) b, (λa.Plus) a, and (λa.Plus) (-1 -1). If we do not want to use all integers in the property, or have only specific input-output combinations available, the property will not contain a ∀-operator. We use the given input-output pairs in the property. For example:
pF1 :: Expr → Result
pF1 p = ¬(f1 ∧∧ f2 ∧∧ f3) // f1, f2, f3 abbreviate f applied to three given input-output pairs
where f a b c = Ap (Ap p (Const (C a))) (Const (C b)) ≜ Const (C c)
The operator \( \land \land \) is the logical and for values of type Result. Matching \( \lambda \)-terms found are
\( \lambda a.\lambda b.(Plus b) b, \lambda a.\lambda b.(Plus a) b, \lambda a.\lambda b.(Plus b) a, \) and \( \lambda a.\lambda b.(Plus ((Plus b) b)) 0 \).
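To see why such a term satisfies the property, consider the second one applied to two constants: (λa.λb.(Plus a) b) x y →β (λb.(Plus x) b) y →β (Plus x) y, which the Plus rule of hnf1 reduces to the constant x + y.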
Although these examples work fine, they also show that expressions are generated that are usually considered undesirable, like λa.Plus b (where b is a free variable) and λa.λb.((λa.Plus) (-1 -1)) (with the constant -1 at a function position). In these examples such terms are merely curious, but they do occupy space in the search space and hence time during the search for the desired λ-terms. If we search for λ-terms implementing the Y-combinator by testing:
pF :: Expr → Property
pF y = ¬(∀ \f. Ap y f ≜ Ap f (Ap y f))
no success is found in the first 1,000,000 tests. The search space is simply too large to find a suitable term in a reasonable time.
6. Smarter generation of λ-terms
There are umpteen ways to reduce the search space. An unattractive alternative is to reject candidates that represent wrongly typed λ-terms in the property, since these terms are still generated and hence consume resources. It is better to prevent the generation of terms that are clearly unsuitable, taking care not to eliminate the wanted terms. In this section we describe an approach to generate better candidate terms.
With a few simple restrictions we can generate much better candidate λ-terms. First, we always generate a number of abstractions that corresponds to the number of arguments needed by the function at hand. Second, there is no need to generate open λ-terms: during generation we keep track of the bound variables and only generate them at applied occurrences. In principle this can be improved further by also keeping track of the types of these variables, as done by Katayama [6]. Third, we generate the right number of arguments for constants like Plus and If. Fourth, it is useless to generate numerical constants as the first argument of an Ap. Fifth, if the right constants are generated, there is no need to generate complex subexpressions containing only constants.
For \( pF \) defined above we clearly need a higher order function. Hence we add the introduction of new abstractions in the generated expressions. The function \( ho \) generates higher order candidate functions. The first argument of \( ho \) is the number of arguments needed, the second argument a list of the bound variables, and the last argument the name (number) of the next argument.
ho :: Int [V] Int → [Expr]
ho 0 vs x = r
where
  r = [Const (C i) \\ i <- [-1,1]] |. l
  l = [Var v \\ v <- vs]
      |. [Ap e1 e2 \\ (e1,e2) <- diag2 l r]
      |. [Abs (V x) e \\ e <- ho 0 [V x:vs] (x+1)]
      |. [Ap (Ap Plus e1) e2 \\ (e1,e2) <- diag2 l r]
ho i vs x = [Abs (V x) e \\ e <- ho (i-1) [V x:vs] (x+1)]
As a grammar this is:
\[ h_m = \lambda v_1 \ldots \lambda v_m .\ r \]
where
\[ r = l \mid -1 \mid 1 \]
\[ l = v_m \mid \ldots \mid v_1 \mid l\ r \mid \lambda v_{m+1}.\ h_0 \mid (\mathit{Plus}\ l)\ r \]
The infix operator \( | . \) merges two lists into a single list by taking elements from the argument lists in turn.
(|.) infixl 4 :: [x] [x] → [x]
(|.) [a:x] y = [a: y |. x] // note the swap of the arguments
(|.) [] y = y
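For example, [1,3,5] |. [2,4] evaluates to [1,2,3,4,5]: the head of the left list is produced first, after which the roles of the two argument lists are swapped. This interleaving keeps the enumeration fair when some of the merged lists are infinite.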
Now the first 1,000,000 candidates generated contain 7 Y-combinators, for example λa.(λb.a (b b)) (λb.a (b b)) and some less obvious variants of it.
Most interesting functions are recursive. However, this does not imply that we need to generate higher order λ-terms. It is sufficient to generate terms containing an application of a predefined Y-combinator. Moreover, for recursive functions that yield a nonrecursive type, like Int, it is essential that the terms contain a stop condition. That is, after the Y-combinator there should be a conditional (an If) before the recursive occurrence of the function. This is exactly what the generator of λ-terms fun does.
fun :: Int → [Expr]
fun n = [abs 1 n e \\ e <- r []] |. [Ap exprY (abs 0 (n+1) e) \\ e <- rFun]
where
  vars = [Var (V v) \\ v <- [1..n]] // V 0 is the recursive function if it exists
  r c = [Const (C i) \\ i <- [-1,1,-2]] |. e c
  e c = vars |. [Ap (Ap Plus e1) e2 \\ (e1,e2) <- diag2 (e c) (r c)] |. c
  rFun = [Ap (Ap (Ap If c) t) e
         \\ (c,t,e) <- diag3 simple (e (rApp n)) ([Const (C i) \\ i <- [0,1]] |. vars)]
  rApp 0 = [Var (V 0)]
  rApp n = [Ap f a \\ (f,a) <- diag2 (rApp (n-1)) simple]
  simple = vars |. [Ap (Ap Plus v) c \\ (v,c) <- diag2 vars [Const (C i) \\ i <- [-1,2]]]
  abs x 0 e = e
  abs x m e = Abs (V x) (abs (x+1) (m-1) e)
Using this generation function we will look for a term that implements multiplication by repeated addition. Since we want to prevent (very) large values as arguments for this multiplication function (it is \( O(n) \)), we select some test values manually rather than using a quantification over all integers.
pTimes p = ¬(f 0 3 ∧∧ f 2 4 ∧∧ f 7 5 ∧∧ f 3 0)
where f a b = Ap (Ap p (Const (C a))) (Const (C b)) ≜ Const (C (a*b))
We look for 2-argument terms generated by the function fun by testing pTimes For (fun 2).
The system produces multiplication functions for non-negative numbers of the form

Y (λa.λb.λc.((If c) ((Plus ((a ((Plus c) -1)) b)) b)) 0) and
Y (λa.λb.λc.((If c) ((Plus ((a ((Plus c) -1)) b)) b)) c),

and terms that a human is more likely to write, such as

Y (λa.λb.λc.((If b) ((Plus ((a ((Plus b) -1)) c)) c)) 0).
In these expressions we use Y as an abbreviation of the term λa.(λb.a (b b)) (λb.a (b b)). The first two terms are somewhat peculiar due to the swap of arguments. As a direct recursive function the second term is:

f b c = if (c > 0) (b + f (c-1) b) c

Although this function looks extraordinary because of the swap of arguments, it computes the desired product of non-negative arguments.
Terms for \( s\,n = \sum_{i=0}^{n} i \) are found by testing pSum For (fun 1) with

pSum p = ¬(f 3 ∧∧ f 5) where f a = Ap p (Const (C a)) ≜ Const (C (sum [1..a]))

The first term found is Y (λa.λb.((If b) ((Plus (a ((Plus b) -1))) b)) 0).
In the same way we can look for λ-terms matching f 4 = 5 and f 5 = 8 from the introduction by looking for counterexamples to:

pFib p = ¬(f 4 5 ∧∧ f 5 8) where f a b = Ap p (Const (C a)) ≜ Const (C b)
The first solutions found by testing pFib For (fun 1) are not the Fibonacci function found in our earlier work, but nonrecursive terms such as
λb.((Plus ((Plus ((Plus ((Plus b) -1)) -1)) -1)) b)
and some single-recursive terms like
Y (λa.λb.((If ((Plus b) -2)) ((Plus (a ((Plus b) -1))) ((Plus ((Plus b) -1)) -1))) b)
Counterexample 13, found after 1583 tests, is the first (double-recursive) Fibonacci function:

Y (λa.λb.((If ((Plus b) -1)) ((Plus (a ((Plus b) -2))) (a ((Plus b) -1)))) 1)
By adding $f 6 = 13$ to the patterns to be matched, this is the first term found.
The speed of generating and testing candidate functions depends strongly on the condition that has to be evaluated and the size of the expression. On a rather slow (1GHz) Windows XP laptop we measured a speed of 500 to 200,000 candidate terms per second.
7. Discussion and related work
In this paper we demonstrate that it is possible to find λ-terms matching some condition by systematic synthesis of candidate expressions. Since we want to be able to find terms like the Y-combinator, restricting ourselves to terminating expressions is no option. This implies that testing the suitability of a candidate expression is rather delicate. The equivalence of terms is known to be undecidable. In this paper we used a simple approximation: a finite (and rather small) number of normal order reduction steps is done on the terms to be compared. If the reduction sequences contain elements that are α-equal, the terms are convertible. If the terms reduce to unequal normal forms, they are not convertible. Otherwise the equivalence has the value undefined.
It appears that the number of $\lambda$-terms is too large to find most interesting terms by brute force search in reasonable time. We have introduced two rather simple but effective generators for expressions. The first one generates higher order terms like the famous $Y$-combinator. The second one generates (recursive) functions like multiplication by repeated addition and the Fibonacci function. By using type information it is possible to generate candidate functions even more effectively. Katayama [6] uses this in his generation of functions matching examples. He generates only first order terms, all other things (like recursion) have to be defined as a recursion pattern in a library of primitive functions.
Broda [5] and Wang [11] discuss algorithms to generate λ-terms randomly. Broda uses a grammar to specify the type of the terms, somewhat similar to our generation functions. Henk Barendregt has touched upon the generation of λ-terms via enumeration in [1, 2, 4].
By adding constructs like the $Y$-combinator and multiplication to the terms, the generated terms become more powerful. Hence complex functions will be found quicker.
If nontermination were not an issue, it would be more elegant to introduce a type class eval to evaluate instances of various grammars represented as types [9]. The grammar of the candidate terms can then elegantly and effectively be determined by more specific types. The types control the generation of candidate functions at a higher level of abstraction than the generation functions used in Section 6. However, when intermediate terms in the reduction need to be compared, as is needed to compare λ-terms for equivalence, this is not possible.
We find $\lambda$-terms generalizing the behavior of the given input-output pairs or properties. Both the obvious functions and more surprising $\lambda$-terms are synthesized. If the goal is to find only primitive recursive functions the direct approach in [9] is more effective. This paper shows that it is possible to find the primitive recursive functions as well as other $\lambda$-terms like the $Y$-combinator. The advantage of the approach introduced in this paper is that it is able to synthesize $\lambda$-terms for general properties without the need to define a very precise grammar for the candidate functions. Some guidance is needed to find larger terms, but the generators might produce totally wrong candidates (like ill-typed terms or terms with nonterminating reduction sequences) without causing any trouble.
**ACKNOWLEDGEMENT**
The authors wish to thank the anonymous referees, the editors and Peter Achten for their suggestions to improve this paper.
**REFERENCES**
Formally Proving and Enhancing a Self-Stabilising Distributed Algorithm
Camille Coti\(^1\), Charles Lakos\(^2\), and Laure Petrucci\(^1\)
\(^1\) LIPN, CNRS UMR 7030, Université Paris 13, Sorbonne Paris Cité
99, avenue Jean-Baptiste Clément
F-93430 Villetaneuse, FRANCE
\(^2\) Computer Science, University of Adelaide
Adelaide, SA 5005, AUSTRALIA
Charles.Lakos@adelaide.edu.au
Abstract. This paper presents the benefits of formal modelling and verification techniques for self-stabilising distributed algorithms. An algorithm is studied, that takes a set of processes connected by a tree topology and converts it to a ring configuration. The Coloured Petri net model not only facilitates the proof that the algorithm is correct and self-stabilising but also easily shows that it enjoys new properties of termination and silentness. Further, the formal results show how the algorithm can be simplified without loss of generality.
1 Introduction
Goals and contributions This paper aims at using a formal model and associated verification techniques in order to prove properties of a self-stabilising distributed algorithm. Although the algorithm considered [8] was shown self-stabilising, the proof was lengthy and cumbersome. Using formal models, in this case Petri nets, which are particularly well-suited for such distributed algorithms, provides a much shorter and more elegant proof. It also allows for easily deriving additional properties which were not proven in the past: correctness, termination and silentness. Finally, reasoning on the model leads to a simplification of the algorithm, and a reduction in the number of exchanged messages. This simplification, which now appears straightforward, is not obvious at all when considering the distributed algorithm alone.
Context A distributed system consists of a set of processes or processors. Each process \(k\) has a local state \(s_k\), and processes communicate with each other via communication channels that have local outgoing and incoming queues. The configuration, \(C\), of a distributed system is the set of local states of its processes together with the queues of messages that have yet to be sent or received.
Self-stabilising algorithms are distributed algorithms that, starting from any initial configuration and executing the set of possible transitions in any order, make the system converge to a legitimate configuration. This property makes self-stabilising algorithms suitable for many fields of application, including fault-tolerant systems: failures take the system out of its legitimate state and the algorithm is executed to reach a legitimate configuration again. Other properties that may be of interest are the closure property, which ensures that once it has reached a legitimate configuration the system remains in it, and the silent property, which states that once it has reached a legitimate state, inter-process communications are fixed. This means that the communication channels hold the same value: processes stop communicating (message queues and communication channels remain empty) or repeatedly send the same message to their neighbours.
Self-stabilisation must not be mistaken for self-healing. Self-healing algorithms detect a process has failed and re-knit the topology. For instance, if a set of processes are interconnected by a topology and one of them dies, inter-process connections are modified in order to form the topology again. Self-healing is an adaptation to the failure, whereas self-stabilisation considers the post-failure state as a new, non-legitimate state and restarts the algorithm from the beginning. Self-stabilising algorithms are more general than self-healing algorithms and can tolerate any kind of failure. When a failure occurs after the system has stabilised, the system gets out of its legitimate configuration and the algorithm executes again to reach a legitimate configuration again [15]. Besides, self-stabilising algorithms ensure self-healing, because after the failure has stopped, they ensure that the system reaches a legitimate state in finite time.
It is not always an easy task to prove the aforementioned properties related to self-stabilisation. It is necessary to prove that these properties hold for any possible execution and starting from any initial state. Some techniques exist for this, coming from traditional distributed algorithm techniques, or specific techniques such as attraction and rewriting [6]. Some algorithms can also be formalised using a proof assistant [13].
One technique to prove that a system is self-stabilising with respect to a set of legitimate states consists in considering a norm function which is representative of the state of the system, and proving that this function is integral, positive and strictly decreasing as the algorithm is executed [19].
Outline of the paper In this paper, we examine a distributed, self-stabilising algorithm that, given a set of processes interconnected by any tree topology, builds a ring topology between them. This algorithm is interesting since it is a stepping stone to a binomial graph (BMG) configuration which has several desirable properties, such as performance (low diameter and degree) and robustness (in terms of number of processes and communication links that can fail without disconnecting the graph) [2].
Section 2 presents the algorithm and describes its behaviour. In Section 3 we derive a Coloured Petri Net model of the algorithm. The Coloured Petri Net formalism is ideally suited for this task because of its support for concurrency and the variability of sequencing and timing of distributed processes. Further, the graphical representation of Petri Nets can help to highlight the flow of information in the algorithm. This immediately leads to some simplifications which are presented in Section 4, along with certain invariant and liveness properties.
2 The Algorithm

In this paper, we focus on a distributed, self-stabilising algorithm that, given a set of processes interconnected by any tree topology, builds a ring topology between them. This algorithm was originally described in [8] and we reproduce its pseudocode in Algorithm 1.
This algorithm is meant to establish a fault-tolerant, scalable communication infrastructure. For instance, it can be used to support the execution of parallel processes [11] such as the processes of MPI programs [18]. Robust, fault-tolerant run-time environments [17] are necessary to support middleware-level, automatic fault-tolerance [7] or application-level fault-tolerance [20, 9, 25]. In order to be efficient at large scale, tree topologies are often preferred [5]. However, trees are not robust and connectivity can be lost if intermediate processes fail. Resilient extensions of trees introduce additional inter-process connections, such
Algorithm 1: self-stabilising algorithm that builds a ring from any tree
Constants:
- Parent : ID /* empty if I am the root of the tree */
- Children : List(ID) /* empty if I am a leaf process */
- Id : ID /* my own identifier */
- Pred : ID /* both empty at start-up time */
- Succ : ID
Initialisation:
1 - Children ≠ ∅ → /* I have children: send an F_Connect message to my leftmost child. */
Succ = First(Children) ;
Send (F_Connect, Id) to Succ ;
2 - Children = ∅ → /* I am a leaf process */
Send (Info, Id) to Parent ;
Run:
3 - Recv (F_Connect, I) from p → /* I received an F_Connect; from my parent? If so, here is my predecessor. */
if p = Parent then Pred = I;
4 - Recv (Info, I) from p → /* I have received an Info message. From whom? */
if p ∈ Children then
let q = next(p, Children) ;
if q ≠ ⊥ then
Send (Ask_Connect, I) to q ;
else
if Parent ≠ ⊥ then
Send (Info, I) to Parent ;
else
Pred = I ;
Send (B_Connect, Id) to I ;
end if
end if
end if
5 - Recv (Ask_Connect, I) from p → /* I am being asked to connect to a leaf process. */
Pred = I ;
Send (B_Connect, Id) to I ;
6 - Recv (B_Connect, I) from p → /* I received a B_Connect; here is my successor. */
Succ = I ;
as k-sibling trees [3] or redundant storage of connection information [4]. The Binomial Graph topology (BMG) is scalable and allows efficient communications [24,10] while being robust. More precisely, a BMG made of $N$ processes has a diameter in $O(\log N)$, therefore a message is routed between two processes in at most $O(\log N)$ hops. It has a degree in $O(\log N)$, which means that every process needs to handle $O(\log N)$ open connections. Every node sees the other nodes along a binomial tree; therefore, efficient (in terms of number of messages) collective communications can be implemented directly on top of a BMG. Its node connectivity and its link connectivity are both in $O(\log N)$, which means that $O(\log N)$ nodes or $O(\log N)$ network connections can fail without its topology becoming disconnected [2].
In this case, the legitimate states of the algorithm are those in which the processes are connected forming a ring topology. The non-legitimate states are all the (connected) topologies, from which a tree can therefore be extracted. In particular, a ring that has lost one process forms a chain, which is a particular form of tree (unary tree). If a new process is spawned, the processes form a tree rooted on the process that spawned the new one, with one long branch (the chain) and a short one (the new process).
Processes are spawned on a set of resources and each process is connected to the process that spawned it. This results in a tree topology that matches the shape of the spawning operation (figure 1(a)).
To connect with another process, a process needs to know the remote process's communication information (e.g. an IP address and a port). This communication information and the name of a process in a naming system (e.g. a rank) form the identity of this process. In the initial state, processes know their own identity, that of their parent and of their children. The algorithm allows for concurrent or even parallel propagation of the identity of leaf processes in order to establish connections between some leaves and some intermediate processes. The algorithm sets (additional) Succ and Pred pointers to reflect the node order of a depth-first traversal. It achieves this by propagating information bottom-up from the leaves, thereby allowing for parallel execution. Besides, inter-process communications are asynchronous.
In the algorithm, \( ID \) denotes the set of possible process identities. \( \text{List}(ID) \) is an ordered list of identities; \( \text{First}(L) \) returns the first element in the list \( L \), and \( \text{next}(e, L) \) returns the element following \( e \) in \( L \). If there is no such element, these functions return \( \perp \), which is also used to denote a non-existing information.
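For example, for node P1 of figure 1(a) with Children = [P3, P4], we have First(Children) = P3, next(P3, Children) = P4 and next(P4, Children) = ⊥.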
During the first step, each non-leaf process sends an \( F\_\text{Connect} \) (or \( FC \) for short) message to its oldest (i.e. leftmost) child (rule 1). Thus, each process that receives an \( FC \) message is connected to its predecessor (rule 3). Concurrently, each leaf process sends its identity to its parent in an \( \text{Info} \) message (rule 2 and figure 1(b)). In other words, the \( \text{Info} \) message contains the communication information of the process that built it, i.e. how it can be contacted by another process to create a new connection between them.
Then each time a process receives an \( \text{Info} \) message from one of its children (rule 4), it forwards this information to the younger sibling as an \( \text{Ask}\_\text{Connect} \) message (or \( AC \), see figure 1(c)). The message forwarded here contains the identity of the rightmost leaf process of its older sibling. Therefore, the child that receives the \( AC \) message will then be able to send a \( B\_\text{Connect} \) (or \( BC \)) message (rule 5) to the rightmost leaf process of its older (i.e. left) sibling. As a consequence, each process that receives a \( BC \) is connected to its successor (rule 6). The root process receives the identity of the rightmost leaf of the tree from its youngest (i.e. rightmost) child and establishes a \( BC \) connection with it.
Eventually, all the leaves are connected to another process (figure 1(d), where the colours relate to the different transitions sequences of Section 6), and the set of \( FC \) and \( BC \) connections forms the ring.
3 A Coloured Petri Net Model
Algorithm 1 was proved to be self-stabilising in [8], but the proof was lengthy and cumbersome. Moreover, the algorithm enjoys additional properties that were not established before, such as being silent.
Therefore, we build a formal model of the algorithm which provides the following features:
- a graphical representation for an easier and better understanding of the system’s behaviour;
- a global view of the system states at a glance, allowing to focus on the flow of messages;
- facilities for property analysis and formal reasoning.
A coloured Petri net (CPN) model [22, 21] is thus designed, capturing the flow of information and the individual actions in its graph, and the actual essential values in the data it manipulates. For the sake of readability, it is presented in 3 parts (Figures 2(a), 2(c) and 2(b)) corresponding to the initialisation phase, the core of the algorithm, and the termination phase, respectively. Should it be presented in a single figure, the places with the same name would be fused together. Also note that the arcs connected to place Messages (and the associated arc inscriptions) are coloured according to the messages they handle. This has no formal meaning but enhances readability, which is particularly useful when illustrating message flow in the proofs.
3.1 Data Types Declaration
The type declarations in Figure 3(a) show all data types that are to be used in the CPN. First, the processes are identified as members of a set Proc of processes (of the form {P1, …}). This set also includes a particular fake process that is used to denote that the parent or child of a process does not exist (when it is respectively the root or a leaf). This corresponds to ⊥ in the algorithm. Type 2Proc is a pair of process names. Then MessType describes all four types of messages: FC, AC, BC, and Info. A message also contains a process identifier, its sender and its receiver.
The algorithm we model makes use of the tree topology with parent and child relation plus the next child in a parent’s list. To model this, we use triples consisting of the parent, the child, and the number of the child in the list of children. The fake child is always the last one in the list, thus denoting its end.
For example, the tree in Figure 1(a) is modelled by the set of triples:
Proc = set of processes ∪ {fake};
2Proc = Proc x Proc;
MessType = \{FC, AC, BC, Info\};
Mess = MessType x Proc x Proc x Proc;
TreeStructure = Proc x Proc x Int;
(a) Type declarations
\textit{InitP}: Proc;
\textit{Pred}, \textit{Succ}: 2Proc;
\textit{Messages}: Mess;
\textit{TreeTopology}: TreeStructure;
(b) Places declarations
Tree = initial topology;
c, f, I, p, q, r: Proc;
m, n: Int;
(c) Variables and constants declarations
Fig. 3: CPN declarations
{(fake, P0, 1), (fake, fake, 2), (P0, P1, 1), (P0, P2, 2), (P0, fake, 3),
(P1, P3, 1), (P1, P4, 2), (P1, fake, 3), (P2, P5, 1), (P2, fake, 2), (P3, P6, 1),
(P3, P7, 2), (P3, P8, 3), (P3, fake, 4), (P4, P9, 1), (P4, fake, 2), (P5, fake, 1),
(P6, fake, 1), (P7, fake, 1), (P8, fake, 1), (P9, fake, 1)}.
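Each triple is read as follows: (P0, P1, 1) records that P1 is the first child of P0, while (P1, fake, 3) marks the end of P1's list of children.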
3.2 Initialisation Phase
The initial phase of the algorithm is modelled by the CPN of Figure 2(a). It describes all the necessary places with their initial marking (in italics) of the type described in the places declarations in Figure 3(b). Figure 3(c) shows the types of variables used in arc expressions as well as the initial \textit{Tree} topology constant.
At the start, there is no message, all processes (except the fake one) are ready to start. No process knows a possible successor or predecessor in the ring, hence is associated with the fake process in the \textit{Pred} and \textit{Succ} places. The tree topology to be processed is described by the constant \textit{Tree}.
The initialisation of the \textit{Pred} and \textit{Succ} places with the specific \textit{fake} process models the fact that, as stated in section 1, self-stabilising algorithms can start from any arbitrary initial state. In our case, we are initialising the predecessor and successor of each process in the ring with bogus values.
Transition T1 models rule 1 in the algorithm. A process \(p\) with \(c\) as first child is processed, sending an \textit{FC} message with its identity to this child, and updating its successor information with this child’s identity.
Every leaf process \(p\) executes rule 2 of the algorithm, as depicted by transition T2. It is a leaf if described by \((p, \text{fake}, 1)\) in the tree topology, and it is the child number \(n\) of some (non-fake) parent process \(f\) \((f, p, n)\) in the tree). It then sends an \textit{Info} message with its identity to its parent.
3.3 Main Phase
The CPN in Figure 2(c) describes the core part of the algorithm, i.e. the processing of \textit{FC} and \textit{Info} messages. Transition T3 handles an \textit{FC} message, as rule 3 of the algorithm by updating the predecessor information of the receiver of the
message. Rule 4 is decomposed into 3 transitions corresponding to the different possible receiver configurations:
T4a: relative to receiver r, the sending child has a next sibling q to whom the received information is forwarded as an AC message;
T4b: relative to receiver r, the sending child is the last in the list (i.e. the youngest sibling) and the receiving node is not the root (i.e. it has a parent q which is not the fake one) to which it forwards the Info message;
T4c: relative to receiver r, the sending child is the last in the list (i.e. the youngest sibling) and the receiver is the root (i.e. it has no parent: its parent is the fake one). It updates its predecessor with the information I received and sends a BC message with its own identity to process I.
3.4 Termination Phase
Finally, the termination phase, shown in Figure 2(b) handles the AC and BC messages, using transitions T5 and T6 respectively. In case of an AC message, the predecessor information of receiver r is updated with the content I of the message. It also sends a BC message to this process I with its identifier r. When a BC message is handled, only the successor information of the receiver r is updated with the identity I carried by the message.
4 A Simplified Coloured Petri Net Model
In this section, we first simplify the CPN model, which then makes it easier to exhibit its invariant properties.
4.1 The Simplified Model
The simplified CPN is given in Figures 4(a), 4(c), 4(b). First, both places Pred and Succ initially contained a bogus value for each process (i.e. (p, fake) for a process p), which was then discarded and replaced by the actual value. In the new net, these two places are initially empty, and the only operation now consists in putting in these places the predecessor and successor values produced.
Second, an FC message is only produced by transition T1, and thus has the form (FC,p,p,c). Thus the information I carried by the message is the identity of the sender process p, I=p. Hence, transition T3 (the only one for the reception of FC messages) is modified by using p only.
4.2 Invariants of the simplified CPN
The Petri net model allows us to identify various properties of the system. Of interest here are the place invariants [22, 21]. These identify weighted sums of tokens which remain invariant under the firing of any transition. We use projection functions such as \( \pi_2 \) to select the second element of a token which has a
tuple value, and \( \pi_{2,4} \) to select the second and fourth elements, to form a pair. We also use a function notation to select elements of a particular type, thus \( Messages(FC) \) is the set of FC messages in place \( Messages \).
It is possible that some of these invariant properties could be extracted automatically from the model by a suitable tool, while others could at least be checked automatically. These properties may then be of assistance in proving more involved results for our system.
When verifying properties of a modelled system, it is important to **validate** the model, *i.e.*, show that it is an accurate model. In this regard, we note that the CPN model does not introduce any new information, but it does make explicit certain values (like the sender and receiver of a message) which then make it easier to prove the invariant properties. Another important issue for validating a distributed algorithm is that the model does not have one process inadvertently accessing information that is local to another process. In the case of our model, we need to ensure that each firing of a transition is relevant to only one process and does not access information local to another process. We note the following properties of each transition:
- **T1** fires for process \( p \) and accesses its first child and generates its \( Succ \) entry.
- **T2** fires for process \( p \) and accesses its parent.
- **T3** fires for process \( r \) and generates its \( Pred \) entry.
- **T4a** fires for process \( r \) and accesses its children.
- **T4b** fires for process \( r \) and accesses its parent and its children.
- **T4c** fires for process \( r \) and checks that it is the root and generates its \( Pred \) entry based on the received message.
- **T5** fires for process \( r \) and generates its \( Pred \) entry based on the received message.
- **T6** fires for process \( r \) and generates its \( Succ \) entry based on the received message.

Having convinced ourselves that the model accurately captures a distributed system, we now consider the properties of the model.
**Property 1** \(\text{InitP} + \pi_1(\text{Succ}) + \pi_2(\text{Messages(Info)}) + \pi_2(\text{Messages(AC)}) + \pi_4(\text{Messages(BC)}) = \text{Proc} \setminus \{\text{fake}\}\)
*Proof.* Initially, we have no messages and \(\text{InitP} = \text{Proc} \setminus \{\text{fake}\}\). Then, we can consider each transition in turn:
- **T1** The initialisation of a parent removes an item from \(\text{InitP}\) and adds a \(\text{Succ}\) entry with the same identity.
- **T2** The initialisation of a leaf removes an item from \(\text{InitP}\) and adds an \(\text{Info}\) message with the relevant identity.
- **T4a** This consumes an \(\text{Info}\) message and generates an \(\text{AC}\) message with the same identity.
- **T4b** This consumes one \(\text{Info}\) message and generates another with the same identity.
- **T4c** This consumes an \(\text{Info}\) message and generates a matching \(\text{BC}\) message.
- **T5** This consumes an \(\text{AC}\) message and generates a matching \(\text{BC}\) message for the destination given by the identity.
- **T6** This consumes a \(\text{BC}\) message and adds a \(\text{Succ}\) entry for the receiver. \(\square\)
**Property 2** \(\text{Succ} + \pi_{4,2}(\text{Messages(BC)}) = \pi_{3,4}(\text{Messages(FC)}) + \pi_{2,1}(\text{Pred})\)
*Proof.* Initially, there are no messages and places \(\text{Succ}\) and \(\text{Pred}\) are empty. Subsequently, we consider the relevant transitions in turn:
- **T1** The setting of \(\text{Succ}\) is paired with the generation of an \(\text{FC}\) message.
- **T3** The consumption of an \(\text{FC}\) message is paired with the addition of a \(\text{Pred}\) entry.
- **T4c** The setting of \(\text{Pred}\) is paired with the generation of a \(\text{BC}\) message.
- **T5** The setting of \(\text{Pred}\) is paired with the generation of a \(\text{BC}\) message.
- **T6** The consumption of a \(\text{BC}\) message is paired with the addition of a \(\text{Succ}\) entry. \(\square\)
4.3 Liveness of the simplified CPN
We now summarise some liveness properties of the CPN.
**Property 3** For any tree with at least two nodes, either transition \(T1\) or transition \(T2\) can fire for every node. Thus, \(\text{InitP}\) will eventually be empty.
*Proof.* A tree with at least two nodes will have a root and at least one leaf. Thus:
1. \(T1\) can fire for every node which is *not* a leaf, *i.e.* for every node which has a non-fake child.
2. \(T2\) can fire for every leaf, *i.e.* for every node which has a non-fake parent. \(\square\)
**Property 4** All messages can eventually be uniquely consumed.
*Proof.* We consider the different kinds of messages in turn:
- **FC** The only constraint on the consumption of \(\text{FC}\) messages by transition \(T3\) is that the identity and the source of the message are the same. This is guaranteed by the generation of \(\text{FC}\) messages in transition \(T1\).
**Info** Every *Info* message can be consumed by one of the transitions *T4a*, *T4b* or *T4c*. Transition *T4a* can consume *Info* messages from node *p* for parent *r* provided *p* has a younger sibling. Transition *T4b* can consume *Info* messages from node *p* for parent *r* provided *p* has no younger sibling and *r* has a (non-fake) parent. Transition *T4c* can consume *Info* messages from node *p* for parent *r* provided *p* has no younger sibling and *r* is the root.
AC There is no constraint on the consumption of *AC* messages by transition *T5*.
BC There is no constraint on the consumption of *BC* messages by transition *T6*. Note that in each case, exactly one transition can consume each kind of message.
Property 4 guarantees that the algorithm is *silent*. Once a legitimate configuration has been reached (i.e. once the ring has been established), there is no pending message held in the communication channels of the system. A silent algorithm is an algorithm in which, upon a certain point of its execution, the contents of the communication channels remain the same [16]. In our case, no message is sent between processes once the system has stabilised, as mentioned in the introduction of this paper. Also, as stated before, the algorithm is restarted in the event of failure.
5 Algorithm Termination
**Definition 1.** We define the weight of the state as follows:
- for each node prior to sending its first message: \( \text{weight}(\text{node}) = 3 + \text{depth}(\text{node}) \)
- for each node after sending its first message: \( \text{weight}(\text{node}) = 0 \)
- for each *FC* message: \( \text{weight}(\text{FC}) = 1 \)
- for each *BC* message: \( \text{weight}(\text{BC}) = 1 \)
- for each *AC* message: \( \text{weight}(\text{AC}) = 2 \)
- for each *Info* message: \( \text{weight}(\text{Info}) = 3 + \text{depth}(\text{target}) \)
Then the total weight of the state is given by: \( \text{Weight} = \sum_{x \in \text{node} \cup \text{msg}} \text{weight}(x) \).
Note that the weight of a state is strictly positive as long as some node has yet to send its first message or some message remains to be delivered, and zero otherwise. As a consequence, the weight function satisfies the point-separation and positivity properties of a norm; absolute homogeneity and the triangle inequality are not relevant in our context. The weight function is therefore a norm on the states (as introduced in Section 1).
**Proposition 1.** During any execution of the algorithm, the weight of the state decreases at every step.
*Proof.* We consider each possible rule in turn:
**rule 1:** The weight of the node is set to zero and the number of *FC* messages is increased by 1. Hence \( \text{Weight} \) is decreased by \( 2 + \text{depth}(\text{node}) \).
**rule 2:** The weight of the node is set to zero and an *Info* message is generated for the parent. Hence \( \text{Weight} \) is decreased by 1.
**Property 5** The algorithm terminates and is self-stabilising.
*Proof.* Following initialisation, every execution step of the algorithm involves at least one of the above rules. Thus, \( \text{Weight} \) is strictly monotonically decreasing: it decreases by at least one at every step while remaining non-negative. Consequently, the algorithm terminates. Moreover, as stated in Section 1, if the norm function \( \text{Weight} \) is strictly monotonically decreasing, then the algorithm is self-stabilising (proof by norm). \(\square\)
## 6 Algorithm Correctness
**Proposition 2.** The algorithm establishes \(\text{Succ}\) and \(\text{Pred}\) as mirror images, i.e. \(\text{Succ} = \pi_{2,1}(\text{Pred})\).

*Proof.* This follows directly from Properties 2 and 4. \(\square\)
**Proposition 3.** The algorithm establishes predecessors of nodes as:
- predecessor(node) = parent of node (case 1: node is the oldest child)
- predecessor(node) = preceding-leaf of node (case 2: node is not the oldest child and not the root)
- predecessor(node) = last-leaf (case 3: node is the root)
*Proof.* We consider each possible case in turn; they correspond to the coloured arcs in Figure 1(d) (red, green and blue arcs, respectively).
**case 1**—firing sequence \( T1T3 \) : Every non-leaf node generates an FC message to its oldest child (T1), which sets the parent to be its predecessor (T3), as required.
**case 2**—firing sequence \( T2T4b*T4aT5[T6] \) : Every leaf generates an \( \text{Info} \) message (T2) which is passed up the tree (T4b) till it finds a sibling. That sibling is sent an \( \text{AC} \) message (T4a) with the identity of the leaf (from the \( \text{Info} \) message). The \( \text{AC} \) message sets the predecessor of the sibling to be the originating leaf (T5), which is the preceding leaf in the tree.
**case 3**—firing sequence \( T2T4b*T4c[T6] \) : Every leaf generates an \( \text{Info} \) message (T2) which is passed up the tree (T4b) till it reaches the root, which will be the case for the last leaf. In this case, the predecessor of the root is set to the last leaf (T4c).

Thus for each possible firing sequence, the *Pred* values are set as required, and Proposition 2 tells us that the *Succ* values are also set as required. \(\square\)
**Proposition 4.** The algorithm sets the *Succ* and *Pred* values so that there is one connected component.

*Proof.* For the purposes of the proof, we consider that a node is connected to the tree until its *Succ* and *Pred* values are set. In this way, the algorithm starts with a tree, which is a single connected component. We then need to show that every time the *Succ* and *Pred* values are changed, the node is still connected to the one component. Thus, we have:

- Every oldest child is connected to the parent (by the *Succ* and *Pred* values), which reflects the tree structure and therefore does not modify the connectedness.
- Every younger sibling is connected to the preceding leaf. Since a leaf has no child, connecting it to another node does not jeopardise the connectivity of the structure. In other words, provided the leaf is connected to the rest of the component, then so is the younger sibling.
- The above items result in a connected structure in which the last leaf has no successor. This last leaf is connected to the root.

Thus the algorithm sets the *Succ* and *Pred* values to form a single connected component. \(\square\)
**Proposition 5.** The algorithm produces a ring topology.

*Proof.* It suffices to have a single connected component where all nodes have exactly one predecessor and one successor. The connectedness is stated by Proposition 4. In the terminal state, there is no message left and \(\text{InitP}\) is empty. Thus, from Property 1, we deduce \(\pi_1(\text{Succ}) = \text{Proc} \setminus \{\text{fake}\}\). Similarly to the cases in the proof of Proposition 4, the different possibilities entail that \(\pi_1(\text{Pred}) = \text{Proc} \setminus \{\text{fake}\}\). \(\square\)
## 7 Slight Simplifications of the Algorithm
From the properties proved in the previous sections, we can again simplify the model and reflect these simplifications in the algorithm.
### 7.1 New Models
The topology described by the predecessor and successor information is a ring, and the two relations are just mirror images of one another, so it is not necessary to keep both. Let us therefore remove place *Pred* from the model of Figures 4(b) and 4(c). In the resulting net, transition \(T3\) only discards FC messages; since no other transition handles these messages, they are also unnecessary, and we remove the arc producing FC messages from the net in Figure 4(a).
The resulting net is depicted in Figures 5(a), 5(c) and 5(b). The figures are meant to show the modifications on the structure of the net, and the inscriptions are not altered.
Additional simplifications are not possible, even though we might be tempted to get rid of \( AC \) messages that are immediately transformed into \( BC \) messages by transition \( T5 \). This modification would not be sound. Indeed, the successor information for a process \( p \) must be updated by process \( p \) itself. This is obviously the case for transition \( T1 \). The same holds for transition \( T6 \), where process \( r \) receives a \( BC \) message and updates its successor information. However, if transition \( T4a \) were to immediately send a \( BC \) message, it would have to be \((BC,q,q,I)\) (to be the same as the one generated by \( T4aT5 \)). But then transition \( T4a \) would handle both the reception of an \( Info \) message by process \( r \) and the sending of a message by its child \( q \), i.e. two different processes, and would thus not be consistent with a distributed implementation.
We could equally well decide to remove place \( Succ \) and keep place \( Pred \). In this case, \( FC \) messages remain while \( BC \) messages are no longer necessary, and transition \( T6 \) is also deleted. To keep this paper compact and avoid redundancy, this variant is not detailed here; it can be found in [12].
### 7.2 New Algorithms
Algorithm 2 shows the corresponding simplified algorithm, where variable \( Pred \) has been removed, as well as \( FC \) messages. Note that rule 3 is no longer necessary, and that some parts of the algorithm are more balanced. Indeed, in the initialisation part, leaf processes send \( Info \) messages while the others only update their successor value (rules 1 and 2), whereas at the end (rules 5 and 6) leaf processes update their successor value and the others send a message to a leaf.
```
Algorithm 2: Successor-based algorithm

Constants:
    Parent   : ID
    Children : List(ID)
    Id       : ID
Output:
    Succ : ID

Initialisation:
1. If Children ≠ ∅ → Succ = First(Children);
2. If Children = ∅ → Send (Info, Id) to Parent;

Run:
4. Receive (Info, I) from p →
       If p ∈ Children then
           Let q = next(p, Children);
           If q ≠ ⊥ then
               Send (Ask_Connect, I) to q;
           Else
               If Parent ≠ ⊥ then
                   Send (Info, I) to Parent;
               Else
                   Send (B_Connect, Id) to I;
               End if
           End if
       Else
           Send (B_Connect, Id) to I;
       End if
5. Receive (Ask_Connect, I) from p →
       Send (B_Connect, Id) to I;
6. Receive (B_Connect, I) from p →
       Succ = I;
```
The main part (rule 4) also features a balanced treatment of *Info* messages: all types of process send a single message only.
The predecessor-based version of this algorithm is similar: variable \( Succ \) is removed instead, as well as \( BC \) messages. To keep this paper compact and avoid redundancy, it can be found in [12].
### 7.3 Comparison of the algorithms
Both new algorithms send fewer messages than the original since, in each case, one type of message is no longer used. Let \( m_1 \), \( m_2 \) and \( m_3 \) be the numbers of messages sent by the original, the successor-based and the predecessor-based algorithms respectively, \( n_l \) the number of leaf nodes and \( n \) the total number of nodes in the tree. We have: \( m_2 = m_1 - 2(n - n_l) \) and \( m_3 = m_1 - 2n_l \).
Therefore, the algorithm to apply depends on the structure of the tree: if there are more leaf nodes than other nodes, the predecessor-based version of the algorithm is preferred; otherwise, the successor-based version (Algorithm 2) is preferred.
## 8 Experimental Confirmation of the Algorithm
While the formal results presented above can stand on their own, we also confirmed them experimentally. The CosyVerif tool [1] provides a graphical front end built on the Eclipse framework [14] for a range of dynamic system formalisms, and it supports a range of backend analysis tools. We prepared a graphical model of the Petri net of Figures 4(a), 4(b) and 4(c) in CosyVerif and analysed the state space using the prod reachability analyser [23].
The example of Figure 1(a) was entered as the initial marking, and prod reported that the state space consisted of 1,275,750 nodes and 9,470,925 arcs, with one terminal node. The terminal node was manually examined to confirm that it represented the appropriate ring structure.
Further validation is considered in the following subsections.
### 8.1 Exploring different topologies with a pre-initialisation phase
As described in Section 4, the algorithm was modelled as a Petri net with Initialisation, Main and Termination phases. Rather than considering just one topology, we introduced a pre-initialisation phase to generate an arbitrary tree topology from a given number of nodes. The Petri net segment for this is given in Figure 6.

Fig. 6: Pre-initialisation phase
The pre-initialisation phase was designed so that each topology would be generated with a unique labelling of nodes. Further, the labels would be such that a depth-first traversal of the tree would result in a ring with node labels in increasing numeric order. To achieve this, the last node added to the tree is stored in place `parents` together with its parents and their associated depths in the tree. The depth of the last node added is given in place `depth`. In order to add another node — with the numeric label given in place `nextNode` — there are two possible options: either the new node is added as a child of the last node added (with the transition `addNode`) or the last node can be discarded (with transition `dropParent`) and its immediate parent becomes a possible candidate for the parent of the new node. Transition `dropParent` can drop all but the first node, in which case the next node to be added will be a child of the root. Transition `addNode` can fire for all nodes up to a given node number. When that node number is reached, transition `nodesDone` can fire and add a token to
Table 1: State space results for the self-stabilisation algorithm.

| Nodes | Topologies | Pre | Nodes |
|------:|-----------:|----:|------:|
| 2 | 1 | 0.001 | 11 |
| 3 | 2 | 0.001 | 103 |
| 4 | 5 | 0.002 | 1,123 |
| 5 | 14 | 0.006 | 172,248 |
| 6 | 42 | 0.006 | 1,301,624 |
| 7 | 132 | 0.063 | >10M |
| 8 | 429 | 0.230 | 232.136 |
| 9 | 1,430 | 0.902 | 1,802.274 |
| 10 | 4,862 | 0.902 | 1,802.274 |
place configOK. This place then becomes a side condition of the initialisation transitions $T_1$ and $T_2$ in Figure 4(a).
With the above pre-initialisation phase, we obtain the state space results of Table 1. The first column indicates the number of nodes in the tree to be processed. The column labelled Topologies indicates the number of tree topologies which can be generated with that number of nodes (as given by the pre-initialisation phase above). The column labelled Pre gives the time taken to execute the pre-initialisation phase for that number of nodes. The next three columns give the total number of nodes and arcs in the state space for all those topologies (combined) as well as the time to process those topologies in seconds.
With the pre-initialisation phase, each run of the Petri net considers a number of topologies, and there is one terminal state per topology. In order to ensure that the only difference between the terminal states is due to the different starting topologies, we can add a post-termination phase to remove the topology from the state, i.e. empty the tokens out of place Topology. This is a simple change to the net, and its addition confirms that irrespective of the starting topology, there is only one terminal node with the correct ring structure.
### 8.2 State space reduction
In the algorithm we note that there are a number of messages exchanged between a node and what will eventually be its successor and predecessor as well as the intermediate nodes. We hypothesise that this set of messages is independent of the sets of messages exchanged between other pairs of nodes. In other words, the complexity of the algorithm is largely due to the arbitrary interleaving of the message transmission.
Accordingly, we experimented with reducing the state space with the stubborn set technique [26], which eliminates much of the interleaving while maintaining the terminal states. This form of reduction can be activated by running prod with option -s. In Table 1, the last four columns give similar results to the preceding four columns, but this time using the stubborn set technique to reduce the size of the state space. The numbers of nodes and arcs demonstrate that the technique retains only one interleaving, with the state spaces reduced to linear graphs. Unfortunately, this comes at considerable computational cost, which we now consider.
Firstly, we note that the prod tool applies the technique to the unfolded net — it unfolds the Coloured Petri Net into a Place/Transition Net. The size of the unfolded net is determined not just by the transitions which can fire in the coloured state space but by the possible range of values for the tokens. Initially, we had allowed for up to 12 nodes and up to 14 children per node. Even for, say, 5 nodes, transitions would be generated in the unfolded net for up to 14 children (even though there can be no more than 4). Consequently, our initial result was that none of our test cases — not even the pre-initialisation phase — reached their terminal states in less than 30 minutes! Accordingly, the nets were modified to reduce the ranges of values for node labels and for child indices to be only slightly larger than required for the example under consideration. With these modifications, we were able to reach terminal states for some of our test cases. Still, it is unclear whether this was good enough or whether there were still extensive unused net components in the unfolded net.
The second source of complexity is the computational penalty for computing the stubborn sets themselves; as a result, the reduction technique may or may not be effective overall. The results clearly show that the size of the state space can be reduced, but the computational penalty can be overwhelming: even the pre-initialisation phase can be very expensive.
## 9 Conclusions
This paper has demonstrated the benefits of using formal techniques in the analysis of a non-trivial distributed algorithm for converting a tree structure of processes (or processors) into a ring structure.
In such an exercise, the choice of formalism is significant. The Petri Net formalism has proved to be ideal because of its support for concurrency and the variability of sequencing and timing of concurrent processes. In particular, we did not need to make any assumptions about the synchronisation of the communications nor, when several transitions were enabled, their order of firing.
We built a model of the distributed algorithm and then validated it, i.e. ensured that it accurately reflected the modelled system. In our case, it was important to ensure that the model faithfully reflected the distributed nature of the algorithm. Thus, we examined each transition to ensure that it only accessed information local to a given process.
Having modelled and validated the system, we observed that, without adding any new information, making the source and target of each message explicit facilitated the identification of some invariant and liveness properties. These were then utilised to prove termination and correctness of the algorithm, and that it is self-stabilising and silent. These properties could easily be exhibited on the model, but they are far from obvious when considering the algorithm itself.
Further, the above properties helped us to identify non-essential information which then allowed us to simplify the algorithm, leading to a more efficient one. We also employed automated tools to explore the state space of the system. This validated our earlier results and confirmed that the complexity of the algorithm was due to the level of concurrency, which was reflected in the large state space. While this state space could be significantly reduced using the stubborn set technique, the cost of doing so quickly became prohibitive.
The approach adopted in this paper presents several advantages: first, proving invariant properties implies that the algorithm is correct whatever the initial tree topology; second, the encoding of the network topology is crucial, and the approach can be generalised to other algorithms provided a suitable encoding of the topology they address.
References
ESET’S GUIDE TO DEOBFUSCATING AND DEVIRTUALIZING FINFISHER
## CONTENTS

- Introduction
- Anti-disassembly
- FinFisher's virtual machine
  - Terms and definitions
  - Vm_start
- FinFisher's interpreter
  1. Creating an IDA graph
  2. Vm_dispatcher
  3. Vm_context
  4. Virtual instruction implementations – vm_handlers
  5. Writing your own disassembler
  6. Understanding the implementation of this virtual machine
  7. Automating the disassembly process for more FinFisher samples
  8. Compiling disassembled code without the VM
- Conclusion
- Appendix A: IDA Python script for naming FinFisher vm_handlers
INTRODUCTION
Thanks to its strong anti-analysis measures, the FinFisher spyware has gone largely unexplored. Despite being a prominent surveillance tool, only partial analyses have been published on its more recent samples.
Things were put in motion in the summer of 2017 with ESET's analysis of FinFisher surveillance campaigns it had discovered in several countries. In the course of our research, we identified campaigns where internet service providers most probably played the key role in compromising the victims with FinFisher.
When we started thoroughly analyzing this malware, the main part of our effort was overcoming FinFisher’s anti-analysis measures in its Windows versions. The combination of advanced obfuscation techniques and proprietary virtualization makes FinFisher very hard to de-cloak.
To share what we learnt in de-cloaking this malware, we have created this guide to help others take a peek inside FinFisher and analyze it. Apart from offering practical insight into analyzing FinFisher’s virtual machine, the guide can also help readers to understand virtual machine protection in general – that is, proprietary virtual machines found inside a binary and used for software protection. We will not be discussing virtual machines used in interpreted programming languages to provide compatibility across various platforms, such as the Java VM.
We have also analyzed Android versions of FinFisher, whose protection mechanism is based on an open source LLVM obfuscator. It is not as sophisticated or interesting as the protection mechanism used in the Windows versions, thus we will not be discussing it in this guide.
Hopefully, experts from security researchers to malware analysts will make use of this guide to better understand FinFisher’s tools and tactics, and to protect their customers against this omnipotent security and privacy threat.
ANTI-DISASSEMBLY
When we open a FinFisher sample in IDA Pro, the first protection we notice in the main function is a simple, yet very effective, anti-disassembly trick.
FinFisher uses a common anti-disassembly technique – hiding the execution flow by replacing one unconditional jump with two complementary, conditional jumps. These conditional jumps both target the same location, so regardless of which jump is made, the same effective code execution flow results. The conditional jumps are then followed by garbage bytes. These are meant to misdirect the disassembler, which normally will not recognize that they are dead code, and will steam on, disassembling garbage code.
What makes this malware special is the way in which it uses this technique. In most other malware we’ve analyzed, it is only used a few times. FinFisher, however, uses this trick after every single instruction.
This protection is very effective at fooling the disassembler – many parts of the code aren’t disassembled properly. And of course, it is impossible to use the graph mode in IDA Pro.
Our first task will be to get rid of this anti-disassembly protection.
The code was clearly not obfuscated manually but with an automated tool and we can observe a pattern in all the jump pairs.
There are two different types of jump pairs – near jump with a 32-bit offset and short jump with an 8-bit offset.
The opcodes of both conditional near jumps (with a dword as a jump offset) start with a 0x0F byte; while the second bytes are equal to 0x8?, where ? in both jump instructions differs only by 1 bit. This is because x86 opcodes for complementary jumps are numerically consecutive. For example, this obfuscation scheme always pairs JE with JNE (0x0F 0x84 vs 0x0F 0x85 opcodes), JP with JNP (0x0F 0x8A vs 0x0F 0x8B opcodes), and so on.
These opcodes are then followed by a 32-bit argument specifying the offset to the destination of the jump. Since the size of both instructions is 6 bytes, the offsets in two consequent jumps differ exactly by 6. (Figure 1)
For example, the code below can be used to detect two of these consecutive conditional jumps:
```python
def is_jump_near_pair(addr):
    # byte layout: 0F 8? <dword offset> 0F 8? <dword offset>
    jcc1 = Byte(addr+1)
    jcc2 = Byte(addr+7)
    # do they start like near conditional jumps?
    if Byte(addr) != 0x0F or Byte(addr+6) != 0x0F:
        return False
    # are there really 2 consequent near conditional jumps?
    if (jcc1 & 0xF0) != 0x80 or (jcc2 & 0xF0) != 0x80:
        return False
    # are the conditional jumps complementary (opcodes differ by 1)?
    if abs(jcc1 - jcc2) != 1:
        return False
    # do those 2 conditional jumps point to the same destination?
    dst1 = Dword(addr+2)
    dst2 = Dword(addr+8)
    if dst1 - dst2 != 6:
        return False
    return True
```
Deobfuscation of short jumps is based on the same idea, only the constants are different.
The opcode of a short conditional jump equals 0x7?, and is followed by one byte – the jump offset. So again, when we want to detect two consecutive, conditional near jumps, we have to look for opcodes: 0x7?; offset; 0x7? ± 1; offset -2.
The first opcode is followed by one byte, which differs by 2 in two consequent jumps (which is, again, the size of both instructions). (Figure 2)
For example, this code can be used to detect two conditional short jumps:
```python
def is_jcc8(b):
    # short conditional jump opcodes are 0x70..0x7F
    return (b & 0xF0) == 0x70

def is_jump_short_pair(addr):
    jcc1 = Byte(addr)
    jcc2 = Byte(addr+2)
    if not is_jcc8(jcc1) or not is_jcc8(jcc2):
        return False
    # are the conditional jumps complementary?
    if abs(jcc2 - jcc1) != 1:
        return False
    # do both short jumps point to the same destination?
    dst1 = Byte(addr+1)
    dst2 = Byte(addr+3)
    if dst1 - dst2 != 2:
        return False
    return True
```
After detecting one of these conditional jump pairs, we deobfuscate the code by patching the first conditional jump to an unconditional one (using the 0xE9 opcode for near jump pairs and 0xEB for short jump pairs) and patching the rest of the bytes with NOP instructions (0x90):
```python
def patch_jcc32(addr):
    # first byte becomes a NOP, second byte a near JMP (0xE9);
    # the original 32-bit offset that follows stays valid
    PatchByte(addr, 0x90)
    PatchByte(addr+1, 0xE9)
    # NOP out the start of the second (now dead) conditional jump
    PatchWord(addr+6, 0x9090)
```

```python
def patch_jcc8(addr):
    # the short conditional jump becomes a short unconditional JMP (0xEB)
    PatchByte(addr, 0xEB)
    # NOP out the second (now dead) short jump
    PatchWord(addr+2, 0x9090)
```
In addition to these two cases, there might be some places where a jump pair consists of a short and a near jump, rather than two jumps of the same category. However, this only occurs in a few cases in the FinFisher samples and can be fixed manually.
With these patches made, IDA Pro starts to “understand” the new code and is ready (or at least almost ready) to create a graph. We may still need to make one more improvement: append tails, i.e. assign the node containing the destination of the jump to the same graph as the node containing the jump instruction. For this, we can use the IDA Python function `append_func_tail`.
The last step of overcoming the anti-disassembly tricks consists of fixing function definitions. It may still occur that the instruction after the jumps is `push ebp`, in which case IDA Pro (incorrectly) treats this as the beginning of a function and creates a new function definition. In that case, we have to remove the function definition, create the correct one and append tails again.
This is how we can get rid of FinFisher’s first layer of protection – anti-disassembly.
---
**Figure 2** Examples of instructions followed by two conditional short jumps every time
FINFISHER’S VIRTUAL MACHINE
After a successful deobfuscation of the first layer, we can see a clearer main function whose sole purpose is to launch a custom virtual machine and let it interpret the bytecode with the actual payload.
As opposed to a regular executable, an executable with a virtual machine inside uses a set of virtualized instructions, rather than directly using the instructions of the processor. Virtualized instructions are executed by a virtual processor, which has its own structure and does not translate the bytecode into a native machine code. This virtual processor as well as the bytecode (and virtual instructions) are defined by the programmer of the virtual machine. (Figure 3)
As mentioned in the introduction, a well-known example of a virtual machine is the Java Virtual Machine. But in this case, the virtual machine is inside the binary, so we are dealing with a virtual machine used for a protection against reverse engineering. There are well-known commercial virtual machine protectors, for example VMProtect or Code Virtualizer.
The FinFisher spyware was compiled from source code and the compiled binary was then protected with a virtual machine at the assembly level. The protection process includes translating instructions of the original binary into virtual instructions and then creating a new binary that contains the bytecode and the virtual CPU. Native instructions from the original binary are lost. The protected, virtualized sample must have the same behavior as a non-protected sample.
To analyze a binary protected with a virtual machine, one needs to:
1. Analyze the virtual CPU.
2. Write one’s own disassembler for this custom virtual CPU and parse the bytecode.
3. Optional step: compile the disassembled code into a binary file to get rid of the virtual machine.
The first two tasks are very time-consuming, and the first one can also get quite difficult. It includes analyzing every vm_handler and understanding how registers, memory access, calls, etc. are translated.
Figure 3 // Bytecode interpreted by the virtual CPU
Terms and definitions
There is no standard for naming particular parts of a virtual machine. Hence, we will define some terms which will be referenced throughout the whole paper.
- Virtual machine (vm) – custom, virtual CPU; contains parts like the vm_dispatcher, vm_start, vm_handlers
- vm_start – the initialization part; memory allocation and decryption routines are executed here
- Bytecode (also known as pcode) – virtual opcodes of vm_instructions with their arguments are stored here
- vm_dispatcher – fetches and decodes virtual opcode; is basically a preparation for the execution of one of the vm_handlers
- vm_handler – an implementation of a vm_instruction; executing one vm_handler means executing one vm_instruction
- Interpreter (also known as vm_loop) – vm_dispatcher + vm_handlers – the virtual CPU
- Virtual opcode – an analog of the native opcode
- vm_context (vm_structure) – an internal structure used by the interpreter
- vi_params – a structure in the vm_context structure; the virtual instruction parameters, used by the vm_handler; it includes the vm_opcode and arguments
When interpreting the bytecode, the virtual machine uses a virtual stack and a single virtual register.
- vm_stack – an analog of a native stack, which is used by the virtual machine
- vm_register – an analog of a native register, used by this virtual machine; further referenced as tmp_REG
- vm_instruction – an instruction defined by developers of vm; the body (the implementation) of the instruction is called its vm_handler
In the following sections, we will describe the parts of the virtual machine in more technical detail and explain how to analyze them.
A deobfuscated graph of the main malware function consists of three parts – an initialization part and two other parts which we have named vm_start and interpreter (vm_dispatcher + vm_handlers).
The initialization part specifies a unique identifier of what could be interpreted as a bytecode entry point, and pushes it on the stack. Then, it jumps to the vm_start part that is an initialization routine for the virtual machine itself. It decrypts the bytecode and passes control to the vm_dispatcher that loops over the virtual instructions of the bytecode and interprets them using the vm_handlers.
The vm_dispatcher starts with a pusha instruction and ends with a jmp dword ptr[eax+ecx*4] instruction (or similar), which is a jump to the relevant vm_handler.
Vm_start
The graph created after the deobfuscation of the first layer is seen in Figure 4. The vm_start part is not so important for the analysis of the interpreter. However, it can help us understand the whole implementation of the vm; how it uses and handles virtual flags, virtual stack, etc.
The second part – the vm_dispatcher with vm_handlers – is the crucial one.
The vm_start is called from almost every function, including the main function. The calling function always pushes a virtual instruction identifier and then it jumps to vm_start. Every virtual instruction has its own virtual identifier. In this example, the identifier of the virtual entry point, where the execution from the main function starts, is 0x21CD0554. (Figure 5)
In this part, there is a lot of code for preparing the vm_dispatcher – mainly preparing the bytecode and allocating memory for the virtual machine.
The most important parts of the code do the following:
1. Allocate 1MB with RWX permission for bytecode and a few more variables.
2. Allocate 0x10000 bytes RWX for local variables in the virtual machine for the current thread – the vm_stack.
3. Decrypt a piece of code using an XOR decryption routine. The decrypted code is an aPLib unpacking routine. The XOR decryption routine used in the sample is a slightly modified version of XOR dword, key routine. Actually, it skips the first of the six dwords and then XORs only the remaining 5 dwords with the key. Following is the algorithm for the routine (further referred to as XOR decryption_code):
```c
int array[6];
int key;
int i;

/* skip the first dword, XOR the remaining five with the key */
for (i = 1; i < 6; i++) {
    array[i] ^= key;
}
```
4. Call aPLib unpacking routine to unpack bytecode. After unpacking, virtual opcodes are still encrypted. (Figure 6)
Preparing virtual opcodes (step 1, 3 and 4) is done only once – at the beginning – and is skipped in subsequent executions of vm_start, when only instructions for proper handling of flags and registers are executed.
Figure 6 // All the code from the vm_start to the vm_dispatcher in grouped nodes named based on their purpose.
FINFISHER'S INTERPRETER
This part includes the vm_dispatcher with all the vm_handlers (34 in FinFisher samples) and is crucial for analyzing and/or devirtualizing the virtual machine. The interpreter executes the bytecode.

The instruction `jmp dword ptr [eax+ecx*4]` jumps to one of the 34 vm_handlers. Each vm_handler implements one virtual machine instruction. In order to know what every vm_handler does, we first need to understand the vm_context and the vm_dispatcher.
1. Creating an IDA graph

Before diving into it, creating a well-structured graph can really help with understanding the interpreter. We recommend splitting the graph into two parts – the vm_start and the vm_dispatcher, i.e. defining the beginning of a function at the vm_dispatcher's first instruction. What is still missing are the actual vm_handlers referenced by the vm_dispatcher. In order to connect these handlers with the graph of the vm_dispatcher, the following functions can be used:

- `AddCodeXref(addr_of_jmp_instr, vm_handler, XREF_USER|fl_JN)` – adds references from the last vm_dispatcher instruction to the beginnings of the vm_handlers
- `AppendFchunk` – appends the tails again
After appending every vm_handler to the dispatcher function, the resulting graph should look like the one in Figure 7.
2. Vm_dispatcher
This part is responsible for fetching and decoding the bytecode. It performs the following steps:
- Executes pusha and pushf instructions to prepare virtual registers and virtual flags for further execution of a virtual instruction
- Retrieves the base address of the image and the address of the vm_stack
- Reads 24 bytes of bytecode specifying the next vm_instruction and its arguments
- Decrypts the bytecode with the previously described XOR decryption routine
- Adds the image base to the bytecode argument in case the argument is a global variable
- Retrieves the virtual opcode (number 0-33) from the decrypted bytecode
- Jumps to the corresponding vm_handler which interprets the virtual opcode

Figure 7 // Graph of the vm_dispatcher with all 34 vm_handlers.
After the vm_handler for an instruction has executed, the same sequence of steps is repeated for the next one, starting from the vm_dispatcher's first instruction.
In the case of the vm_call handler, the control is passed to the vm_start part instead (except for instances when a non-virtualized function follows).
3. Vm_context
In this part, we will describe the vm_context – a structure used by the virtual machine, containing all the information necessary for executing the vm_dispatcher and each vm_handler.
When looking at the code of both the vm_dispatcher and the vm_handlers in greater detail, we can notice there are a lot of data operation instructions, referring to ebx+offset, where offset is a number from 0x00 to 0x50. In Figure 8, we can see what the main part of vm_handler 0x05 in one FinFisher sample looks like. (Figure 8)
Figure 8 // Screenshot of one of the vm_handlers
The ebx register points to a structure we named vm_context. We must understand how this structure is used – what the members are, what they mean, and how they are used. When solving this puzzle for the first time, a bit of guessing is needed as to how the vm_context and its members are used.
For example, let's have a look at the sequence of instructions at the end of the vm_dispatcher:

```assembly
movzx ecx, byte ptr [ebx+0x3C]   ; opcode for vm_handler
jmp dword ptr [eax+ecx*4]        ; jump to one of the 34 vm_handlers
```
Since we know that the last instruction is a jump to a vm_handler, we can conclude that ecx contains a virtual opcode and thus the 0x3C member of the vm_context refers to a virtual opcode number.

Let's make one more educated guess. At the end of almost every vm_handler, there is the following instruction:

```assembly
add dword ptr [ebx], 0x18
```

This same member of the vm_context was also used earlier in the vm_dispatcher's code – just before jumping to a vm_handler. The vm_dispatcher copies 24 bytes from the structure member to a different location ([ebx+38h]) and decrypts it with the XOR decryption routine to obtain a part of the actual bytecode.

Hence, we can start thinking of the first member of the vm_context ([ebx+0h]) as a vm_instruction_pointer, and of the decrypted location (from [ebx+38h] to [ebx+50h]) as the ID of a virtual instruction, its virtual opcode and arguments. Together, we will call this structure vi_params.

Following the steps described above, and using a debugger to see what values are stored in the respective structure members, we can figure out all the members of the vm_context.

After the analysis, we can rebuild both FinFisher's vm_context and vi_params structures:
```c
struct vm_context {
    DWORD vm_instruct_ptr;          // instruction pointer to the bytecode
    DWORD vm_stack;                 // address of the vm_stack
    DWORD tmp_REG;                  // used as a "register" by the virtual machine
    DWORD vm_dispatcher_loop;       // address of the vm_dispatcher
    DWORD clean_And_VM_Dispatch_Fn; // address of the function which pops values and jumps
                                    // to the vm_dispatcher, skipping its first few instructions
    DWORD clean_Up_Dynamic_Code_Fn; // address of the function which cleans vm_instr_ptr and
                                    // calls clean_And_VM_Dispatch_Fn
    DWORD jmp_Loc1;                 // address of jump location
    DWORD jmp_Loc2;                 // address of next vm_opcode - just executing next vm_instruction
    DWORD Bytecode_start;           // address of the start of the bytecode in the data section
    DWORD Dispatch_EBP;
    DWORD Image_Base;               // image base address
    DWORD ESP0_flags;               // top of the native stack (the saved flags of the vm_code)
    DWORD ESP1_flags;               // same as previous
    DWORD Load_OPcodes_Section_Fn;
    vi_params bytecode;             // everything necessary for executing a vm_handler, see below
    DWORD limit_For_Top_Of_Stack;   // top limit for the stack
};
```
```c
struct vi_params {
    DWORD Virtual_instr_id;
    DWORD OpCode;  // values 0-33 -> tells which handler to execute
    DWORD Arg0;    // 4 dword arguments for the vm_handler
    DWORD Arg4;    // sometimes unused
    DWORD Arg8;    // sometimes unused
    DWORD ArgC;    // sometimes unused
};
```
4. Virtual instruction implementations – vm_handlers
Each vm_handler handles one virtual opcode – since we have 34 vm_handlers, there are at most 34 virtual opcodes. Executing one vm_handler means executing one vm_instruction, so in order to reveal what a vm_instruction does, we need to analyze the corresponding vm_handler.
After reconstructing the vm_context and naming all the offsets from ebx, the previously shown vm_handler changes to a much more readable form, as seen in Figure 9.
At the end of this function, we notice a sequence of instructions, starting with the vm_instruction_pointer, being incremented by 24 – the size of each vm_instruction’s vi_params structure. Since this sequence is repeated at the end of almost every vm_handler, we conclude it is a standard function epilogue and the actual body of the vm_handler can be written as simply as:
```assembly
mov [tmp_REG], Arg0
```
So, there we go – we have just analyzed the first instruction of the virtual machine. :-)
Figure 9 // The previous vm_handler after inserting the vm_context structure
To illustrate how the analyzed instruction works when executed, let’s consider the vi_params structure filled as follows:
```c
struct vi_params vi = {
    .Virtual_instr_id = 0,   /* value doesn't matter here */
    .OpCode = 0x0C,
    .Arg0   = 0x42,
    .Arg4   = 0,
    .Arg8   = 0,
    .ArgC   = 0,
};
```
From what was stated above, we can see that the following instruction will be executed:
```assembly
mov [tmp_REG], 0x42
```
At this point, we should understand what one of the vm_instructions does. The steps we followed should serve as a decent demonstration of how the entire interpreter works.
However, there are some vm_handlers that are harder to analyze. This vm’s conditional jumps are tricky to understand because of the way they translate flags.
As mentioned before, the vm_dispatcher starts with pushing native EFLAGS (of vm_code) to the top of the native stack. Therefore, when the handler for a respective jump is deciding whether to jump or not, it looks at EFLAGS at the native stack and implements its own jump method. Figure 10 illustrates how the virtual JNP handler is implemented by checking the parity flag. (Figure 10)
For other virtual conditional jumps, it may be necessary to check several flags – for example, the jump result of the virtualized JBE depends on the values of both CF and ZF – but the principle stays the same.
After analyzing all 34 vm_handlers in FinFisher’s virtual machine, we can describe its virtual instructions as follows:
```
.text:00402ABA VM_table dd offset case_0_JL_loc1
.text:00402ABE          dd offset case_1_JNP_loc1
.text:00402AC2          dd offset case_2_JLE_loc1
.text:00402AC6          dd offset case_3_vm_jcc
.text:00402ACA          dd offset case_4_exec_native_code ; same as case 6
.text:00402ACE          dd offset case_5_mov_tmpREGRef_Arg0 ; mov [tmpREG], Arg0
.text:00402AD2          dd offset case_6_exec_native_code
.text:00402AD6          dd offset case_7_JZ_loc1
.text:00402ADA          dd offset case_8_JG_loc1
.text:00402ADE          dd offset case_9_mov_tmpREG_Arg0 ; mov tmpREG, Arg0
.text:00402AE2          dd offset case_A_zero_tmpREG ; mov tmpREG, 0
.text:00402AE6          dd offset case_B_JS_loc1
.text:00402AEA          dd offset case_C_mov_tmpREGDeref_reg ; mov [tmpREG], reg
.text:00402AEE          dd offset case_D_mov_tmpREG_reg
.text:00402AF2          dd offset case_E_JB_loc1
.text:00402AFA          dd offset case_10_JNZ_loc1
.text:00402AFE          dd offset case_11_JNO_loc1
.text:00402B02          dd offset case_12_vm_call
.text:00402B06          dd offset case_13_mov_tmpREG_reg ; mov tmpREG, reg
.text:00402B0A          dd offset case_14_JP_loc1
.text:00402B0E          dd offset case_15_mov_reg_tmpREG ; mov reg, tmpREG
.text:00402B12          dd offset case_16_JO_loc1
.text:00402B16          dd offset case_17_JGE_loc1
.text:00402B1A          dd offset case_18_deref_tmpREG ; mov tmpREG, [tmpREG]
.text:00402B1E          dd offset case_19_shl_tmpREG_Arg0 ; shl tmpREG, (byte)Arg0
.text:00402B22          dd offset case_1A_JNS_loc1
.text:00402B26          dd offset case_1B_JNB_loc1
.text:00402B2A          dd offset case_1C_push_tmpREG ; push tmpREG
.text:00402B2E          dd offset case_1D_JA_loc1
.text:00402B32          dd offset case_1E_add_tmpREG_arg0 ; add tmpREG, reg
.text:00402B36          dd offset case_1F_vm_jmp
.text:00402B3A          dd offset case_20_add_tmpREG_arg0
.text:00402B3E          dd offset case_21_mov_tmpREG_to_Arg0Deref ; mov [Arg0], tmpREG
```
Figure 11 // vm_table with all 34 vm_handlers accessed
Please note that the keyword “tmp_REG” refers to a virtual register used by the virtual machine – temporary register in the vm_context structure, while “reg” refers to a native register, e.g. eax.
Let’s have a look at the analyzed instructions of the virtual machine. For example, case_3_vm_jcc is a general jump handler that can execute any native jump, either conditional or unconditional.
Apparently, this virtual machine does not virtualize every native instruction – that’s where instructions in cases 4 and 6 come in handy.
These two vm_handlers are implemented to execute native code directly – all they do is to read the opcode of a native instruction given as an argument and execute the instruction.
One more thing to note is that the vm_registers are always at the top of the native stack, while the identifier of the register to be used is stored in the last byte of arg0 of the virtual instruction.
The following code can be used to access the respective virtual register:
5. Writing your own disassembler
After we have correctly analyzed all the vm_instructions, there is still one step to be done before we can start the analysis of the sample – we need to write our own disassembler for the bytecode (parsing it manually would be problematic due to its size).

By putting in the effort and writing a more robust disassembler, we can save ourselves some trouble when FinFisher's virtual machine is changed and updated.
Let's start with the vm_handler 0x0C, which executes the following instruction:

```assembly
mov [tmp_REG], reg
```
This instruction takes exactly one argument – the identifier of the native register to be used as reg. This identifier must be mapped to a native register name, for instance using the resolve_reg function as described above.

The following code can be used to disassemble this vm_handler:
```python
def resolve_reg(reg_pos):
    stack_regs = ['eax', 'ecx', 'edx', 'ebx', 'esp', 'ebp', 'esi', 'edi']
    stack_regs.reverse()
    return stack_regs[reg_pos]

reg_pos = 7 - (state[arg0] & 0x000000FF)
reg = resolve_reg(reg_pos)
```
Again, the vm_handlers for jumps are harder to understand. In the case of jumps, the members vm_context.vi_params.Arg0 and vm_context.vi_params.Arg1 store the offset by which to jump. This "jump offset" is actually an offset in the bytecode. When parsing jumps, we need to put a marker at the location to which the jump leads. For example, this code can be used:

```python
def computeLoc1(pos, vi_params):
    global instr
    jmp_offset = (vi_params[arg0] & 0x00FFFFFF) + (vi_params[arg1] & 0xFF000000)
    if jmp_offset < 0x7FFFFFFF:
        jmp_offset //= 0x18  # their increment by 0x18 is my increment by 1
    else:
        jmp_offset = int((-0x100000000 + jmp_offset) / 0x18)
    return pos + jmp_offset
```
Finally, there is a vm_handler responsible for executing native instructions from its arguments, which needs special treatment. For this, we have to use a disassembler for native x86 instructions – for example, the open source library diStorm.

The length of the instruction is stored in vm_context.vi_params.OpCode & 0x0000FF00. The opcode of the native instruction that will be executed is stored in the arguments. The following code can be used to parse the vm_handler that executes native code:
For example, from the part of the bytecode shown in Figure 12, we may get the following output:
```
mov tmp_REG, 0
add tmp_REG, EBP
add tmp_REG, 0x10
mov tmp_REG, [tmp_REG]
push tmp_REG
mov tmp_REG, EAX
push tmp_REG
```
6. Understanding the implementation of this virtual machine
After we have analyzed all the virtual handlers and constructed a custom disassembler, we can have one more look at the virtual instructions to get an overall idea of how they were created.
First, we must understand that the virtualization protection was implemented at the assembly level. The authors translated native instructions into their own, somewhat complicated instructions, which are to be executed by a custom virtual CPU. To achieve this, a temporary “register” (tmp_REG) is used.
We can look at some examples to see how this translation works. For example, the virtual instruction from the previous example –
```assembly
mov tmp_REG, EAX
push tmp_REG
```
– was translated from the original native instruction `push eax`. When virtualized, a temporary register was used in an intermediate step to turn the instruction into something more complicated.
Let’s consider another example:
```assembly
mov tmp_REG, 0
add tmp_REG, EBP
add tmp_REG, 0x10
mov tmp_REG, [tmp_REG]
push tmp_REG
```
The native instructions that were translated into these virtualized instructions were the following (with reg being one of the native registers):
```assembly
mov reg, [ebp+0x10]
push reg
```
This is, however, not the only way to virtualize a set of instructions. There are other virtual machine protectors with other approaches. For instance, one of the commercial vm protectors translates each math operation instruction into NOR logic, with a number of temporary registers being used instead of one.
Conversely, FinFisher’s virtual machine did not go as far as to cover all the native instructions. While many of them can be virtualized, some can’t – math instructions, such as `add`, `imul` and `div`, being some examples. If these instructions appear in the original binary, the `vm_handler` responsible for executing native instructions is called to handle them in the protected binary. The only change is that EFLAGS and native registers are popped from the native stack just before the native instruction is executed, and pushed back after it is executed. This is how the virtualization of every native instruction was avoided.
A significant drawback of protecting binaries with a virtual machine is the performance impact. In the case of FinFisher’s virtual machine, we estimate it to be more than one hundred times slower than native code, based on the number of instructions that have to be executed to handle every single `vm_instruction` (`vm_dispatcher + vm_handler`).
Therefore, it makes sense to protect only selected parts of the binary – and this is also the case in the FinFisher samples we analyzed.
Moreover, as mentioned before, some of the virtual machine handlers can call native functions directly. As a result, the users of the virtual machine protection (i.e. the authors of FinFisher) can look at the functions at the assembly level and mark which of them are to be protected by the virtual machine. For those that are marked, their instructions will be virtualized, for those that are not, the original functions will be called by the respective virtual handler. Thus, the execution might be less time-consuming while the most interesting parts of the binary stay protected. (Figure 13)
7. Automating the disassembly process for more FinFisher samples
In addition to the length of the bytecode our parser has to process, we have to keep in mind that there is some randomization across various FinFisher samples. Although the same virtual machine has been used for the protection, the mapping between the virtual opcodes and the `vm_handler` is not always constant. They can be (and are) paired randomly and differently for each of the FinFisher samples we analyzed. It means that if the `vm_handler` for the 0x5 virtual opcode in this sample handles the `mov [tmp_REG], arg0` instruction, it may be assigned a different virtual opcode in another protected sample.
To address this issue, we can use a signature for each of the analyzed vm_handlers. The IDA Python script in Appendix A can be applied after we have generated a graph as shown in Figure 7 (it is particularly important to have the jz/jnz jump obfuscation eliminated, as described in the first section of this guide) to name the handlers based on their signatures. (With a small modification, the script can also be used to recreate the signatures in case the vm_handlers are changed in a future FinFisher update.)
As mentioned above, the first vm_handler in the FinFisher sample you analyze may be different from JL, as in the example FinFisher sample, but the script will identify all of the vm_handlers correctly.
8. Compiling disassembled code without the VM
After disassembly and after a few modifications, it is possible to compile the code. We will treat virtual instructions as native instructions. As a result, we will get a pure binary without the protection.
Most of the vm_instructions can be compiled immediately using copy-paste, since the output of our disassembler mostly consists of native-looking instructions. But some cases need special treatment:
- `tmp_REG` – since we defined `tmp_REG` as a global variable, we need to adjust the code wherever the address stored in it is dereferenced, because the x86 instruction set has no memory-indirect addressing: an instruction cannot dereference an address that itself resides in a memory operand. For example, the VM contains the virtual instruction `mov tmp_REG, [tmp_REG]`, which needs to be rewritten as follows:
```
push eax                ; preserve a scratch register
mov eax, tmp_REG        ; load the address held in the global variable
mov eax, [eax]          ; dereference it
mov tmp_REG, eax        ; store the result back
pop eax                 ; restore the register
```
- Flags – virtual instructions do not change the flags, but native math instructions do. Therefore, we need to make sure that virtual math instructions won’t change flags in the devirtualized binary either, which means saving EFLAGS (e.g. with `pushfd`) before executing such an instruction and restoring them (`popfd`) after it executes.
- Jumps and calls – we have to put a marker at the destination virtual instruction (for jumps) or function (for calls).
- API function calls – in most cases API functions are loaded dynamically, while in others they are referenced from the IAT of the binary; both cases need to be handled accordingly.
- Global variables, native code – some global variables need to be kept in the devirtualized binary. Also, the FinFisher dropper contains a function for switching from x86 to x64 that is executed natively (actually, it is done only with the `retf` instruction). All of this must be kept in the code when compiling.
Depending on the output of your disassembler, you may still need to make a few more modifications to get pure native instructions that can be compiled. Then you can compile the code with your favorite assembler into a binary without the VM.
CONCLUSION
In this guide, we have described how FinFisher uses two elaborate techniques to protect its main payload. The primary intention of this protection is not to avoid AV detection, but to cover the configuration files and new techniques implemented in the spyware by hindering analysis by reverse engineers. As no other detailed analysis of the obfuscated FinFisher spyware has been published to date, it seems the developers of these protection mechanisms have been successful.
We have shown how we can overcome the anti-disassembly layer automatically, and how the virtual machine can be efficiently analyzed.
We hope this guide can help reverse engineers analyze vm-protected FinFisher samples, as well as to better understand other virtual machine protectors in general.
Appendix A
IDA Python script for naming FinFisher vm_handlers
The script is also available on ESET's Github repository:
https://github.com/eset/malware-research/blob/master/finfisher/ida_finfisher_vm.py
```python
import sys

# Signature -> vm_handler name. In this listing every signature prints as
# the same truncated value; the per-handler signature table is in the
# repository version linked above.
SIGS = {
    '8db40b84b32c80b9400007df631c': 'case_0_JL_loc1',
    '8db40b84b32c80b9400007df631c': 'case_1_JLP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_2_JLP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_3_JLP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_5_mov_tmp_REGref_arg0',
    '8db40b84b32c80b9400007df631c': 'case_5_mov_tmp_REGref_arg1',
    '8db40b84b32c80b9400007df631c': 'case_6_exec_native_code',
    '8db40b84b32c80b9400007df631c': 'case_7_JL_loc1',
    '8db40b84b32c80b9400007df631c': 'case_8_JL_loc1',
    '8db40b84b32c80b9400007df631c': 'case_9_mov_tmp_REG_arg0',
    '8db40b84b32c80b9400007df631c': 'case_10_JP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_11_JN_loc1',
    '8db40b84b32c80b9400007df631c': 'case_12_JM_loc1',
    '8db40b84b32c80b9400007df631c': 'case_13_mov_tmp_REG_mp_notRly',
    '8db40b84b32c80b9400007df631c': 'case_14_JP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_15_mov_tmp_REG_reg',
    '8db40b84b32c80b9400007df631c': 'case_16_JP_loc1',
    '8db40b84b32c80b9400007df631c': 'case_17_JMP_reg',
    '8db40b84b32c80b9400007df631c': 'case_18_deref_tmp_REG',
    '8db40b84b32c80b9400007df631c': 'case_19_shl_tmp_REG_arg0',
    '8db40b84b32c80b9400007df631c': 'case_1A_JNS_loc1',
    '8db40b84b32c80b9400007df631c': 'case_1B_JN_loc1',
    '8db40b84b32c80b9400007df631c': 'case_1C_push_tmp_REG',
    '8db40b84b32c80b9400007df631c': 'case_1D_JA_loc1',
    '8db40b84b32c80b9400007df631c': 'case_1E_add_stack_val_to_tmp_REG',
    '8db40b84b32c80b9400007df631c': 'case_1F_vm_jmp',
    '8db40b84b32c80b9400007df631c': 'case_20_add_arg0_to_tmp_REG',
    '8db40b84b32c80b9400007df631c': 'case_21_mov_tmp_REG_to_arg0_Dereferenced',
}

SWITCH = 0          # addr of the jmp dword ptr [eax+ecx*4] (jump to vm_handlers)
SWITCH_SIZE = 0x22  # number of vm_handler entries in the switch table (case_0 - case_21)

sig = []

def append_bytes(instr, addr):
    for j in range(instr.size):
        sig.append(Byte(addr))
        addr += 1
    return addr

def makeSigName(sig_name, vm_handler):
    print "naming %x as %s" % (vm_handler, sig_name)
    MakeName(vm_handler, sig_name)
    return

if SWITCH == 0:
    print "First specify address of switch jump - jump to vm_handlers!"
    sys.exit(1)

for i in range(SWITCH_SIZE):
    addr = Dword(SWITCH + i*4)
    faddr = addr
    sig = []
    while 1:
        instr = DecodeInstruction(addr)
        # follow unconditional short/near jumps without recording them
        if instr.get_canon_mnem() == "jmp" and (Byte(addr) == 0xeb or Byte(addr) == 0xe9):
            addr = instr.Op1.addr
            continue
        # jmp dword ptr [ebx+18h]/[ebx+1Ch] - the jump back to the
        # vm_dispatcher ends the handler
        if instr.get_canon_mnem() == "jmp" and Byte(addr) == 0xff and Byte(addr+1) == 0x63 and (Byte(addr+2) == 0x18 or Byte(addr+2) == 0x1C):
            addr = append_bytes(instr, addr)
            break
        if instr.get_canon_mnem() == "jmp":
            break
        # for jz/jnz keep only the opcode byte, so the varying jump
        # targets do not pollute the signature
        if instr.get_canon_mnem() == "jz":
            sig.append(Byte(addr))
            addr += instr.size
            continue
        if instr.get_canon_mnem() == "jnz":
            sig.append(Byte(addr))
            addr += instr.size
            continue
        if instr.get_canon_mnem() == "nop":
            addr += 1
            continue
        addr = append_bytes(instr, addr)
    sig_str = "".join([hex(l)[2:] for l in sig])
    hsig = "".join(map(chr, sig)).encode("hex")  # printable form, handy for recreating SIGS
    for key, value in SIGS.iteritems():
        if len(key) > len(sig_str):
            if key.find(sig_str) >= 0:
                makeSigName(value, faddr)
        else:
            if sig_str.find(key) >= 0:
                makeSigName(value, faddr)
```
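To apply the script, load the dumped virtual machine into IDA, eliminate the jz/jnz jump obfuscation as described earlier so that the handler signatures are stable, set SWITCH to the address of the `jmp dword ptr [eax+ecx*4]` dispatch jump (and adjust SWITCH_SIZE if the sample uses a different number of handlers), then run the script from the IDA Python console. Every `vm_handler` reachable from the switch table is then renamed after the signature it matches.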
Multitask Pretraining with Structured Knowledge for Text-to-SQL Generation
Robert Giaquinto, Dejiao Zhang, Benjamin Kleiner, Yang Li
Ming Tan, Parminder Bhatia, Ramesh Nallapati, Xiaofei Ma
AWS AI Labs
{rgiaq,dejiaoz,kleinerb,ylizam,
mingtan,parmib,rnallapa,xiaofeim}@amazon.com
Abstract
Many machine learning-based low-code or no-code applications involve generating code that interacts with structured knowledge. For example, one of the most studied tasks in this area is generating SQL code from a natural language statement. Prior work shows that incorporating context information from the database schema, such as table and column names, is beneficial to model performance on this task. In this work we present a large pretraining dataset and strategy for learning representations of text, tables, and SQL code that leverages the entire context of the problem. Specifically, we build on existing encoder-decoder architecture by introducing a multitask pretraining framework that complements the unique attributes of our diverse pretraining data. Our work represents the first study on large-scale pretraining of encoder-decoder models for interacting with structured knowledge, and offers a new state-of-the-art foundation model in text-to-SQL generation.
We validate our approach with experiments on two SQL tasks, showing improvements over existing methods, including gains of 1.7 and 2.2 percentage points over the prior state of the art on Spider and CoSQL.
1 Introduction
Tables, relational databases, and other forms of structured knowledge (SK) encompass a massive amount of data across a wide range of applications. Extracting the insights held in such data often requires proficiency in query languages like SQL, making it accessible only to the minority of people with the technical skills. A natural language interface, however, would expand access to this information dramatically. Likewise, querying via natural language lets users quickly home in on an answer to their particular question, rather than visually scanning dense tables where the majority of the information is irrelevant to them. To that end, we explore pretraining techniques for large language models that focus on the challenging interplay between structured and unstructured knowledge, and target a variety of downstream text-to-SQL tasks.
Recently there have been significant advancements in learning representations for tables (Yin et al., 2020; Herzig et al., 2020; Eisenschlos et al., 2020; Liu et al., 2020; Liu et al., 2022; Wang et al., 2021c; Yu et al., 2021; Cheng et al., 2022; Dong et al., 2022), which advanced the state of the art in a range of table-to-text tasks, like table question-answering (Nan et al., 2022; Chen et al., 2021), fact verification (Chen et al., 2020; Aly et al., 2021), data-to-text (Parikh et al., 2020; Nan et al., 2021), and semantic parsing (Yu et al., 2019b; Zhong et al., 2017). While better table understanding benefits a range of tasks, pretraining focused on text-to-SQL has thus far received less attention. Pretrained encoders, such as TaBERT and TAPAS (Yu et al., 2021; Yin et al., 2020; Herzig et al., 2020), show that pretraining BERT-style encoders (Devlin et al., 2019) on tables with a masked language modeling (MLM) loss produces a strong foundation model that can be extended for text-to-SQL. GRAPPA includes a small amount of synthetic SQL code in the pretraining data to more specifically target the text-to-SQL task (Yu et al., 2021). These encoder-only approaches are, however, restricted in their generative capabilities, as they must be combined with an additional module that is carefully designed to generate valid SQL code (Zhong et al., 2017; Wang et al., 2021a).
Encoder-decoder architectures like T5 (Raffel et al., 2020), on the other hand, exhibit better performance on text-to-SQL to date when constraining the decoder with rules that check for syntactic correctness (Scholak et al., 2021). However, the T5-based models with exceptional text-to-SQL performance (Xie et al., 2022; Scholak et al., 2021) have still only been pretrained on natural language (NL), begging the question: can text-to-SQL encoder-decoders benefit from pretraining on structured information or code? Most recently, Andrejczuk et al. (2022) proposed a multi-task tabular pretraining strategy for the T5 model, but their work introduced tabular knowledge to the model with a single data source, i.e. Wikipedia tables.
In this work we introduce our SQL and Table Aligned Multi-task Pretraining (STAMP) framework, which explores pretraining encoder-decoder models for text-to-SQL. Starting from text-only T5 (Raffel et al., 2020) checkpoints, our multi-stage pretraining framework refines previous text-only models by continuing training on a collection of large multi-modal datasets that combine structured knowledge with natural language and SQL. Additionally, inspired by the impressive generalization of large language models incorporating code in pretraining data (Athiwaratkun et al., 2022; Brown et al., 2020; Chowdhery et al., 2022; Du et al., 2022; Thoppilan et al., 2022), we apply our pretraining framework to CodeT5 (Wang et al., 2021b) checkpoints that are trained on code.
Building on recent work in multi-task pretraining (Tay et al., 2022; Aghajanyan et al., 2021; Sanh et al., 2022; Aribandi et al., 2021), we combine masked language modeling (MLM) with task-aware context-to-output objectives that vary across tasks and datasets. For pretraining datasets with multiple modalities (i.e. combinations of NL, SQL, and structured knowledge) or intrinsic splits (e.g. question and answer), we explore the benefit of the dual learning objectives (Wang et al., 2021b). We assess our pretraining strategy on a variety of SQL benchmarks following the UnifiedSKG framework (Xie et al., 2022). Our approach outperforms previous text- and code-only pretraining, and gives a new state-of-the-art on a range of benchmarks. To better understand our strategy, we present ablation studies on the optimal objective mix, the impact of linearizing structured knowledge into row- versus column-centric tables, and the effect of building on previously pretrained text- versus code-only checkpoints. Our work shows that continued pretraining with multi-task learning is a promising direction for advancing the capacity of language models.
2 Related Work
Encoder-only Encoder-only transformer architectures like BERT and its successors (Devlin et al., 2019; Liu et al., 2019; Joshi et al., 2020; Reimers and Gurevych, 2019; Clark et al., 2020) optimize masked language modeling (MLM) objectives while using a bidirectional receptive field covering the whole input sequence. The encoder-only architectures perform well across a variety of tasks like classification, regression, sentiment analysis, question-answering, and retrieval. However, recent work (Herzig et al., 2020; Yin et al., 2020; Yu et al., 2021) shows that tasks like table-to-text and text-to-SQL require additional pretraining on structured knowledge for good generalization, and adapting MLM objectives to the unique structure of tabular data improves learning.
Prior to BERT, text-to-SQL models like SQL-Net and Seq2SQL (Zhong et al., 2017; Xu et al., 2017) encoded inputs with bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) and generated queries via slot-filling. Text-to-SQL performance improved with the adoption of BERT-based encoders, for example (Yu et al., 2021; Wang et al., 2021a) attach feed forward networks and LSTMs to the BERT-style encoder to generate queries. Because encoder-only architectures are restricted in their ability to generate sequences, they require careful design to generate valid SQL queries and limit the complexity of those queries.
Encoder-Decoder Alternatively, encoder-decoders like BART (Lewis et al., 2019) and T5 (Raffel et al., 2020) combine a bidirectional encoder with a causal decoder and are naturally suited for sequence-to-sequence tasks like text-to-SQL; they are quickly becoming the mainstream approach due to the reduced need for domain-specific solutions (Qin et al., 2022). T5 (Raffel et al., 2020) in particular achieves impressive performance on a range of table-to-text and text-to-SQL tasks (Xie et al., 2022) despite pretraining that is limited to NL. Moreover, Shi et al. (2020) and Liu et al. (2022) leverage a BART-style encoder-decoder to improve the performance of pretrained models for text-to-SQL and table-to-text tasks, respectively. We follow this line, proposing a strategy that builds on top of T5 and CodeT5 (Wang et al., 2021b).
Multi-Task Training Raffel et al. (2020) explore various self-supervised objectives, and found the fill-in-the-blank style of denoising objective most effective. Additionally, combining MLM objectives with small amounts of auxiliary objectives is effective (Liu et al., 2019; Aroca-Ouellette and Rudzicz, 2020). For encoder-decoder models, Tay et al. (2022); Wang et al. (2021b) show the benefit of multi-task pretraining on a mix of the T5 span corruption objective (Raffel et al., 2020) along with the causal language modeling (CLM) style of objective, similar to those used in decoder-only architectures (Brown et al., 2020). In the domain of text-to-SQL, Yu et al. (2021); Tao Yu et al. (2021) perform multitask learning by combining MLM with SQL-specific objectives. Lastly, Xie et al. (2022); Aghajanyan et al. (2021); Aribandi et al. (2021); Sanh et al. (2022); FitzGerald et al. (2022); Chen et al. (2022) demonstrate that multi-task learning across a variety of datasets can improve performance relative to the single-task, single-dataset paradigm. Wang et al. (2021b) show that an objective mix specific to programming languages (PL) along with dual learning on bimodal data promotes generation on tasks combining PL and NL.
3 Multi-Task Pretraining on Structured Knowledge
Our SQL and Table Aligned Multi-task Pretraining (STAMP) model builds on the T5 encoder-decoder architecture and pretraining checkpoints (Raffel et al., 2020); similarly, our CodeSTAMP models build on the CodeT5 architecture and checkpoints (Wang et al., 2021b). We develop a multi-task pretraining framework specifically designed to leverage our large and unique collection of data that combine various data modalities, namely natural language (NL), structured knowledge (SK), and SQL. STAMP introduces a new stage of pretraining that transitions T5 from a purely NL- or programming language (PL)-trained model into a backbone model that excels at text-to-SQL generation.
Next, we present the construction of our pretraining dataset in Section 3.1, the mixture of objectives designed to learn the unique structure of our data and align the NL, SK, SQL data modalities in Section 3.2, and our unified format for representing tasks and structured knowledge in Section 3.3.
3.1 Datasets and Pre-Processing
Our pretraining dataset consists of 18 million examples, with various combinations of NL, SQL code, and structured knowledge (see Figure 2). Our data is derived from diverse sources, and we propose different strategies to remove low-quality and noisy data from each data source. We tokenize the raw data using the corresponding T5 and CodeT5 tokenizers, which we augment to support new special tokens for representing input data modality, output tasks, and table structures. We process all data into sequences of up to 1024 tokens. More details on pre-processing are in Appendix A.
**Table Data** Approximately half of our pretraining data \( (N = 10,136,268) \) combine tables with NL. These table datasets derive from Wikipedia, WDC’s Web Table Corpora, and arXiv. Pretraining on table datasets acts as a bridge from the previous text-only pretraining, while promoting alignment between NL and structured knowledge. In initial experiments we pretrained on all available table and NL pairs. However, after closer examination we discovered that a significant portion of these examples exhibited minimal connection between the table and the NL, and hence are unlikely to promote the desired alignment. Therefore, we choose to focus on high-quality examples and remove approximately 75% of the examples in which there is a tenuous or no connection between the table and the paired NL. To identify noisy examples we compute an edit similarity between the NL and the content of the table, and then drop examples whose similarity falls below a threshold. Likewise, to reduce noise within each example we truncate tables, keeping at most the 6 rows and 25 columns with the highest edit similarity between table and NL.
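As a concrete illustration, here is a minimal sketch of this filtering step. The paper does not name its edit-similarity implementation, so `difflib`'s ratio (scaled to 0-100 to match the threshold of 50.0 quoted in Appendix A.2) stands in for it, and the function names are ours:

```python
from difflib import SequenceMatcher

def edit_similarity(nl, table_text):
    # Normalized similarity in [0, 100]; difflib is a stand-in for the
    # authors' (unspecified) edit-distance routine.
    return 100.0 * SequenceMatcher(None, nl, table_text).ratio()

def filter_table_examples(examples, threshold=50.0):
    # examples: iterable of (nl_statement, linearized_table) pairs;
    # keep only pairs whose table content actually relates to the NL.
    return [
        (nl, table) for nl, table in examples
        if edit_similarity(nl, table) >= threshold
    ]
```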
**SQL Data** The remainder of our pretraining data incorporate SQL. Approximately 10% of the examples \( (N = 1,918,468) \) are SQL code from GitHub repositories with permissive licenses. SQL code from GitHub includes only a small amount of NL in code comments, and some structured knowledge in the database schema definitions. We filter these data to remove duplicates and repetitive statements.
Approximately 25% of the examples \( (N = 4,479,767) \) are from SQL-related posts on Stack Overflow. These data combine NL questions and answers with snippets of SQL code, thereby bridging the NL knowledge learned during the prior text-only pretraining into domain-specific language, and aligning SQL with NL. We perform augmentations to increase the number of question-answer pairs and leverage hidden human supervision \(^1\) in the data. We first create five augmented versions of each question using random word deletion, random word appending, synonym replacement, and paraphrasing. We then create up to six versions of each original example by pairing combinations of answers with augmented versions of the questions.
Lastly, approximately 11% of the examples \( (N = 2,005,456) \) in our data derive from TAPEx (Liu et al., 2022), a dataset consisting of SQL generated from templates along with their corresponding execution result. To improve the quality and better align these data with downstream tasks we perform the following modifications. First, we remove 2.3 million duplicates (of the original 5 million examples), add a FROM clause to the SQL code with a fictitious table name using a random combination of 1-3 column names, and filter out any examples that could not be parsed by mo-sql-parsing\(^2\). Next, we train a SQL-to-Text model (T5-3B) on the Spider (Yu et al., 2019b) dataset in order to generate natural language statements for each SQL query.
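A sketch of this modification step, assuming the open-source mo-sql-parsing package named in footnote 2; the `fictitious_name` helper and the clause-insertion logic are our simplification:

```python
import random
from mo_sql_parsing import parse  # pip install mo-sql-parsing

def fictitious_name(columns):
    # Hypothetical helper: build a fake table name from a random
    # combination of 1-3 column names, as described in the text.
    k = random.randint(1, min(3, len(columns)))
    return "_".join(random.sample(columns, k))

def add_from_and_validate(sql, columns):
    # TAPEX queries lack a FROM clause; splice one in before any
    # trailing clauses, then keep the example only if it still parses.
    table = fictitious_name(columns)
    lowered = sql.lower()
    idx = len(sql)
    for kw in (" where ", " group by ", " order by "):
        pos = lowered.find(kw)
        if pos >= 0:
            idx = min(idx, pos)
    candidate = sql[:idx] + " FROM " + table + sql[idx:]
    try:
        parse(candidate)
    except Exception:
        return None  # unparseable: drop the example
    return candidate
```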
3.2 Objectives for Multi-Task Pretraining
**MLM-Based Objectives** A critical component in pretraining encoder-decoder models is an MLM-based objective. In STAMP we follow the span corruption style of MLM from Raffel et al. (2020), which involves replacing contiguous whole words from the text with sentinel tokens in the inputs; the decoder then generates the replaced text, each span preceded by its corresponding sentinel token. We set the mean span length to 3, with a denoising rate of 15%, following the default T5 configuration. This span corruption objective is applied to sequences of NL and SQL code. For pretraining datasets that also include structured knowledge we apply the masked column recovery (MCR) objective, as introduced in Yin et al. (2020), which encourages the model to learn table schemas using the natural language statement and row information as context. In our implementation, 25% of the column names and data types (when available) are masked with a sentinel token. Note that only MCR is applied to the sequence containing the column names, to avoid overlapping MLM and MCR masking. More concretely, let $x_{\text{mask}} = (x_{\text{MLM}}, x_{\text{MCR}})$ be the input sequence combining MLM and MCR masking; then our masked span prediction loss $L_M$ over a sequence of length $T$ is:

$$L_M(\theta) = \sum_{t=1}^{T} -\log P_\theta \left( y_t \mid x_{\text{mask}}, y_{<t} \right),$$

where $y$ is the target sequence of masked-out spans, each span preceded by its sentinel token.

---

$^1$As discussed in Appendix A, we consider accepted answers, favorite answers, or answers that received upvotes above some fixed threshold as latent human supervision.

$^2$https://github.com/klahnakoski/mo-sql-parsing
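As an illustration, a minimal Python sketch of the span-corruption masking described above; the sampling details are simplified relative to T5's actual sampler, and the function name is ours:

```python
import random

def span_corrupt(tokens, noise_density=0.15, mean_span_len=3,
                 sentinel="<extra_id_{}>"):
    """T5-style span corruption (a sketch, not the exact T5 sampler):
    mask ~noise_density of the tokens in spans of ~mean_span_len,
    returning (input_tokens, target_tokens)."""
    n_to_mask = max(1, int(len(tokens) * noise_density))
    masked = [False] * len(tokens)
    remaining, attempts = n_to_mask, 0
    while remaining > 0 and attempts < 1000:
        attempts += 1
        # spans of length 1..5 have mean length 3
        length = min(remaining, len(tokens),
                     random.randint(1, 2 * mean_span_len - 1))
        start = random.randrange(0, len(tokens) - length + 1)
        if any(masked[start:start + length]):
            continue  # resample overlapping spans
        for i in range(start, start + length):
            masked[i] = True
        remaining -= length
    inputs, targets, span_id, in_span = [], [], 0, False
    for tok, m in zip(tokens, masked):
        if m:
            if not in_span:  # a new masked span begins
                inputs.append(sentinel.format(span_id))
                targets.append(sentinel.format(span_id))
                span_id += 1
            targets.append(tok)
            in_span = True
        else:
            inputs.append(tok)
            in_span = False
    return inputs, targets
```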
**Combining Objectives** Prior work shows the importance of MLM (Liu et al., 2019; Aroca-Ouellette and Rudzicz, 2020; Raffel et al., 2020) and the benefit of including a small percentage of context-to-output objectives. For instance, Tay et al. (2022) recommend that approximately 20% of the objective mixture be context-to-output. However, unlike Tay et al. (2022) we are not pretraining from scratch; rather, we seek to build on existing checkpoints, and hence we consider greater rates of context-to-output. In our implementation, we sample an objective per example during pretraining, where the pool of objectives depends on the data source of each example. Hence, each training mini-batch combines examples from multiple data sources that are formatted as a mix of objectives. Figure 2 summarizes our dataset and objective mix, showing the connection between each input data source and a corresponding objective.
**Context-to-Output Objectives** In addition to MLM-based objectives we include causal language modeling objectives (Radford et al., 2019; Liu et al., 2018), which partition sequences into contexts and outputs in order to mimic the format of many downstream tasks. For unimodal datasets, such as GitHub SQL, we create the context and output by uniformly sampling a split point based on line-breaks within each code example. For tabular datasets we treat the table as input and the paired NL as output, thereby teaching the model to connect the structured and unstructured information.
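A minimal sketch of the per-example objective choice and the line-break split used for unimodal data; the function names are ours, and the 50/50 rate is taken from the best configuration in Table 2:

```python
import random

def choose_objective(mlm_rate=0.5):
    # Per-example objective sampling; a 50/50 MLM vs. context-to-output
    # mix performs best in the paper's ablations (Table 2).
    return "mlm" if random.random() < mlm_rate else "context_to_output"

def split_code_example(code):
    # For unimodal data (e.g. GitHub SQL): uniformly sample a split
    # point on a line break; left side is context, right side is output.
    lines = code.split("\n")
    if len(lines) < 2:
        return code, ""
    cut = random.randint(1, len(lines) - 1)
    return "\n".join(lines[:cut]), "\n".join(lines[cut:])
```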
For Stack Overflow, the natural partition between a question and each of the answers defines the context-to-output splits. We use the augmentations described in Section 3.1 to create additional unique question-to-answer pairs. We apply dual learning to better align the question prompt with the answer.
Finally, for trimodal data like our augmented TAPEX we model Table + NL $\rightarrow$ SQL, or in the dual learning (Wang et al., 2021b) setting we model Table + SQL $\rightarrow$ NL. Thus, for a sequence $x$ of length $T$ with a split point $S \in (0, T)$ that is either randomly selected or based on a natural split in the data, we define the context-to-output loss $L_{C2O}$ as:

$$L_{C2O}(\theta) = \sum_{t=S}^{T} -\log P_\theta \left( y_t \mid z, y_{<t} \right),$$

where $z = x_{<S}$ is the left context and $y = x_{\geq S}$ the right output.
3.3 Unified Format for Learning from Structured Knowledge

In order to bridge the gap between pretraining and downstream tasks, we explore unified formats for structured knowledge. Connecting NL to structured knowledge is challenging with limited data. A unified table format, however, allows the model to leverage learning from large-scale pretraining for smaller datasets. Moreover, in some cases Xie et al. (2022) report worse performance for multi-task versus single-task training, which we suspect is due to inconsistent formatting. Thus, we linearize structured knowledge into both row- and column-centric formats. Figure 3 shows the row-centric format, and Figure 4 shows the equivalent information in the column-centric format.
Lastly, we use special tokens in the encoder to preface each data modality (NL, structured knowledge, and SQL), and encourage sharing across tasks with common modalities. Additional tags prompt the decoder with the desired task, reflecting each of our objectives: MLM, table-to-text, SQL-to-SQL, Table and NL-to-SQL, Stack Overflow question answering, and dual learning variations.
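As an illustration, a sketch of the two linearizations; the separators and special tokens here (`<table>`, `<col>`, the `row N:` prefixes) are placeholders, since the paper specifies its formats only via Figures 3 and 4:

```python
def linearize_row_centric(header, rows):
    # Row-centric: reads like natural language, one row after another.
    out = ["col: " + " | ".join(header)]
    for i, row in enumerate(rows):
        out.append("row %d: " % (i + 1) + " | ".join(str(v) for v in row))
    return " ".join(out)

def linearize_column_centric(header, rows):
    # Column-centric: groups each column name with its values, which is
    # closer to how SQL references a schema.
    cols = []
    for j, name in enumerate(header):
        values = " ".join(str(row[j]) for row in rows)
        cols.append("<col> %s : %s" % (name, values))
    return "<table> " + " ".join(cols)

# usage example with made-up data
header = ["city", "population"]
rows = [["Oslo", 709037], ["Bergen", 285911]]
print(linearize_row_centric(header, rows))
print(linearize_column_centric(header, rows))
```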
4 Experiments
4.1 Evaluation Setup
We evaluate our pretrained checkpoints on SQL tasks following the UnifiedSKG framework (Xie et al., 2022). Specifically, for text-to-SQL benchmarking we evaluate on Spider without database row information (Yu et al., 2019b) and WikiSQL with row information (Zhong et al., 2017), as well as the conversational text-to-SQL datasets SParC (Yu et al., 2019c) and CoSQL (Yu et al., 2019a); in alignment with our bimodal objectives we also evaluate on SQL2Text (Shu et al., 2021). For each dataset we use the pre-defined train, validation, and test splits. Appendix C lists our evaluation settings, Appendix D contains details on the evaluation datasets, and Appendix E includes additional results.
4.2 Main Results
We present our main results in Table 1, with baseline results as reported by each comparison approach. We group models with SQL-specific decoders on top, and encoder-decoders like STAMP that have more general token decoders on bottom. Overall we find that STAMP yields better results than domain-specific solutions and text- or code-only pretrained models. SMBOP + GRAPPA (Rubin and Berant, 2021) is similar to our work with multi-task learning and additional pretraining; however, they rely on a SQL-specific parsing algorithm, whereas our framework focuses on larger, more diverse sources of structured knowledge and a complementary multi-task learning strategy.
We highlight that pretraining on structured information alone, like TABERT (Yin et al., 2020), or on a general code pretraining dataset, like CodeT5 (Wang et al., 2021b), does not produce exceptional results on text-to-SQL. Likewise, a large multi-task learning approach like T0 performs worse than STAMP models and vanilla T5, indicating that the benefits of multi-task learning depend on having a degree of domain relevance. Specifically, T0’s multi-task learning approach, which centers on text-only domains, does not benefit SQL tasks. Lastly, despite constrained decoding being very different from our approach, we include results for PICARD (Scholak et al., 2021) because it is an extremely effective approach that complements STAMP.
4.3 Ablation Studies
Denoising versus Context-to-Output In Table 2 we report development set performance of STAMP models that build on the T5-base checkpoint. We train each model on our full row-centric dataset and vary only the objective mixture. Unlike prior work (Tay et al., 2022; Aroca-Ouellette and Rudzicz, 2020) that pretrains from scratch, during our additional structured-knowledge pretraining we observe that higher rates of context-to-output objectives tend to perform best.
At the extremes of the objective mix we see mixed results. Setting the MLM / context-to-output ratio to 100% / 0% improves performance on text-to-SQL, indicating the benefit of our pretraining data. At the other extreme, however, model performance suffers with no MLM and only context-to-output. By combining the two objectives we see the best performance overall. Specifically, an equal mix of MLM and completion, either throughout pretraining or after one epoch of entirely MLM training, results in noticeably higher performance compared to vanilla T5.
Our results complement those in literature (Tay et al., 2022; Wang et al., 2021b; Aghajanyan et al., 2021; Aribandi et al., 2021; Sanh et al., 2022; FitzGerald et al., 2022), showing the importance of mixing additional objectives with MLM. Unlike Tay et al. (2022), however, our results show that higher rates of context-to-output are optimal, which we attribute to our approach of building on prior checkpoints and not pretraining from scratch.
Tables versus SQL Datasets Table 3 presents an ablation study comparing STAMP and CodeSTAMP models trained on different pretraining data subsets. Our results show that adding SQL code to the data mix further boosts performance.
Table 1: Development set performance on text-to-SQL benchmarks for T5, CodeT5, and our results with additional pretraining on our structured knowledge data. All STAMP checkpoints train with a 50/50 mixture of context-to-output and MLM-based objectives. STAMP results are separated by variations in the pretraining data: CC and RC denote column- and row-centric table formats, respectively, and w/ Tables denotes the full pretraining dataset, whereas SQL-only is a subset that omits the NL+Table datasets. Note: a dagger (†) indicates a constrained decoding approach, which is complementary but not used in our work; models in italics are our work.
Table 2: Development set performance for T5-base, and base-sized STAMP models pretrained on our full row-centric dataset with varying objective mixes. For each pretrained STAMP model we specify the proportion of training examples using the MLM-based objective, with the remaining examples using a dataset-specific context-to-output objective. We also explore dynamic mixing ratios, where 100→50% represents training with 100% MLM in the first epoch, followed by a 50%/50% mix during the remaining epochs.
**Row-Centric versus Column-Centric** We pre-process the pretraining and benchmark datasets from UnifiedSKG (Xie et al., 2022) with consistent table formatting. Row-centric formats are more similar to natural language and do not require learning any new special tokens, which better leverages the original NL pretraining of T5. The column-centric format, by contrast, requires special tokens that preface the table, the columns, and each value in a column. While new special tokens must be learned from scratch, we hypothesized that the column-centric format is advantageous, since text-to-SQL is inherently more column- and schema-oriented and often not dependent on row information. Surprisingly, Table 3 shows no clear advantage for either RC or CC formats. In fact, the mixed results hold even across model sizes (Large vs. Base) and initial pretraining (T5 vs. CodeT5). Our results suggest that further pretraining on enough high-quality data helps to nullify the advantages or disadvantages of each table linearization method.
**T5 versus CodeT5 as Starting Point** Table 3 shows the high performance of base-sized CodeT5
Table 3: Development set performance on SQL benchmarks for the original T5-base, T5-large, CodeT5-base, and CodeT5-large checkpoints, as well as our results with additional pretraining on our structured knowledge pretraining dataset. All STAMP checkpoints train with a 50/50 mixture of context-to-output and MLM-based objectives. STAMP results are separated by variations in the pretraining data: CC and RC denote column- and row-centric table formats, respectively, and w/ Tables denotes the full pretraining dataset described in Section 3.1, whereas the SQL-only subset omits the Text+Table datasets. The best performer at each model size is shown in **bold**.
<table>
<thead>
<tr>
<th>Starting Checkpoint</th>
<th>Additional STAMP Pretraining Data</th>
<th>Spider (Exec ↑)</th>
<th>Sup. WikiSQL (EM ↑)</th>
<th>SParC (EM ↑)</th>
<th>CoSQL (EM ↑)</th>
<th>SQL2Text (BLEC ↑)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5-Large</td>
<td>—</td>
<td>71.7</td>
<td>75.3</td>
<td>57.4</td>
<td>48.8</td>
<td>93.4</td>
</tr>
<tr>
<td>T5-Large</td>
<td>RC, w/ Tables</td>
<td>74.4</td>
<td>78.9</td>
<td><strong>61.4</strong></td>
<td><strong>53.7</strong></td>
<td>93.0</td>
</tr>
<tr>
<td>T5-Large</td>
<td>RC, SQL-only</td>
<td>72.8</td>
<td>79.5</td>
<td>60.1</td>
<td>51.4</td>
<td><strong>93.6</strong></td>
</tr>
<tr>
<td>T5-Large</td>
<td>CC, w/ Tables</td>
<td><strong>76.3</strong></td>
<td>79.3</td>
<td>59.6</td>
<td>51.4</td>
<td>93.3</td>
</tr>
<tr>
<td>T5-Large</td>
<td>CC, SQL-only</td>
<td>74.5</td>
<td>79.1</td>
<td>51.9</td>
<td>50.9</td>
<td>93.3</td>
</tr>
<tr>
<td>CodeT5-Large</td>
<td>—</td>
<td>68.4</td>
<td>76.6</td>
<td>57.9</td>
<td>48.4</td>
<td>91.9</td>
</tr>
<tr>
<td>CodeT5-Large</td>
<td>RC, w/ Tables</td>
<td>71.9</td>
<td>84.4</td>
<td>59.7</td>
<td>50.9</td>
<td>92.1</td>
</tr>
<tr>
<td>CodeT5-Large</td>
<td>CC, w/ Tables</td>
<td>72.8</td>
<td><strong>84.7</strong></td>
<td>58.7</td>
<td>52.0</td>
<td>92.1</td>
</tr>
<tr>
<td>T5-Base</td>
<td>—</td>
<td>60.8</td>
<td>74.1</td>
<td>49.9</td>
<td>42.4</td>
<td>93.7</td>
</tr>
<tr>
<td>T5-Base</td>
<td>RC, w/ Tables</td>
<td>64.5</td>
<td>77.9</td>
<td>51.9</td>
<td>44.5</td>
<td>93.2</td>
</tr>
<tr>
<td>T5-Base</td>
<td>RC, SQL-only</td>
<td>61.7</td>
<td>77.8</td>
<td>52.4</td>
<td>42.8</td>
<td>93.4</td>
</tr>
<tr>
<td>T5-Base</td>
<td>CC, w/ Tables</td>
<td>60.5</td>
<td>79.5</td>
<td>49.9</td>
<td>41.3</td>
<td>93.9</td>
</tr>
<tr>
<td>T5-Base</td>
<td>CC, SQL-only</td>
<td>59.2</td>
<td>79.5</td>
<td>46.8</td>
<td>38.9</td>
<td><strong>94.0</strong></td>
</tr>
<tr>
<td>CodeT5-Base</td>
<td>—</td>
<td>67.1</td>
<td>76.0</td>
<td>54.4</td>
<td>47.2</td>
<td>93.5</td>
</tr>
<tr>
<td>CodeT5-Base</td>
<td>RC, w/ Tables</td>
<td>69.0</td>
<td>83.5</td>
<td><strong>55.6</strong></td>
<td><strong>47.7</strong></td>
<td>92.9</td>
</tr>
<tr>
<td>CodeT5-Base</td>
<td>CC, w/ Tables</td>
<td><strong>69.2</strong></td>
<td><strong>84.5</strong></td>
<td>54.7</td>
<td>46.9</td>
<td>93.4</td>
</tr>
</tbody>
</table>
and CodeSTAMP models. Relative to their T5 and STAMP counterparts, the base-sized CodeT5 and CodeSTAMP models show significant performance gains across all text-to-SQL benchmarks. In particular, models based on the CodeT5-base checkpoint show exceptional performance when given row information in the tables, as is the case for WikiSQL. Interestingly, models based on CodeT5 do not exhibit the same performance gains compared to those based on T5 for large-sized models. In fact, models based on CodeT5-large only excel at WikiSQL, whereas models based on T5-large excel in all other tasks. We hypothesize that large-sized models based on CodeT5 do not outperform their peers in the same way as the base-sized models due to scaling issues caused by CodeT5’s much smaller CodeSearchNet (Husain et al., 2020) pretraining dataset, especially when using a smaller dataset to train the larger model. Additionally, we see that models based on CodeT5 checkpoints tend to perform worse on SQL2Text, which is likely because natural language in CodeT5’s original pretraining data is limited to comments in code, and hence the ability to generate natural language may be underdeveloped relative to T5.
5 Conclusion
We present STAMP, a pretraining framework for encoder-decoders on SQL tasks. We introduce a large-scale pretraining dataset of tables, SQL code, discussions on Stack Overflow, and a modified TAPEX dataset (Liu et al., 2022). We complement our data with a multi-task learning framework to align the data modalities, finding that an equal mix of the objectives is optimal. We explore both row- and column-centric approaches to linearizing tables, creating a unified format across training stages. A column-centric format is often superior, challenging the conventional row-centric approach. Lastly, while PL pretraining may help generalization (Athiwaratkun et al., 2022), STAMP models based on T5 yield better performance.
---
$^3$Our results for T5-Large on Spider, SParC, and CoSQL differ from Xie et al. (2022) and Scholak et al. (2021). On Spider we achieve 3.4 percentage points higher than Xie et al. (2022), and 4.5 percentage points higher than Scholak et al. (2021). In our implementation we use a maximum input sequence length of 1024 and an output sequence length of 256 to avoid truncation.
6 Limitations
While our work displays many strengths, we highlight some important limitations in our analysis. Namely, we pretrain our STAMP models on a range of sources containing structured knowledge; however, our analysis is limited to text-to-SQL tasks and does not demonstrate whether such pretraining helps more generally on structured-knowledge tasks. For instance, STAMP pretrains on tables with (1) masked column recovery as a way to learn the structure of a table using the rows and natural language statement as context, and (2) a context-to-output objective that always includes the table in the context (when available), since this matches the format of text-to-SQL tasks. It is unclear if our objective choices for pretraining on tables perform equally well on the range of structured knowledge tasks, such as table question-answering, table summarization, data-to-text, fact verification, and others explored in Xie et al. (2022). Second, we acknowledge that significant GPU resources are required for pretraining, even in continued-pretraining approaches like ours, which limits the breadth of our ablation studies. Conversely, our work explores pretraining at smaller scales, where certain phenomena like strong zero-shot performance are unlikely. Pretraining specifically on structured knowledge has an unknown value at larger scales, with models having tens or hundreds of billions of parameters.
7 Ethics Statement
We acknowledge the importance of the ACL Ethics Policy and agree with it. Large language models can appear confident while providing false information. In our work we are fortunate that incorrect SQL output is verifiable and take care to report the true reliability of the systems. Additionally we acknowledge that large language models, such as those studied in this work, may generate toxic language (Gehman et al., 2020). While we avoid pretraining on data sources and content from web domains with offensive language, we acknowledge that even our data gathered from reputable publishers introduces bias (Bolukbasi et al., 2016).
Acknowledgements
We would like to thank Henry Zhu for providing a sql-to-text model that we used to augment TAPEX with natural language statements.
References
Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, and Yasemin Altun. 2022. Table-to-text Generation and Pre-training with TabT5.
Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, and Zhiheng Huang. 2022. Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact: A Large-scale Dataset for Table-based Fact Verification.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining Text Encoders as Discriminators Rather Than Generators.
Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V. Le. 2017. Don’t Decay the Learning Rate, Increase the Batch Size.
A.1 Stack Overflow Augmentations
We perform several augmentation steps on Stack Overflow examples to construct our pretraining dataset. Our first step is to create four augmented versions of each question using random word deletion, random word appending, synonym replacement, and paraphrasing. Next, we create up to five different combinations of input-label pairs by re-arranging the answers.
For some pertinent background on Stack Overflow, each example consists of a question and one or more answers. The user who answered the question can mark the answer that solved their problem as correct, and other users can upvote answers that they found useful as well.
Let \( N \) be the number of answers for a question. The following strategies are used to create the labels for the augmented examples:
1. The accepted answer (if there is one)
2. The most upvoted answer if it has been upvoted more than the accepted answer.
3. Concatenation of all answers
4. Randomly select an answer $A_i$ and append all answers up to and including that one to the question, then use the concatenation of all $A_{i+1}, A_{i+2} \ldots A_N$ answers as the label
5. Randomly select an answer, $A_i$, and append all answers up to and including that one to the question. Randomly select another answer, $A_k$, from the remaining $A_{i+1}, A_{i+2} \ldots A_N$ answers and use the concatenation of all $A_k, A_{k+1} \ldots A_N$ answers as the label
Each of these strategies is constrained by a total sequence length of 1024 tokens. If we need to truncate any tokens, we truncate them in the following order:
1. Text in Answer
2. Code in Question
3. Text in Question
Our intuition is that this is the order of least important to most important to preserve the logical relationship between question and answer, with code in the answer being the most critical (which is never truncated).
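A sketch of the five labeling strategies above in Python; the function shape and names are ours, and the sequence-length constraint and truncation order are omitted for brevity:

```python
import random

def make_labels(question, answers, accepted_idx=None, upvotes=None):
    """Returns a list of (input, label) pairs for one Stack Overflow
    question, following the five strategies described above."""
    pairs = []
    n = len(answers)
    # 1. the accepted answer, if any
    if accepted_idx is not None:
        pairs.append((question, answers[accepted_idx]))
    # 2. the most upvoted answer, if it beats the accepted one
    if upvotes:
        best = max(range(n), key=lambda i: upvotes[i])
        if accepted_idx is None or upvotes[best] > upvotes[accepted_idx]:
            pairs.append((question, answers[best]))
    # 3. concatenation of all answers
    pairs.append((question, "\n".join(answers)))
    if n > 1:
        # 4. append answers A_1..A_i to the question, predict A_{i+1}..A_N
        i = random.randrange(0, n - 1)
        ctx = question + "\n" + "\n".join(answers[: i + 1])
        pairs.append((ctx, "\n".join(answers[i + 1:])))
        # 5. same context, but start the label at a random later answer A_k
        k = random.randrange(i + 1, n)
        pairs.append((ctx, "\n".join(answers[k:])))
    return pairs
```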
A.2 Data Filtering
As briefly mentioned in 3.1, we filter noisy examples from both the table and SQL dataset. Below we provide more details on this pre-processing step.
Tabular Filtering Since table data is often web-scraped, it contains many noisy examples: specifically, examples where the table information has a tenuous relation to the paired natural language statement. Moreover, since our initial collection of raw data was much larger for table sources than for SQL sources, we chose to implement a filtering approach to reduce these noisy examples. Specifically, we first calculate the edit similarity between each sample’s table and NL statement, after removing special tokens and tags. We then compute the same metric on ToTTo, a high-quality table-to-text benchmark, and qualitatively chose our filtering threshold as 50.0, slightly lower than ToTTo’s average edit similarity. All samples from our Wiki, Web, and ArXiv table datasets with an edit similarity below 50.0 are removed. In total we remove approximately 74% of samples from the raw data.
Github SQL Filtering For the GitHub SQL data we again see a large proportion of noisy or repetitive samples in the raw data. Specifically, GitHub SQL files can contain many repetitive statements within one sample, such as thousands of consecutive INSERT statements that load data into a table. The INSERT statements are often either very repetitive or contain very noisy information like compressed images, PDFs, or spatial objects. Our filtering method largely consists of using regular expressions to identify such repetitive statements. After finding long sequences of INSERT statements, we keep only a random sample of 10 of them if they are repetitive but not overly long or unreadable; we remove all INSERT statements that load noisy information into a table. In total the number of samples stays approximately the same, but we reduce the size of the dataset by approximately 61%.
A.3 Pretraining Dataset Statistics
In Table 4 we provide summary statistics for the pretraining dataset, including each of the SQL and table subsets. Raw document counts help to show the amount of filtering applied to the raw data in order to reduce noisy and potentially detrimental samples, whereas the final training sample counts show the training dataset size after tokenizing and partitioning documents into sequences.
B Pretraining Hyperparameters
Batch size. For 3B and large models we train at a batch size of 64 for the first epoch; for most of the second and third epochs we double the batch size to 128, and for the final 5-10% of training we double it again to 256. Starting with a small batch size provides better gradient efficiency, while larger batch sizes give more precise gradient estimates, which is beneficial later in training (Smith et al., 2017). For base-sized models we opt for a batch size of 128 for all three epochs before the cooldown period.
Sequence length. Data are pre-processed and tokenized offline into sequences of at most 1024 tokens. We do not pack inputs; instead we use one example per input and pad accordingly. For the larger T5-3B model we found that training for the first 75-90% of steps on data pre-processed to a shorter maximum sequence length of 768 or 896, and the remainder of training on data with 1024 tokens, improved computational efficiency without a discernible degradation in performance. Encoder inputs begin with a special token indicating the data modality, and the decoder inputs begin with a special token indicating the desired task. All sequences end with the same end-of-sequence token as Raffel et al. (2020).
Optimization. All models are pretrained with the AdamW (Kingma and Ba, 2015) optimizer, using an initial learning rate of $1e^{-4}$ and momentum parameters $\beta_1 = 0.9$ and $\beta_2 = 0.98$. Our learning rate warms up linearly over the first 1% of training steps, and then decays following a fixed cosine annealing schedule to $1e^{-7}$ after approximately 3 epochs. We clip gradients to a maximum norm of 1.0 (Pascanu et al., 2013). We train models based on T5 (Raffel et al., 2020) using the bf16 data type, whereas for models based on CodeT5 (Wang et al., 2021b) we use the fp16 data type in order to match the data type from STAMP-CC.
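A sketch of the learning-rate schedule just described (linear warmup over the first 1% of steps, cosine decay to $1e^{-7}$); the function is our reconstruction, not the authors' code:

```python
import math

def learning_rate(step, total_steps, peak=1e-4, floor=1e-7, warmup_frac=0.01):
    # Linear warmup over the first warmup_frac of steps, then cosine
    # annealing from the peak down to the floor value.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak * step / warmup_steps
    progress = (step - warmup_steps) / float(total_steps - warmup_steps)
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```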
C Evaluation Settings

For finetuning we follow the experimental setup of UnifiedSKG (Xie et al., 2022). Specifically, we use the Adafactor optimizer with a decaying learning rate initially set to 5e-5, a batch size of 32, training for up to 200 epochs, and generation with a beam size of 1. However, for WikiSQL we set a batch size of 128, train for a maximum of 100 epochs, and use a beam size of 4. We use the same maximum input and output lengths as UnifiedSKG, except for Spider, SParC, and CoSQL, where we increase the maximum input length to 1024 and the output length to 256 sentence-piece tokens to avoid truncating the inputs or outputs.
<table>
<thead>
<tr>
<th>Data Source</th>
<th>Modalities</th>
<th>Num. Raw Documents, Initial (K)</th>
<th>Num. Raw Documents, Filtered (K)</th>
<th>Num. Training Samples (K)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Github SQL</td>
<td>SQL</td>
<td>1,026</td>
<td>1,019</td>
<td>1,918</td>
</tr>
<tr>
<td>Stack Overflow</td>
<td>NL, SQL</td>
<td>1,670</td>
<td>1,631</td>
<td>4,480</td>
</tr>
<tr>
<td>Aug. TAPEX</td>
<td>NL, Table, SQL</td>
<td>2,165</td>
<td>2,165</td>
<td>2,005</td>
</tr>
<tr>
<td>Wiki Tables</td>
<td>NL, Table</td>
<td>6,350</td>
<td>3,080</td>
<td>3,080</td>
</tr>
<tr>
<td>Web Tables</td>
<td>NL, Table</td>
<td>32,295</td>
<td>7,032</td>
<td>7,032</td>
</tr>
<tr>
<td>ArXiv Tables</td>
<td>NL, Table</td>
<td>119</td>
<td>22</td>
<td>24</td>
</tr>
<tr>
<td>Full Dataset</td>
<td>NL, Table, SQL</td>
<td>43,766</td>
<td>14,991</td>
<td>18,612</td>
</tr>
</tbody>
</table>
Table 4: STAMP pretraining dataset statistics by source. After the raw documents are filtered, we create training examples by partitioning documents into sequences of 1024 tokens, which can result in more training samples than the initial set of raw documents. In the case of Stack Overflow we also augment the data, creating a much larger collection of training samples from the initial pool of documents. Note: raw document counts and final training sample counts are listed in thousands (K); the final pretraining dataset contains 18,612,078 samples.
<table>
<thead>
<tr>
<th>Pretrained Model</th>
<th>Finetune Method</th>
<th>Spider (Exec ↑)</th>
<th>Sup. WikiSQL (EM ↑)</th>
<th>SParC (EM ↑)</th>
<th>CoSQL (EM ↑)</th>
</tr>
</thead>
<tbody>
<tr>
<td>STAMP-RC</td>
<td>STF</td>
<td>74.4</td>
<td>78.9</td>
<td>61.4</td>
<td>53.7</td>
</tr>
<tr>
<td>STAMP-RC</td>
<td>MTF</td>
<td>74.0</td>
<td>78.6</td>
<td>61.9</td>
<td>55.0</td>
</tr>
<tr>
<td>STAMP-CC</td>
<td>STF</td>
<td>76.3</td>
<td>79.3</td>
<td>59.6</td>
<td>51.4</td>
</tr>
<tr>
<td>STAMP-CC</td>
<td>MTF</td>
<td>73.9</td>
<td>79.1</td>
<td>61.3</td>
<td>54.2</td>
</tr>
<tr>
<td>CodeSTAMP-RC</td>
<td>STF</td>
<td>74.5</td>
<td>84.3</td>
<td>58.8</td>
<td>50.6</td>
</tr>
<tr>
<td>CodeSTAMP-RC</td>
<td>MTF</td>
<td>73.3</td>
<td>83.9</td>
<td>59.4</td>
<td>51.9</td>
</tr>
<tr>
<td>CodeSTAMP-CC</td>
<td>STF</td>
<td>72.8</td>
<td>84.7</td>
<td>58.7</td>
<td>52.0</td>
</tr>
<tr>
<td>CodeSTAMP-CC</td>
<td>MTF</td>
<td>71.3</td>
<td>83.5</td>
<td>58.3</td>
<td>50.8</td>
</tr>
</tbody>
</table>
Table 5: Development set performance on text-to-SQL benchmarks for large-sized STAMP and CodeSTAMP models that are either Single-Task Finetuned (STF) or Multi-Task Finetuned (MTF) on all text-to-SQL datasets simultaneously. All STAMP checkpoints are pretrained with a 50/50 mixture of context-to-output and MLM-based objectives on the full pretraining dataset. STAMP results are differentiated by whether they are trained with column-centric (CC) or row-centric (RC) table formats. We highlight in bold the results where multi-task finetuning outperforms single-task finetuning on an equivalent model.
D Evaluation Datasets
We evaluate our model on each of the aforementioned datasets using the standard metrics for each task. We use the standard train, validation, and test splits for each of the datasets.
**Spider** The Spider dataset has 10,181 question-query pairs with queries using 200 databases representing 138 different domains and tables that are joined via foreign keys. We use the standard training and development splits, where training, development, and test sets have a 7:1:2 ratio, and each database appears in only one set (Yu et al., 2019b).
**Fully Supervised WikiSQL** The WikiSQL dataset has 80,564 question-query pairs, involving over 30,000 tables from Wikipedia (Zhong et al., 2017).
Table 6: Average performance on SQL benchmarks over three finetuning runs with standard deviations. All STAMP checkpoints train with a 50/50 mixture of context-to-output and MLM-based objectives. STAMP results are separated by variations in the pretraining data, specifically CC and RC denote column- and row-centric table formats, respectively, and w/Tables denotes the full pretraining dataset whereas SQL-only is a subset that omits the NL+Table datasets. Note: A dagger (†) indicates datasets where only a development set is available for assessing variance in performance, and models in italics are our work.
E Additional Results
Single- versus Multi-Task Learning We explore the benefits of finetuning and evaluating either individually on each dataset (Single-Task Finetuning, STF) or finetuning on all of the text-to-SQL benchmarks simultaneously before evaluating (Multi-Task Finetuning, MTF). For multi-task finetuning we balance the sizes of the different datasets during training using the temperature up-sampling method proposed in Xie et al. (2022), with the temperature set to 2. The results of the ablation are presented in Table 5. We find mixed results for multi-task finetuning. In almost every model, MTF yields noticeably better performance on the conversational SQL datasets SParC and CoSQL, while results for Spider and WikiSQL are slightly worse. We suspect that the close similarity between SParC and CoSQL explains the mutual benefit of multi-task finetuning. On the other hand, Spider uses a schema-only input format, whereas WikiSQL includes database content and is typically less difficult than Spider.
Performance Confidence Intervals In Table 6 we report a more detailed look at our main results. Specifically, we report the average performance of our models over three finetuning runs and list the standard deviation of the performances.
ABSTRACT
The problem of managing evolving data has attracted considerable research attention. Researchers have focused on the modeling and querying of schema/instance-level structural changes, such as the addition, deletion and modification of attributes. Databases with such functionality are known as temporal databases. A limitation of temporal databases is that they treat changes as independent events, while often the appearance (or elimination) of some structure in the database is the result of an evolution of some existing structure. We claim that maintaining the causal relationship between the two structures is of major importance, since it allows additional reasoning to be performed and answers to be generated for queries that previously had no answers.
We present here a novel framework for exploiting the evolution relationships between the structures in the database. In particular, our system combines different structures that are associated through evolution relationships into virtual structures to be used during query answering. The virtual structures define “possible” database instances, in a fashion similar to the possible worlds in the probabilistic databases. The framework includes a query answering mechanism that allows queries to be answered over these possible databases without materializing them. Evaluation of such queries raises many interesting technical challenges, since it requires the discovery of Steiner forests on the evolution graphs. On this problem we have designed and implemented a new dynamic programming algorithm with exponential complexity in the size of the input query and polynomial complexity in terms of both the attribute and the evolution data sizes.
Categories and Subject Descriptors
H.m [Information Systems]: Miscellaneous
General Terms
Algorithms, Performance
1. INTRODUCTION
Advances in information and telecommunication technologies of the last two decades have allowed organizations and individuals alike to develop large scale data collections and make them available on-line. These collections are often about entities that persist over time and the changes that occur to their attributes/relationships. Considerable effort has gone toward the development of advanced solutions for managing the evolution of data [1, 22, 21], schema [5, 3, 19] and schema transformations [27, 26, 13]. Temporal databases are one of the outcomes of this effort, where the notion of versioning has been central [4, 6].
In temporal databases, users have the ability to access and query snapshots of the data at different points in time. Unfortunately, such work fails to capture the full spectrum of evolutionary phenomena. Specifically, those approaches are founded on the assumption that the nature of each real world entity represented in the database persists over time, e.g., students are added, modified, and eventually deleted, but never become professors, with a direct link between the student tuple and the professor one. As such, evolution amounts only to temporal changes of attributes/relationships [1, 5, 21]. Evolution of an entity that spans different concepts (e.g., student to professor, research lab to independent corporate entity) is unaccounted for. And so are evolution phenomena where one entity is split into several (e.g., Germany splitting into East and West Germany at the end of WW II), or the other way around (e.g., East and West Germany amalgamating into one entity at the end of the Cold War). The result of this is that historical queries, such as "give me all the heads-of-state of Germany between 1800 and 2000", are hard to deal with, as they essentially require hand-coding the history of Germany into several queries to be processed separately. Note that this may look similar to terminology evolution [25], i.e., using different terms to describe the same real world entity at different points in time, but it actually goes far beyond that.
This form of evolution finds a natural fit in Dataspace Systems [10, 16] that are anchored on the notion of an entity. Such entities may split/merge or otherwise evolve during their lifetime. Modeling and supporting evolution relationships for historical query processing finds many additional real-world applications. For instance, modern historians will be able to model and study the chains of human achievements and developments, e.g., how the concept of biotechnology evolved from its beginnings as an agricultural technology to the current notion that is coupled to genetics and molecular biology. Educators will better track how courses evolve and how the material and educational objectives for a course
Assume now a temporal database that models the above information as illustrated in Figure 1, and consider a user who is interested in finding the lab that invented the laser and the ASR patent. It is true that these two patents have been filed by different labs, the AT&T Bell Labs and the AT&T Labs Inc. Thus, the query will return no results. However, it can be noticed that the latter entity is an evolution of the former. It may be the case that the user does not have full knowledge of the way the labs have changed, or that in her own mind the two labs are still considered the same. We argue that instead of expecting the user to know all the details of the evolution granularity and the way the data has been stored, which means that the user's conceptual model should match the one of the database, we would like the system to try to match the user's conceptual model. This means that the system should have the evolution relationships represented explicitly and take them into account when evaluating a query. In particular, we want the system to treat the AT&T Bell Labs, the AT&T Labs Inc, and the AT&T Labs as one unified (virtual) entity. That unified entity is the inventor of both the laser and the ASR, and should be the main element of the response to the user's query.
Of course, the query response is based on the assumption that the user did not intend to distinguish between the three aforementioned labs. Since this is an assumption, it should be associated with some degree of confidence. Such a degree can be based, for instance, on the number of labs that had to be merged in order to produce the answer. A response that involves 2 evolution-related entities should have higher confidence than one involving 4.
As a similar example, consider a query asking for all the partners of AT&T Labs Inc. Apart from those explicitly stated in the data (in the specific case, none), a traversal of the history of the labs can produce additional items in the answer, consisting of the partners of its predecessors. The further this traversal goes, the less likely it is that this is what the user wanted; thus, the confidence of the answer that includes the partners of its predecessors should be reduced. Furthermore, if the evolution relationships also have an associated degree of confidence, i.e., less than 100% certainty, the confidence computation of the answers should take this into consideration as well.
### 3. DATA MODEL
We adopt a concept model [7] that is gaining popularity in many areas including databases [10]. Its fundamental component is the entity which is used to model a real world object. An entity is a data structure consisting of a unique identifier and a set of attributes. Each attribute has a name and a value. The value of an attribute can be an atomic value or an entity identifier. More formally, assume the existence of an infinite set of entity identifiers $\mathcal{O}$, an infinite set of names $\mathcal{N}$ and an infinite set of atomic values $\mathcal{V}$.
**Definition 3.1.** An attribute is a pair $\langle n,v \rangle$, with $n \in \mathcal{N}$ and $v \in \mathcal{V} \cup \mathcal{O}$. Attributes for which $v \in \mathcal{O}$ are specifically referred to as associations. Let $A = \mathcal{N} \times (\mathcal{V} \cup \mathcal{O})$ be the set of all the possible attributes. An entity is a tuple $\langle id, A \rangle$ where $A \subseteq A$, is finite, and $id \in \mathcal{O}$. The $id$ is referred to as the entity identifier while the set $A$ as the set of attributes of the entity.
We will use the symbol $E$ to denote the set of all possible entities that exist and we will also assume the existence of a Skolem function $Sk$ [15]. Recall that a Skolem function is a function that provides a unique value for each distinct combination of arguments. Each entity is uniquely identified by its identifier, thus, we will often use the terms entity and entity identifier interchangeably if there is no risk of confusion. A database is a collection of entities that is closed in terms of associations between the entities.
**Definition 3.2.** A database is a finite set of entities $E \subseteq \mathcal{E}$ such that for each association $\langle n, e' \rangle$ of an entity $e \in E$: $e' \in E$.
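As a concrete rendering of Definitions 3.1–3.2 (our sketch; the identifiers and attribute values are illustrative, not the paper's implementation), a database can be held as a dictionary from entity identifiers to attribute sets, with the Skolem function realized as any injective pairing:

```python
# Illustrative sketch (ours): a database maps entity identifiers to sets of
# (name, value) attribute pairs, where a value is an atomic value or
# another entity's identifier (an association).
db = {
    "e1": {("name", "AT&T Bell Labs"), ("isHolder", "Laser")},
    "e2": {("name", "AT&T Labs Inc."), ("isHolder", "ASR")},
}

def sk(id1: str, id2: str) -> str:
    # A Skolem function must yield a distinct identifier for each distinct
    # pair of arguments; string pairing suffices for a sketch.
    return f"Sk({id1},{id2})"

print(sk("e1", "e2"))  # a fresh identifier for a coalescence of e1 and e2
```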
As a query language we adopt a datalog style language. A query consists of a head and a body. The body is a conjunction of atoms. An atom is an expression of the form e(⟨n1,v1⟩, ⟨n2,v2⟩, ..., ⟨nk,vk⟩) or an arithmetic condition such as =, ≤, etc. The head is always a non-arithmetic atom. Given a database, the body of the query is said to be true if all its atoms are true. A non-arithmetic atom e(⟨n1,v1⟩, ⟨n2,v2⟩, ..., ⟨nk,vk⟩) is true if there is an entity with an identifier e and attributes ⟨ni, vi⟩ for every i=1..k. When the body of a query is true, the head is also said to be true. If a head e(⟨n1,v1⟩, ⟨n2,v2⟩, ..., ⟨nk,vk⟩) is true, the answer to the query is an entity with identifier e and attributes ⟨ni, vi⟩ for i=1..k.
The components $e$, $n_i$, and $v_i$ for $i=1..k$ of any atom in a query can be either constants or variables. Variables used in the head or in arithmetic atoms must also be used in some non-arithmetic atom in the body. If a variable is used at the beginning of an atom, it is bound to entity identifiers. If the variable is used inside the parentheses but before the ":" symbol, it is bound to attribute names, and if the variable is inside the parentheses after the ":" symbol, it is bound to attribute values. A variable assignment in a query is an assignment of its variables to constants. A true assignment is an assignment that makes the body of the query true. The answer set of a query involving variables is the union of the answers produced by the query for each true assignment.
**Example 3.3.** Consider the query:
$\$x(\text{isHolder: } \$y) \leftarrow \$x(\text{name: 'AT\&T Labs Inc.'}, \text{isHolder: } \$y)$
that looks for entities called "AT&T Labs Inc." that are holders of a patent. For every such entity that is found, an entity with the same identifier is produced in the answer set and has an attribute isHolder with the patent as its value.
In order to model evolution we need to model the lifespan of the real world objects that the entities represent and the evolution relationship between them. For the former, we assume that we have a temporal database, i.e., each entity is associated to a time period; however, this is not critical for this work so we will omit that part from the following discussions. To model the evolution relationship, on the other hand, we consider a special association that we elevate into a first-class citizen in the database. We call this association an evolution relationship. Intuitively, an evolution relationship from one entity to another is an association indicating that the real world object modeled by the latter is the result of some form of evolution of the object modeled by the former. In Figure 1, the dotted lines between the entities illustrate evolution relationships. A database with evolution relationships is an evolution database.
**Definition 3.4.** An evolution database is a tuple $(E, \Omega)$, such that $E$ is a database and $\Omega$ is a partial order relation over $E$. An evolution relationship is every association $\langle e_1, e_2 \rangle \in \Omega$.
Given an evolution database, one can construct a directed acyclic graph by considering as nodes the entities and as edges its evolution relationships. We refer to this graph as the evolution graph of the database.
Our proposal is that entities representing different evolution phases of the same real world object can be considered as one for query answering purposes. To formally describe this idea we introduce the notion of coalescence. Coalescence is defined only on entities that are connected through a series of evolution relationships; the coalescence of those entities is a new entity that replaces them and has as attributes the union of their attributes (including associations).
**Definition 3.5.** Given an evolution database $(E, \Omega)$, the coalescence of two entities $e_1{:}\langle id_1, A_1 \rangle$, $e_2{:}\langle id_2, A_2 \rangle \in E$, connected through an evolution relationship $ev$, is a new evolution database $(E', \Omega')$ such that $\Omega' = \Omega - \{ev\}$ and $E' = (E - \{e_1, e_2\}) \cup \{e_{new}\}$, where $e_{new}{:}\langle id_{new}, A_{new} \rangle$ is a new entity with a fresh identifier $id_{new} = Sk(id_1, id_2)$ and $A_{new} = A_1 \cup A_2$. Furthermore, each association $\langle n, id_1 \rangle$ or $\langle n, id_2 \rangle$ of an entity $e \in E'$ is replaced by $\langle n, id_{new} \rangle$. The relationship between the two databases is denoted as $(E, \Omega) \xrightarrow{ev} (E', \Omega')$.
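Continuing the earlier sketch (same identifier-to-attribute-set dictionary and `sk` as the Skolem function), a minimal coalescence operation could look as follows; note how incoming associations are redirected to the fresh identifier:

```python
def coalesce(db: dict, omega: set, ev: tuple) -> str:
    # db: identifier -> set of (name, value) pairs; omega: set of evolution
    # relationships (id1, id2). Coalesces the two entities of edge ev.
    id1, id2 = ev
    new_id = sk(id1, id2)
    db[new_id] = db.pop(id1) | db.pop(id2)        # A_new = A_1 union A_2
    for eid in db:                                 # redirect associations
        db[eid] = {(n, new_id if v in (id1, id2) else v) for n, v in db[eid]}
    omega.discard(ev)                              # Omega' = Omega - {ev}
    return new_id
```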
The Skolem function that we have mentioned earlier defines a partial order among the identifiers, and this partial order extends naturally to entities. We call that order subsumption.
**Definition 3.6.** An identifier $id_1$ is said to be subsumed by an identifier $id_2$, denoted as $id_1 \prec_{id} id_2$, if there is some identifier $id' \neq id_1$ and some $id''$ with $id_1 \preceq_{id} id''$ such that $id_2 = Sk(id'', id')$. An entity $e_1{:}\langle id_1, A_1 \rangle$ is said to be subsumed by an entity $e_2{:}\langle id_2, A_2 \rangle$, denoted as $e_1 \prec_e e_2$, if $id_1 \prec_{id} id_2$ and for every attribute $\langle n, v_1 \rangle \in A_1$ there is an attribute $\langle n, v_2 \rangle \in A_2$ such that $v_1 = v_2$ or, assuming that the attribute is an association, $v_1 \prec_{id} v_2$.
Given an evolution database $(E, \Omega)$ and a set $\Omega' \subseteq \Omega$, one can perform a series of consecutive coalescence operations, each one coalescing the two entities that an evolution relationship in $\Omega'$ associates.
**Definition 3.7.** Given an evolution database $D{:}(E, \Omega)$ and a set $\Omega' = \{m_1, m_2, \ldots, m_n\}$ such that $\Omega' \subseteq \Omega$, let $D_{\Omega'}$ be the evolution database generated by the sequence of coalescence operations $D \xrightarrow{m_1} D_1 \xrightarrow{m_2} \cdots \xrightarrow{m_n} D_{\Omega'}$. The possible world of $D$ according to $\Omega'$ is the database generated by simply omitting from $D_{\Omega'}$ all its evolution relationships.
Intuitively, a set of evolution relationships specifies sets of entities in a database that should be considered as one, while the possible world represents the database in which these entities have actually been coalesced. Our notion of a possible world is similar to the notion of possible worlds in probabilistic databases [8].
**Theorem 3.8.** The possible world of an evolution database $D{:}(E, \Omega)$ for a set $\Omega' \subseteq \Omega$ is unique.
Due to this uniqueness, a set $\Omega'$ of evolution relationships of a database can be used to refer to the possible world it defines.
According to the definition of a possible world, an evolution database can be seen as a shorthand of a set of databases, i.e., its possible worlds. Thus, a query on an evolution database can be seen as a shorthand for a query on its possible worlds. Based on this observation we define the semantics of query answering on an evolution database.
**Definition 3.9.** The evaluation of a query $q$ on an evolution database $D$ is the union of the results of the evaluation of the query on every possible world of $D$.
For a given query, there may be multiple possible worlds that generate the same results. To eliminate this redundancy we require every coalescence to be well-justified. In particular, our principle is that no possible world or variable assignment will be considered unless it generates some new results in the answer set. Furthermore, among the different possible worlds that generate the same results in the answer set, only the one that requires the smallest number of coalescences will be considered. To support this, we define a subsumption relationship among the variable assignments across different possible worlds and we redefine the semantics of the evaluation of a query.
4.2 Materializing all the possible worlds
Since the possible worlds do not depend on the query that needs to be evaluated, they can be pre-computed and stored in advance so that they are available at query time. Of course, as is the case for any materialization technique, the materialized data needs to be kept in sync with the evolution database when its data is modified. Despite the fact that this will require some effort, there are already well-known techniques for change propagation [3] that can be used. The major drawback, however, is the space overhead. A possible world contains all the attributes of the evolution database, but in fewer entities. Given that the number of attributes is typically larger than the number of entities, and that entities associated with evolution relationships are far fewer than the total number of entities in the database, we can safely assume that the size of a possible world will be similar to that of the evolution database. Thus, the total space required will be $2^{|\Omega|}$ times the size of the evolution database (one possible world for every subset of $\Omega$). The query answering time, on the other hand, will be $2^{|\Omega|}$ times the average evaluation time of the query on a possible world.
4.3 Materializing only the maximum world
An alternative solution is to generate and materialize the possible world $D_{max}$ generated by performing all possible coalescences. For a given evolution database $(E, \Omega)$, this is the one constructed according to the set of all evolution relationships in $\Omega$. Any query that has an answer in some possible world of the evolution database will also have an answer in this maximal possible world $D_{max}$. This solution has two main limitations. First, it does not follow our minimalistic principle and performs coalescences that are not needed, i.e., coalescences that do not lead to any additional results in the result set. Second, the generated world fails to include results that distinguish different phases of the lifespan of an entity (phases that may have to be considered individual entities), because the approach coalesces such phases into one entity just because they are connected through evolution relationships.
4.4 On-the-fly coalescence computations
To avoid any form of materialization, we propose an alternative technique that computes the answers on the fly by performing coalescences on a need-to-do basis. In particular, we identify the attributes that satisfy the different query conditions and, from them, the respective entities to which they belong. If all the attributes satisfying the conditions are on the same entity, then the entity is added to the answer set. However, different query conditions may be satisfied by attributes in different entities. In these cases we identify sets of entities such that, for each set, the union of the attributes of its entities satisfies all the query conditions. For each such set, we coalesce all its entities into one if they belong to the same connected component of the evolution graph. Performing the coalescence is basically like creating the respective possible world; however, we generate only the part of that world that is necessary to produce an answer to the query. In more detail, the steps of the algorithm are the following.
**[Step 1: Query Normalization]** We decompose every non-arithmetic atom in the body of the query that has more than one condition into a series of single-condition atoms. More specifically, any atom of the form $e(\langle n_1, v_1 \rangle, \langle n_2, v_2 \rangle, \ldots, \langle n_k, v_k \rangle)$ is decomposed into a conjunction of atoms $e(\langle n_1, v_1 \rangle), e(\langle n_2, v_2 \rangle), \ldots, e(\langle n_k, v_k \rangle)$.
**[Step 2: Individual Variable Assignments Generation]** For each non-arithmetic atom in the decomposed query, a list is constructed that contains assignments of the variables in the respective atom to constants that make the atom true. Assuming a total of $N$ non-arithmetic atoms after the decomposition, let $L_1, L_2, \ldots, L_N$ be the generated lists. Each variable assignment actually specifies the part of the evolution database that satisfies the condition described in the atom.
**[Step 3: Candidate Assignment Generation]** The elements of the lists generated in the previous step are combined together to form complete variable assignments, i.e., assignments that involve every variable in the body of the query. In particular, the cartesian product of the lists is created. Each element in the cartesian product is a tuple of assignments. By construction, each such tuple will contain at least one assignment for every variable that appears in the body of the query. If there are two assignments of the same attribute-bound variable to different values, the whole tuple is rejected. Any repetitive assignments that appear within a non-rejected tuple are removed to reduce redundancy. The result is a set of variable assignments, one from each of the tuples that have remained.
**[Step 4: Arithmetic Atom Satisfaction Verification]** Each assignment generated in the previous step for which at least one arithmetic atom does not evaluate to true is eliminated from the list.
**[Step 5: Candidate Coalescence Identification]** Within each of the remaining assignments we identify entity-bound variables that have been assigned more than one value. Normally this kind of assignment always evaluates to false. However, we treat such assignments as suggestions for coalescences, so that the assignment will become a true assignment (ref. previous Section). For each assignment $h$ in the list provided by Step 4, the set $V_h = \{V_{x_1}, V_{x_2}, \ldots, V_{x_k}\}$ is generated, where $V_{x_i}$ is the set of different entities that variable $x_i$ has been assigned in assignment $h$. In order for the assignments of variable $x_i$ to evaluate to true, we need to be able to coalesce the entities in $V_{x_i}$. To do so, these entities have to belong to the same connected component in the evolution graph of the database. If this is not the case, the assignment $h$ is ignored.
**[Step 6: Coalescence Realization & Cost Computation]** Given a set $V_h = \{V_{x_1}, V_{x_2}, \ldots, V_{x_k}\}$ for an assignment $h$ among those provided by Step 5, we need to find the minimum-cost coalescences that need to be performed such that all the entities in each set $V_{x_i}$, for $i=1..k$, are coalesced to the same entity. This will make the assignment $h$ a true assignment, in which case the head of the query can be computed and an answer generated in the answer set. The cost of the answer is the cost of the respective possible world, measured in terms of the number of coalescences that need to be performed. Finding the required coalescences that minimize this cost boils down to the problem of finding a Steiner forest [12].
**Example 4.1.** Let us consider again the query of Example 3.3. In Step 1, its body will be decomposed into two parts: $\$x(\text{name: 'AT\&T Labs Inc.'})$ and $\$x(\text{isHolder: } \$y)$. For those two parts, during Step 2, the lists $L_1 = \{\{\$x = e_1\}\}$ and $L_2 = \{\{\$x = e_1, \$y = \text{'P2PVideo'}\}, \{\$x = e_3, \$y = \text{'ASR'}\}, \{\$x = e_4, \$y = \text{'Laser'}\}, \{\$x = e_5, \$y = \text{'VoIP'}\}\}$ will be created. Step 3 creates their cartesian product $L = \{\{\$x = e_1, \$y = \text{'P2PVideo'}\}, \{\$x = e_1, \$x = e_3, \$y = \text{'ASR'}\}, \{\$x = e_1, \$x = e_4, \$y = \text{'Laser'}\}, \{\$x = e_1, \$x = e_5, \$y = \text{'VoIP'}\}\}$. The only attribute-bound variable is $\$y$, but it is never assigned more than one value at the same time, so nothing is eliminated. Since there are no arithmetic atoms, Step 4 makes no change to the list $L$ either. If, for instance, the query had an atom $\$y \neq \text{'VoIP'}$ in its body, then the last element of the list would have been eliminated. Step 5 identifies that the last three elements in $L$ have the entity-bound variable $\$x$ assigned to two different values; thus, it generates the candidate coalescences $V_1 = \{e_1, e_3\}$, $V_2 = \{e_1, e_4\}$ and $V_3 = \{e_1, e_5\}$. Step 6 determines that all three coalescences are possible: entities $e_1$, $e_2$ and $e_3$ will be coalesced for $V_1$; $e_1$, $e_2$, $e_3$ and $e_4$ for $V_2$; and $e_1$, $e_2$, $e_3$ and $e_5$ for $V_3$.
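Steps 3 and 5 amount to a cartesian product of the per-atom lists followed by collecting the values assigned to each variable; the sketch below (ours, with the Step 2 lists of the example hard-coded as data) reproduces the candidate coalescences:

```python
from itertools import product

# Step 2 lists from Example 4.1, hard-coded for illustration.
L1 = [{"x": {"e1"}}]
L2 = [{"x": {"e1"}, "y": {"P2PVideo"}}, {"x": {"e3"}, "y": {"ASR"}},
      {"x": {"e4"}, "y": {"Laser"}}, {"x": {"e5"}, "y": {"VoIP"}}]

for combo in product(L1, L2):                 # Step 3: cartesian product
    merged = {}                                # variable -> set of values
    for assignment in combo:
        for var, vals in assignment.items():
            merged.setdefault(var, set()).update(vals)
    if len(merged["y"]) > 1:                   # attribute-bound conflict
        continue                               # reject the tuple
    if len(merged["x"]) > 1:                   # Step 5: entity-bound variable
        print("candidate coalescence:", sorted(merged["x"]))
```

Running this prints the three candidate coalescences $\{e_1, e_3\}$, $\{e_1, e_4\}$ and $\{e_1, e_5\}$; the evolution graph then determines, per Step 6, which intermediate entities (here $e_2$) must join each coalescence.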
**Figure 2:** An illustration of the Steiner forest problem.
## 5. STEINER FOREST ALGORITHM
The last step of the evaluation algorithm presented in the previous section takes as input a set of entity sets and needs to perform a series of coalesce operations such that all the entities within each set will become one. To do so, it needs to find an interconnect on the evolution graph among all the entities within each set. Note that the interconnect may involve additional entities not in the set that unavoidably will also have to be coalesced with those in the set. Thus, it is important to find an interconnect that minimizes the total cost of the coalescences. The cost of a coalescence operation is the weight of the evolution relationship that connects the two entities that are coalesced. Typically, that cost is equal to one, meaning that the total cost is actually the total number of coalescence operations that need to be performed. For a given set of entities, this is known as the problem of finding the Steiner tree [11]. However, given a set of sets of entities, it turns out that finding the optimal solution, i.e., the minimum cost interconnect of all the entities, is not always the same as finding the Steiner tree for each of the sets individually. The specific problem is found in the literature as the Steiner forest problem [12].
The difference in the case of the Steiner forest is that edges can be "used" by more than one interconnect. More specifically, the Steiner tree problem aims at finding a tree on an undirected weighted graph that connects all the nodes in a set and has the minimum cost. In contrast to the minimum spanning tree, a Steiner tree is allowed to contain intermediate nodes in order to reduce its total cost. The Steiner forest problem takes as input a set of sets of nodes and needs to find a set of non-connected trees that make all the nodes in each individual set connected and whose total cost is minimal, even if the cost of each individual tree is not minimal. We refer to these individual trees with the term branches.
Figure 2 illustrates the difference through an example. Assume that we have the graph shown in the figure and the two sets of nodes $\{x,y\}$ and $\{u,v\}$. Clearly, the minimum cost branch that connects nodes $x$ and $y$ is the one that goes through the nodes $a$, $b$ and $c$. Similarly, the minimum cost branch that connects $u$ and $v$ is the one that goes through the nodes $e$, $f$ and $g$. Each of the two branches has cost 4 (the number of edges in the branch), thus, the total cost will be 8. However, if instead we connect all four nodes $x$, $y$, $u$ and $v$ through the tree that uses the nodes $i$, $j$, $k$ and $m$, then, although the two nodes in each set are connected with a path of 5 edges, the total cost is 7.
Algorithm 1 Steiner tree algorithm
Input: graph $G = (N, E)$, cost function $f : E \to \mathbb{R}^+$, groups $\mathcal{V} = V_1, \ldots, V_L$
Output: $\text{ST}(p)$ for each element $p \in \text{flat}(\mathcal{V})$
1: $Q_T$: priority queue sorted in increasing order of tree cost
2: $Q_T \leftarrow \emptyset$
3: for all $s_i \in \text{maxflat}(\mathcal{V})$ do
4: enqueue $T(s_i, \{s_i\})$ into $Q_T$
5: end for
6: while $Q_T \neq \emptyset$ do
7: dequeue $Q_T$ to $T(v, p)$
8: if $p \in \text{flat}(V)$ then
9: $\text{ST}(p) = T(v, p)$
10: end if
11: if $\text{ST}$ has all values then
12: return $\text{ST}$
13: end if
14: for all $u \in N(v)$ do
15: if $T(v, p) \oplus (v, u) < T(u, p)$ then
16: $T(u, p) \leftarrow T(v, p) \oplus (v, u)$
17: update $Q_T$ with the new $T(u, p)$
18: end if
19: end for
20: $p_1 \leftarrow p$
21: for all $p_2$, s.t. $p_1 \cap p_2 = \emptyset$ do
22: if $T(v, p_1) \oplus T(v, p_2) < T(v, p_1 \cup p_2)$ then
23: $T(v, p_1 \cup p_2) \leftarrow T(v, p_1) \oplus T(v, p_2)$
24: update $Q_T$ with the new $T(v, p_1 \cup p_2)$
25: end if
26: end for
27: end while
Formally, the Steiner forest problem is defined as follows. Given a graph $G = (N, E)$ with a cost function $f : E \to \mathbb{R}^+$, alongside a set of groups of nodes $\mathcal{V} = V_1, \ldots, V_L$, where $V_i \subseteq N$, find a set $C \subseteq E$ such that, in the subgraph formed by $C$, all the nodes of every group $V_i$ belong to the same connected component and $\sum_{c_i \in C} f(c_i)$ is minimal.
The literature contains a number of approximate solutions [2][18][14] as well as a number of exact solutions using dynamic programming [11][9][20] for the discovery of Steiner trees. However, for the Steiner forest problem (which is known to be NP-hard [12]), although there are approximate solutions [12], no optimal algorithm has been proposed so far. In the current work we make a first attempt in that direction by describing a solution that is based on dynamic programming and is constructed by extending an existing Steiner tree discovery algorithm.
To describe our solution it is necessary to introduce the set $\text{flat}(\mathcal{V})$. Each element in $\text{flat}(\mathcal{V})$ is a set of nodes created by taking the union of the nodes in a subset of $\mathcal{V}$. More specifically, $\text{flat}(\mathcal{V}) = \{U \mid U = \bigcup_{V_i \in S} V_i, S \subseteq \mathcal{V}\}$. Clearly $\text{flat}(\mathcal{V})$ has $2^L$ members. We denote by $\text{maxflat}(\mathcal{V})$ the maximal element of $\text{flat}(\mathcal{V})$, which is the set of all nodes that can be found in the sets of $\mathcal{V}$, i.e., $\text{maxflat}(\mathcal{V}) = \{n \mid n \in V_1 \cup \ldots \cup V_L\}$.
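Both sets are cheap to compute directly for small $L$; a sketch (ours):

```python
from itertools import combinations

def flat(V):
    # All unions of subsets of V: 2^L subsets (distinct unions may coincide).
    return {frozenset().union(*S)
            for r in range(len(V) + 1)
            for S in combinations(V, r)}

def maxflat(V):
    return frozenset().union(*V)

V = [frozenset({"x", "y"}), frozenset({"u", "v"})]
print(sorted(map(sorted, flat(V))))  # [], [u,v], [u,v,x,y], [x,y]
print(sorted(maxflat(V)))            # ['u', 'v', 'x', 'y']
```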
Our solution for the computation of the Steiner forest consists of two parts. In the first part, we compute the Steiner trees for every member of the $\text{flat}(V)$ set, and in the second part we use the computed Steiner trees to generate the Steiner forest on $\mathcal{V}$.
The state-of-the-art optimal (i.e., no approximation) algorithm for the Steiner tree problem is a dynamic programming solution developed in the context of keyword searching in relational data [9]. The algorithm is called the Dynamic Programming Best First (DPBF) algorithm and is exponential in the number of input nodes and polynomial with respect to the size of the graph. We extend DPBF in order to find a set of Steiner trees, in particular a Steiner tree for every element in $\text{flat}(\mathcal{V})$. The intuition behind the extension is that we initially solve the Steiner tree problem for $\text{maxflat}(\mathcal{V})$ and continue iteratively until the Steiner trees for every element in $\text{flat}(\mathcal{V})$ have been computed. We present next a brief description of DPBF alongside our extension.
Let $T(v, p)$ denote the minimum cost tree rooted at $v$ that includes the set of nodes $p \subseteq \text{maxflat}(\mathcal{V})$. Note that by definition, the cost of the tree $T(s, \{s\})$ is 0, for every $s \in \text{maxflat}(\mathcal{V})$.
Trees can be iteratively merged in order to generate larger trees by using the following three rules.
$$T(v, p) = \min(T_g(v, p), T_m(v, p))$$ \hspace{1cm} (1)
$$T_g(v, p) = \min_{u \in N(v)} \left( T(u, p) \oplus (v, u) \right)$$ \hspace{1cm} (2)
$$T_m(v, p_1 \cup p_2) = \min_{p_1 \cap p_2 = \emptyset} \left( T(v, p_1) \oplus T(v, p_2) \right)$$ \hspace{1cm} (3)
where $\oplus$ is an operator that merges two trees (or a tree and an edge) into a new one and $N(v)$ is the set of neighbour nodes of node $v$.
In [9] it was proved that these equations are dynamic programming equations leading to the optimal Steiner tree solution for the $\text{maxflat}(\mathcal{V})$ set of nodes. To find it, the DPBF algorithm employs Dijkstra's shortest path search in the space of the trees $T(v, p)$. The steps of the Steiner tree computation are shown in Algorithm 1. In particular, we maintain a priority queue $Q_T$ that keeps, in ascending cost order, the minimum cost trees that have been found at any given point in time. Naturally, a dequeue operation retrieves the tree with the minimal cost. Using this greedy strategy we look for the next minimal tree, which can be obtained from the current minimal one. In contrast to DPBF, we do not stop when the best tree has been found, i.e., when the solution for $\text{maxflat}(\mathcal{V})$ has been reached, but we keep collecting minimal trees (lines 7-10) until the trees for all elements in $\text{flat}(\mathcal{V})$ have been computed (lines 11-13).
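As a concrete (and deliberately simplified) illustration, the sketch below is our own rendering of the dynamic program, not the authors' implementation: it tracks only tree costs rather than tree edges, adds a tie-breaking counter to the priority queue, and runs the queue to exhaustion instead of stopping early, which corresponds to collecting a tree for every element of $\text{flat}(\mathcal{V})$.

```python
import heapq
from itertools import count

def steiner_tree_costs(graph, terminals):
    # graph: {node: {neighbour: edge_cost}}; terminals: iterable of nodes.
    # Returns {frozenset p: cost of the cheapest tree spanning p}.
    best = {}                                    # (v, frozenset p) -> cost
    heap, tie = [], count()
    for s in terminals:
        best[(s, frozenset([s]))] = 0            # T(s, {s}) has cost 0
        heapq.heappush(heap, (0, next(tie), s, frozenset([s])))
    while heap:
        cost, _, v, p = heapq.heappop(heap)
        if cost > best.get((v, p), float("inf")):
            continue                             # stale queue entry
        for u, w in graph[v].items():            # grow rule, equation (2)
            if cost + w < best.get((u, p), float("inf")):
                best[(u, p)] = cost + w
                heapq.heappush(heap, (cost + w, next(tie), u, p))
        for (v2, p2), c2 in list(best.items()):  # merge rule, equation (3)
            if v2 == v and not (p & p2):
                q = p | p2
                if cost + c2 < best.get((v, q), float("inf")):
                    best[(v, q)] = cost + c2
                    heapq.heappush(heap, (cost + c2, next(tie), v, q))
    trees = {}
    for (v, p), c in best.items():               # minimise over roots v
        trees[p] = min(c, trees.get(p, float("inf")))
    return trees

# Tiny check on a path graph x - a - y with unit edge costs:
g = {"x": {"a": 1}, "a": {"x": 1, "y": 1}, "y": {"a": 1}}
print(steiner_tree_costs(g, ["x", "y"])[frozenset({"x", "y"})])  # -> 2
```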
To prove that all the elements of $\text{flat}(\mathcal{V})$ are found during this procedure, it suffices to show that our extension corresponds to finding all single-source shortest paths in Dijkstra's algorithm. The time and space complexity for finding the Steiner trees is $O(3^{l}n + 2^{l}((l + \log n)n + m))$ and $O(2^{l}n)$, respectively, where $l = \sum_{i=1}^{L} l_i$, $n$ and $m$ are the numbers of nodes and edges of the graph $G$, and $l_i$ is the size of the $i$-th set $V_i$ in the input set $\mathcal{V}$ of the algorithm.
Once all the Steiner trees for $\text{flat}(\mathcal{V})$ have been computed, we use them to find the Steiner forest for $\mathcal{V}$. The Steiner forest problem has an optimal substructure and its subproblems overlap, which means that we can find a dynamic programming solution for it. To show this, we first consider the case $L=1$, i.e., the case in which we have only one group of nodes. In that case finding the Steiner forest is equivalent to finding the Steiner tree for the single set of nodes that we have. Assume now that $L>1$, i.e., the input set $\mathcal{V}$ is $\{V_1, \ldots, V_L\}$, and that we have already computed the Steiner forests for every proper subset $V \subset \mathcal{V}$. Let $SF(V)$ denote the Steiner forest for an input set $V$. We do not know the exact structure of $SF(\mathcal{V})$, i.e., how many branches it has and which elements of $\mathcal{V}$ are included in each. Therefore, we need to test all possible hypotheses about the forest structure, of which there are $2^L$, and pick the one that has minimal cost. For instance, assume that the forest has a branch that includes all the nodes in $V_1$. The total cost of the forest under that assumption is the sum of the cost of the Steiner forest on $\{V_1\}$ and the cost of the Steiner forest for $\{V_2, \ldots, V_L\}$, which is a proper subset of $\mathcal{V}$ and hence considered known. The Steiner forest on $\{V_1\}$ is actually a Steiner tree. This is based on the following lemma.
**Lemma 5.1.** Each branch of a Steiner forest is a Steiner tree.
Algorithm 2: Steiner forest algorithm
Input: \( G = (N, E), V = \{ V_1, \ldots, V_L \}, ST(s) \forall s \in \text{flat}(V) \)
Output: \( SF(V) \)
1: for all \( V_i \in \mathcal{V} \) do
2: \( SF(V_i) = ST(V_i) \)
3: end for
4: for \( i = 2 \) to \( L - 1 \) do
5: for all \( H \subseteq V \) and \( |H| = i \) do
6: \( u \leftarrow \infty \)
7: for all \( E \subseteq H \) and \( E \neq \emptyset \) do
8: \( u \leftarrow \min(u, ST(maxflat(E)) \oplus SF(H \setminus E)) \)
9: end for
10: \( SF(H) \leftarrow u \)
11: end for
12: end for
13: \( u \leftarrow \infty \)
14: for all \( H \subseteq V \) and \( H \neq \emptyset \) do
15: \( u \leftarrow \min(u, ST(maxflat(H)) \oplus SF(V \setminus H)) \)
16: end for
17: \( SF(V) \leftarrow u \)
**Proof.** The proof is by contradiction. Assume that a branch of the forest is not a Steiner tree. Then it can be replaced with a Steiner tree, reducing the overall cost of the Steiner forest. This means that the initial forest was not minimal, a contradiction.
We can formally express the above reasoning as:
\[
SF(V) = \min_{H \subseteq V} \left( ST(maxflat(H)) \oplus SF(V \setminus H) \right) \tag{4}
\]
Using the above equation in conjunction with the fact that $SF(\mathcal{V}) = ST(V_1)$ if $\mathcal{V} = \{V_1\}$, we construct an algorithm (Algorithm 2) that finds the Steiner forest in a bottom-up fashion. The time and space requirements of this algorithm are $O(3^L - 2^L(L/2 - 1) - 1)$ and $O(2^L)$, respectively. Summing this with the complexities of the first part gives a total time complexity of $O(3^{l}n + 2^{l}((l + \log n)n + m) + 3^L - 2^L(L/2 - 1) - 1)$ with a space requirement of $O(2^{l}n + 2^L)$.
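The bottom-up recurrence of equation (4) is straightforward to prototype; the sketch below (ours) memoizes forest costs over subsets of group indices and assumes a `tree_cost` callback, which could be backed by the Steiner-tree sketch given earlier:

```python
from itertools import combinations

def steiner_forest_cost(groups, tree_cost):
    # groups: list of terminal groups V_1..V_L (sets of nodes);
    # tree_cost(nodes): Steiner tree cost of a node set.
    L = len(groups)
    SF = {frozenset(): 0}                        # forest cost of empty input
    for size in range(1, L + 1):                 # bottom-up over |H|
        for H in map(frozenset, combinations(range(L), size)):
            SF[H] = min(
                tree_cost(frozenset().union(*(groups[i] for i in E)))
                + SF[H - frozenset(E)]           # equation (4)
                for k in range(1, size + 1)
                for E in combinations(sorted(H), k))
    return SF[frozenset(range(L))]
```

For the two groups of Figure 2, this procedure would compare the hypothesis of two separate branches (cost 8) against the single shared tree (cost 7) and return 7 under the figure's edge costs.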
6. QUERY EVALUATION OPTIMIZATION
In the case of top-k query processing there is no need to compute all possible Steiner forests only to reject some of them later. It is important to prune as early as possible the cases which are not expected to lead to any of the top-k answers. We have developed a technique that achieves this, based on the following lemma.
**Lemma 6.1.** Given two sets of sets of nodes \( V' \) and \( V'' \) on a graph \( G \) for which \( V' \subseteq V'' \): \( cost(SF(V')) \leq cost(SF(V'')) \).
**Proof.** The proof is based on the minimality of a Steiner forest. Let $SF(V')$ and $SF(V'')$ be Steiner forests for $V'$ and $V''$, with costs $w_1$ and $w_2$, respectively. If $w_1 > w_2$, then we can remove from $SF(V'')$ the branches covering $V'' \setminus V'$ and obtain a forest that covers $V'$ with cost at most $w_2 < w_1$, which contradicts the fact that $SF(V')$ is a minimal Steiner forest.
To compute the top-k answers to a query we do the following. Assume that $B = \{V_1, \ldots, V_r\}$ is a set of inputs for the Steiner forest algorithm. First, we find $B_{\min} \subseteq B$ such that for each $V' \in B_{\min}$ there is no $V'' \in B$ with $V'' \subset V'$. Then, we compute the Steiner forest for each element in $B_{\min}$. According to Lemma 6.1 and the construction procedure of $B_{\min}$, the top-1 answer is guaranteed to be among the computed Steiner forests. We remove the input which corresponds to that top-1 answer from $B$ and then continue with the computation of Steiner forests, updating $B_{\min}$. The above steps are repeated until $k$ answers have been found.
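A sketch (ours) of this pruning loop; `forest_cost` is a placeholder for the Steiner forest computation, and a practical version would memoize its results rather than recompute them:

```python
def minimal_inputs(B):
    # Inclusion-minimal Steiner forest inputs: by Lemma 6.1 the cheapest
    # remaining answer is produced by one of these, so the rest can wait.
    return [V1 for V1 in B if not any(V2 < V1 for V2 in B)]

def top_k(B, forest_cost, k):
    # B: iterable of inputs, each a frozenset of frozensets of nodes.
    answers, B = [], list(B)
    while B and len(answers) < k:
        best = min(minimal_inputs(B), key=forest_cost)
        answers.append((best, forest_cost(best)))
        B.remove(best)
    return answers
```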
7. EXPERIMENTS
To evaluate the efficiency and effectiveness of our approach we performed two kinds of experiments. First, we studied the behavior of the Steiner forest discovery algorithm in isolation, and then we evaluated the performance of the query answering mechanism we have developed, which internally uses the Steiner forest algorithm. We also studied the effectiveness of our optimization technique for top-k query answering.
In the experiments we used both synthetic and real data. We use the term non-evolution data to refer to entities and attributes, and the term evolution data to refer to the evolution relationships and, more generally, to the evolution graph. We noticed that in the real datasets the non-evolution data were much larger than the evolution data, and we maintained a similar ratio during our synthetic data generation.
For the synthetic data generation we used the Erdős-Rényi graph generator [17], which produces random graphs in which the probability of an edge between two nodes is constant and independent of the other edges. Since many real world data follow a power law distribution, for the non-evolution synthetic data we used Zipf's distribution. In our own implementation of the Zipfian distribution, as the rank we considered the number of occurrences of an attribute-value pair in the entire dataset (e.g., if the attribute $\langle \text{State, CA} \rangle$ appeared 15 times, its rank was 15). This allowed us to model the fact that there are few frequent attribute-value pairs while the majority are rare. The real corpora that we used had similar properties. We will refer to the synthetic dataset generated using this method as ER-SYNTH.
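The paper specifies the generator only through its parameters (exponent, number of elements, number of attributes), so the following sketch (ours) makes the common assumption that the frequency of the $r$-th ranked pair falls off as $1/r^{\text{exponent}}$:

```python
import random

def zipf_frequencies(num_pairs, exponent, max_freq):
    # Assumed Zipfian form: the r-th most common attribute-value pair
    # occurs about max_freq / r^exponent times, so a few pairs are
    # frequent and most are rare (at least once each).
    return [max(1, round(max_freq / r ** exponent))
            for r in range(1, num_pairs + 1)]

random.seed(0)
freqs = zipf_frequencies(num_pairs=15, exponent=3.0, max_freq=15)
occurrences = [f"attr{r}" for r, f in enumerate(freqs, 1) for _ in range(f)]
random.shuffle(occurrences)  # attribute occurrences to assign to entities
print(freqs)                 # e.g. [15, 2, 1, 1, ...]
```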
For the real dataset we used an extract from the trademark corpora available from the United States Patent and Trademark Office.\(^1\) Trademarks are a kind of intellectual property which may belong to an individual or a company. If the owner of some trademark changes, the trademark has to be re-registered accordingly. Analyzing the trademark owner lists we could extract sequences of companies that have registered the same trademark, and we used this as an indication of evolution. We constructed the evolution graphs by considering each such pair as an evolution relationship. The dataset we generated that way from the USPTO files contained approximately 16K unique companies, 200K attributes (information about companies such as name, place of registration and so on), and an evolution graph with 573 components of sizes between 5 and 373. To make the dataset extracted from real data even richer, i.e., with components of higher complexity, we used two graph merging strategies. In the first, a new evolution component is constructed by connecting two components through an artificial evolution relationship edge between two random nodes from the two components. We refer to this kind of merge as CHAIN, because it creates a chain of source graphs. In the second strategy, components are merged by choosing an arbitrary node from one component and then adding evolution relationship edges to some random node of every other component. We refer to this method as STAR. Datasets generated using these methods will be denoted as REAL-CHAIN and REAL-STAR, respectively.
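The two merging strategies are straightforward to express over components represented as node lists; a sketch (ours, with toy component contents):

```python
import random

def merge_chain(components):
    # CHAIN: connect consecutive components with one artificial evolution
    # edge between random nodes, yielding a chain of the source graphs.
    return [(random.choice(a), random.choice(b))
            for a, b in zip(components, components[1:])]

def merge_star(components):
    # STAR: pick an arbitrary node of the first component and add an
    # evolution edge to a random node of every other component.
    hub = random.choice(components[0])
    return [(hub, random.choice(c)) for c in components[1:]]

random.seed(1)
comps = [["a1", "a2"], ["b1", "b2", "b3"], ["c1"]]
print(merge_chain(comps))
print(merge_star(comps))
```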
The naive evaluation strategies that were described in Sec-
\(^1\)http://www.uspto.gov/
We kept the $\sum_{i=1}^{L} |V_i|$ of the input to the Steiner forest algorithm fixed. The current experiment showed that the execution time depends fully on $\sum_{i=1}^{L} |V_i|$ and not on $L$ itself. This means that within a reasonable range of query sizes the number of forest branches does not have any influence on the performance.
Scaling the Graph. In this experiment we studied the Steiner forest discovery time with respect to the size of the graph. We used three kinds of graph data: ER-SYNTH, REAL-CHAIN and REAL-STAR, with sizes from 25 to 250 nodes with a step of 25.
For the synthetic dataset the numbers of edges and nodes were almost the same. We generated 25 random inputs to the Steiner forest problem with $L = 2$, $|V_1| = 3$, and $|V_2| = 3$. The results of this experiment are presented in Figure 3(c). The discovery time shows a linear trend, as expected, and interestingly the execution time was always below one second.
We also studied the scalability of the algorithm in terms of the parameter \( L \). For three queries with \( \sum |V_i| = 6 \) and \( L = 1, 2 \) and 3 we varied the evolution graph size from 25 to 250 with step 25. The graph we used was the ER-SYNTH with the same number of nodes and edges as before. The results are shown in Figure 3(d), where it can be observed that the scalability of the algorithm depends on the total number of elements in the sets in the input set \( V \), i.e., the \( \sum_{i=1}^{L} |V_i| \), and not on the number of forest branches, i.e., the number \( L \), at least for values of \( \sum_{i=1}^{L} |V_i| \) up to 10.
7.2 Query Evaluation
Apart from experimenting with the Steiner forest algorithm in isolation, we ran a number of experiments to evaluate the query answering mechanism we have developed for evolution databases. The query evaluation time depends not only on the evolution graph size and structure but also on the size and structure of the whole evolution database. First, we analyzed the behavior of the system with respect to the query size. The query size is determined by the number of distinct variables and their number of occurrences in the query. We started with a 1-variable query and observed its behavior as the size increases. Then, we tested the scalability of the query evaluation mechanism as a function of the evolution graph only. Finally, we studied the scalability as a function of the data (i.e., attributes and associations) and found that their distribution (but not their size) can dramatically affect the performance of the system.
In the synthetic data analysis, we generated values following the Zipfian distribution for the attributes/associations. We controlled the generation of the ER-SYNTH dataset through four parameters: the pool of entities, the exponent used to adjust the "steepness" of the Zipfian distribution, the number of elements that describes the maximum frequency of an attribute or association, and the number of attributes. The values of the parameters for the synthetic data were chosen to coincide with those of the real corpora.
Scaling the Query. We considered a number of 1-variable queries with a body of the form:
\[ \$x(\text{attr}_1: \text{value}_1), \ldots, \$x(\text{attr}_N: \text{value}_N) \]
and we performed a number of experiments for different values of $N$, i.e., the number of atoms in the query. For every atom we randomly chose an attribute-value pair from a pool of available distinct attribute-name-value pairs. The ER-SYNTH graph that was generated had 57 nodes and 65 edges.
Figure 3: Steiner forest discovery performance
Figure 4: Query evaluation performance
The results are shown in Figure 4(a). The same figure includes the results of the query evaluation on the real dataset, which had a size similar to the synthetic one. For the generation of their non-evolution data we set the exponent to 3 and the number of elements parameter to 15, and their total number was 537. The results in Figure 4(a) are on a logarithmic scale and confirm the expectation that the query evaluation time grows exponentially as the number of atoms in the query grows. If we compare these results with those of the Steiner forest algorithm for the respective case, it follows that the integrated system adds a notable overhead on top of the Steiner forest algorithm execution time. This is due to the number of coalescence candidates and the number of Steiner forests that need to be computed in order to obtain the cost of the elements in the answer set. Although the parameters for the generation of the synthetic and real data coincided, their trends were different, as Figure 4(c) illustrates.
We further tested how the number of entity variables in the query affects the performance. Note that we are mainly interested in the entity-bound variables. Let $M$ represent the number of distinct entity-bound variables in the query, and $M_i$ the number of appearances of the $i$-th variable in the query. Note that the distinct variables will require solving a Steiner forest problem in which the input $V=\{V_1, \ldots, V_L\}$ has $L=M$ and $|V_i|=M_i$, for each $i=1..M$. The total number of variable appearances in the query is naturally $\sum_{i=1}^{M} M_i$.
In the experiment, we chose a constant value for $\sum_{i=1}^{M} M_i$ and ran queries for $M=1$, 2 or 3. As a dataset we used the ER-SYNTH with 57 nodes and 65 edges. 537 attributes were generated with the exponent parameter set to 3 and the number of elements parameter set to 15. A total of 53 synthetic associations were also generated, with the exponent parameter set to 3 and the number of elements parameter set to 10. We used 25 randomly generated queries for each of the 3 values of $M$ and took their average execution time. We did multiple runs of the above experiments for different values of $\sum_{i=1}^{M} M_i$ between 4 and 10. The outcome of the experiments is shown in Figure 4(b) on a logarithmic scale. Clearly, the number of branches in a forest did not affect the query evaluation time, i.e., queries with many variables showed the same increase in time as the 1-variable query for the same $\sum_{i=1}^{M} M_i$.
Scaling the Data. In this experiment we examined the query evaluation time with respect to the size of the evolution graph. As evolution data, we used both real and synthetic sources. We used a series of graphs (and their attributes as non-evolution data) with sizes from 25 to 250 nodes with step 25; the number of edges was 110% of the number of nodes for all graphs. For the real data we used both the REAL-CHAIN and the REAL-STAR datasets. For each graph we generated 25 random queries with 3 distinct variables, i.e., $M=3$, where the variables had $M_1=2$, $M_2=2$ and $M_3=3$ appearances in the query, and we measured the average time required to evaluate them. As a synthetic dataset the ER-SYNTH was used, generated to be of the same size as before but with the following Zipfian distribution parameters: exponent 3, number of elements 15, and a number of attributes 10 times larger than the number of nodes. Note that we did not generate associations because the trademark dataset did not have any association that we could use as a guide. Figure 4(c) depicts the results of this experiment. It shows a linear growth in time accompanied by increasing oscillations, which can be explained by the growing amount of non-evolution data, i.e., the number of coalescence candidates may become too large for evolution graphs of considerable size.
Furthermore, we studied how the query evaluation time scales for different values of $M$, i.e., for different numbers of distinct variables but with the same total number of variable appearances in the query (i.e., the same $\sum_{i=1}^{M} M_i$). We used the ER-SYNTH dataset again with sizes from 25 to 250, using a step of 25. The number of evolution relationships was 110% of the number of entities. For each case, we generated 25 random queries: $M=1$ with $M_1=6$; $M=2$ with $M_1=3$ and $M_2=3$; and $M=3$ with $M_1=3$, $M_2=2$ and $M_3=2$. We executed these queries and measured the average evaluation time. The non-evolution data followed the Zipfian distribution with exponent 3; the number of elements was 15 and the total number of attributes was 10 times larger than the number of nodes (entities). For the associations, the exponent was 3, the number of elements was 10 and their total number was 5 times larger than the respective number of nodes. The results are presented in Figure 4(d). Similarly to the previous experiment, we observed a linear growth with increasing oscillations.
Evolution scalability for different forest structures. We further examined how the number of evolution graph components influences the query evaluation time. For this purpose, we generated data using ER-SYNTH, and in particular 5 datasets of evolution graphs with a total size of 300 nodes and 330 edges. The sets had 1, 2, 3, 4 and 5 evolution graphs, respectively. For each set we ran 25 random queries with two distinct variables ($L=2$) that appeared in the query 3 times each, i.e., $M_1=3$ and $M_2=3$, and measured their average execution time. As non-evolution data, we generated attributes and associations with exponent parameter values of 2.5, 3 and 3.5; the number of elements and the number of attributes/associations were 15 and 1000 in one case and 10 and 100 in the other. Figure 5(a) contains a table with the query evaluation time for each number of branches and each exponent value. From the results, we observed a dramatic decrease in time as the number of evolution graph components grows. This can be explained by the fact that the query evaluation load is "smeared" across a number of evolution graph components.
Data distribution dependency. Finally, we studied the properties of the system in relation to the data distribution parameter, namely the exponent of Zipf's distribution. The query optimizer described in Section 6 was taken into consideration here, and we analyzed how the non-evolution data affect top-k query answering. For this experiment we used the following input parameters: 25 random queries with $M=2$ distinct variables, with $M_1 = 3$ and $M_2 = 3$ appearances of each distinct variable in the query. We used an ER-SYNTH dataset whose evolution graph had $n = 57$ nodes and $m = 65$ evolution edges. We also had 10000 attributes distributed over 30 entities, and 1000 associations distributed over 15 entities. The exponent varied from 2.25 to 3.05 with a step of 0.1. The results of this experiment are presented in Figure 5(b). For small exponents the difference between regular query answering and top-10 or top-1 answering was significant. To justify this, recall that the number of pruned candidates depends on how different the input sets in the Steiner forest algorithm input are (ref. Section 6); thus, when the exponent is small the input sets share many entities.
8. CONCLUSION
In this work we presented a novel framework for dealing with the evolution of entities at different granularity levels. We have made associations between entities, indicating that they represent the same real world object in different evolution phases, first-class citizens of the system. We have designed and implemented a technique that allows query answering over such databases even if the evolution model that the user has in mind is of a different granularity than the one used in the database. The solution requires the computation of a Steiner forest, for which we have presented a novel algorithm that computes an optimal solution. Finally, we have performed an extensive experimental evaluation to determine the efficiency of our technique.
References
We reduce synthesis for $CTL^*$ properties to synthesis for LTL. In the context of model checking this is impossible — $CTL^*$ is more expressive than LTL. Yet, in synthesis we have knowledge of the system structure and we can add new outputs. These outputs can be used to encode witnesses of the satisfaction of $CTL^*$ subformulas directly into the system. This way, we construct an LTL formula, over old and new outputs and original inputs, which is realisable if, and only if, the original $CTL^*$ formula is realisable. The $CTL^*$-via-LTL synthesis approach preserves the problem complexity, although it might increase the minimal system size. We implemented the reduction, and evaluated the $CTL^*$-via-LTL synthesiser on several examples.
1 Introduction
In reactive synthesis we automatically construct a system from a given specification in some temporal logic. The problem was introduced by Church for Monadic Second Order Logic [4]. Later Pnueli introduced Linear Temporal Logic (LTL) [15] and together with Rosner proved 2EXPTIME-completeness of the reactive synthesis problem for LTL [16]. In parallel, Emerson and Clarke introduced Computation Tree Logic (CTL) [5], and later Emerson and Halpern introduced Computation Tree Star Logic ($CTL^*$) [6], which subsumes both CTL and LTL. Kupferman and Vardi showed [12] that the synthesis problem for $CTL^*$ is 2EXPTIME-complete.
Intuitively, LTL allows one to reason about infinite computations. The logic has temporal operators, e.g., $G$ (always) and $F$ (eventually), and allows one to state properties like “every request is eventually granted” ($G(r \rightarrow Fg)$). A system satisfies a given LTL property if all its computations satisfy it.
In contrast, CTL and $CTL^*$ reason about computation trees, usually derived by unfolding the system. The logics have—in addition to temporal operators—path quantifiers: $A$ (on all paths) and $E$ (there exists a path). CTL forbids arbitrary nesting of path quantifiers and temporal operators: they must interleave. E.g., $AGg$ (“on all paths we always grant”) is a CTL formula, but $AGFg$ (“on all paths we infinitely often grant”) is not a CTL formula. $CTL^*$ lifts this limitation.
The expressive powers of CTL and LTL are incomparable: there are systems indistinguishable by CTL but distinguishable by LTL, and vice versa. One important property inexpressible in LTL is the resettability property: “there is always a way to reach the ‘reset’ state” ($AGEFreset$).
There was a time when CTL and LTL competed for “best logic for model checking” [20]. Nowadays most model checkers use LTL. LTL is also prevalent in reactive synthesis. SYNTCOMP [9]—the reactive synthesis competition with the goal to popularise reactive synthesis—has two distinct tracks, and both use LTL as their specification language.
Yet LTL leaves the designer without structural properties. One solution is to develop general $CTL^*$ synthesisers like the one in [10]. Another solution is to transform the $CTL^*$ synthesis problem into a form understandable to LTL synthesisers, i.e., to reduce $CTL^*$ synthesis to LTL synthesis. Such a
---
*The author order was decided by tossing a coin.*
reduction would automatically transfer performance advances in LTL synthesisers to a CTL* synthesiser. This paper shows one such reduction.
Our reduction of CTL* synthesis to LTL synthesis works as follows.
First, recall how the standard CTL* model checking works [2]. The verifier introduces a proposition for every state subformula—formulas starting with an A or an E path quantifier—of a given CTL* formula. Then the verifier annotates system states with these propositions, in a bottom-up fashion, starting with propositions that describe subformulas over original propositions (system inputs and outputs) only. Thus the system satisfies the CTL* formula iff the initial system state is annotated with the proposition describing the whole CTL* formula (assuming that the CTL* formula starts with A or E).
Now let us look into CTL* synthesis. The synthesiser has the flexibility to choose the system structure, as long as it satisfies a given specification. We introduce new propositions—outputs that later can be hidden from the user—for state subformulas of the CTL* formula, just like in the model checking case above. We also introduce additional propositions for existentially quantified subformulas—to encode the witnesses of their satisfaction. Such propositions describe the directions (inputs) the system should take to satisfy existentially quantified path formulas. The requirement that new propositions indeed denote the truth of the subformulas can be stated in LTL. For example, for a state subformula Aϕ, we introduce proposition pAϕ, and require G[pAϕ → ϕ'], where ϕ' is ϕ with state subformulas substituted by the propositions. For an existential subformula Eϕ, we introduce proposition pEϕ and require, roughly, G[pEϕ → ((GdpEϕ) → ϕ')], which states: if the proposition pEϕ holds, then the path along directions encoded by dpEϕ satisfies ϕ' (where ϕ' is as before). We wrote “roughly”, because there can be several different witnesses for the same existential subformula starting at different system states: they may meet in the same system state, but depart afterwards—then, to be able to depart from the meeting state, each witness should have its own direction d. We show that, for each existential subformula, a number ≈ 2^(|ΦCTL*|) of witnesses is sufficient, where ΦCTL* is a given CTL* formula. This makes the LTL formula exponential in the size of the CTL* formula, but the special—conjunctive—nature of the LTL formula ensures that the synthesis complexity is 2EXPTIME wrt. |ΦCTL*|.
Our reduction is “if and only if”, and it preserves the synthesis complexity. However, it may increase the size of the system, and is not very well suited to establish unrealisability. Of course, to show that the CTL* formula is unrealisable, one could reduce CTL* synthesis to LTL synthesis, then reduce the LTL synthesis problem to solving parity games, and derive the unrealisability from there. But the standard approach for unrealisability checking—by synthesising the dualised LTL specification—does not seem to be practical, since the automaton for the negated LTL formula explodes in size.
Finally, we have implemented the converter from CTL* into LTL, and evaluated the CTL*-via-LTL synthesis approach, using two LTL synthesisers and a CTL* synthesiser [10], on several examples. The experimental results show that such an approach works very well—outperforming the specialised CTL* synthesiser [10]—when the number of CTL*-specific formulas is small.
The paper structure is as follows. Section 2 defines Büchi and co-Büchi word automata, tree automata, CTL* with inputs, Moore systems, computation trees, and other useful notions. Section 3 contains the main contribution: it describes the reduction. In Section 4 we briefly discuss checking unrealisability of CTL* specifications. Section 5 describes the experimental setup, specifications, solvers used, and synthesis timings. We conclude in Section 6.
---
1Reducing LTL synthesis to solving parity games is practical, as SYNTCOMP’17 [9] showed: such a synthesiser, ltl2syn, was among the fastest.
2Available at https://github.com/5nizza/party-elli, branch “cav17”
2 Definitions
Notation: \( \mathbb{B} = \{ \text{true, false} \} \) is the set of Boolean values, \( \mathbb{N} \) is the set of natural numbers (excluding 0), \([i,j]\) for integers \( i \leq j \) is the set \( \{i, ..., j\} \), and \([k]\) is \( \{1, ..., k\} \) for \( k \in \mathbb{N} \). By default, we use natural numbers.
In this paper we consider finite systems and automata.
2.1 Moore Systems
A (Moore) system \( M \) is a tuple \((I, O, T, t_0, \tau, out)\) where \( I \) and \( O \) are disjoint sets of input and output variables, \( T \) is the set of states, \( t_0 \in T \) is the initial state, \( \tau : T \times 2^I \rightarrow T \) is a transition function, and \( out : T \rightarrow 2^O \) is the output function that labels each state with a set of output variables. Note that systems have no dead ends and have a transition for every input. We write \( t \xrightarrow{i,o} t' \) when \( t' = \tau(t,i) \) and \( out(t) = o \).
For the rest of the section, fix a system \( M = (I, O, T, t_0, \tau, out) \).
A system path is a sequence \( t_1t_2... \in T^\omega \) such that, for every \( i \), there is \( e \in 2^I \) with \( \tau(t_i,e) = t_{i+1} \). An input-labeled system path is a sequence \((t_1,e_1)(t_2,e_2) ... \in (T \times 2^I)^\omega \) where \( \tau(t_i,e_i) = t_{i+1} \) for every \( i \). A system trace starting from \( t_1 \in T \) is a sequence \((o_1 \cup e_1)(o_2 \cup e_2) ... \in (2^{I \cup O})^\omega \), for which there exists an input-labeled system path \((t_1,e_1)(t_2,e_2) ... \) with \( o_i = out(t_i) \) for every \( i \). Note that, since systems are Moore, the output \( o_i \) cannot “react” to the input \( e_i \). I.e., the outputs are “delayed” with respect to inputs.
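To make the definition concrete, the following is a minimal Python sketch (our own illustration, not code from the paper's tooling) of a Moore system together with the computation of a finite trace prefix; note how the output is delayed with respect to the input:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class MooreSystem:
    inputs: frozenset      # I
    outputs: frozenset     # O
    t0: object             # initial state t_0
    tau: Callable          # (state, set of inputs) -> state
    out: Callable          # state -> set of outputs

def trace_prefix(m, input_word):
    """Return the prefix (o_1 ∪ e_1)(o_2 ∪ e_2)... of the system trace for
    a finite sequence of input letters e_i ⊆ I. The output o_i = out(t_i)
    is emitted before the input e_i is consumed (Moore)."""
    t, letters = m.t0, []
    for e in input_word:
        letters.append(m.out(t) | e)
        t = m.tau(t, e)
    return letters

# A one-bit system: grants iff the previous input contained 'r'.
sys = MooreSystem(
    inputs=frozenset({"r"}), outputs=frozenset({"g"}),
    t0=False,
    tau=lambda t, e: "r" in e,
    out=lambda t: frozenset({"g"}) if t else frozenset(),
)
print(trace_prefix(sys, [frozenset({"r"}), frozenset(), frozenset({"r"})]))
```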
2.2 Trees
An (infinite) tree is a tuple \((D, L, V \subseteq D^*, l : V \rightarrow L)\), where
- \( D \) is the set of directions,
- \( L \) is the set of node labels,
- \( V \) is the set of nodes satisfying: (i) \( \varepsilon \in V \) is called the root (the empty sequence), (ii) \( V \) is closed under the prefix operation (i.e., every node is connected to the root), (iii) for every \( n \in V \) there exists \( d \in D \) such that \( n \cdot d \in V \) (i.e., there are no leaves),
- \( l \) is the nodes labeling function.
A tree \((D, L, V, l)\) is exhaustive iff \( V = D^* \).
A tree path is a sequence \( n_1n_2... \in V^\omega \) such that, for every \( i \), there is \( d \in D \) with \( n_{i+1} = n_i \cdot d \).
In contexts where \( I \) and \( O \) are inputs and outputs, we call an exhaustive tree \((D = 2^I, L = 2^O, V = D^*, l : V \rightarrow 2^O)\) a computation tree. We omit \( D \) and \( L \) when they are clear from the context. E.g. we can write \((V = (2^I)^*, l : V \rightarrow 2^O)\) instead of \((2^I, 2^O, V = (2^I)^*, l : V \rightarrow 2^O)\).
With every system \( M = (I, O, T, t_0, \tau, out) \) we associate the computation tree \((D, L, V, l)\) such that, for every \( n \in V \): \( l(n) = out(\tau(t_0, n)) \), where \( \tau(t_0, n) \) is the state, in which the system, starting in the initial state \( t_0 \), ends after reading the input word \( n \). We call such a tree a system computation tree.
A computation tree is regular iff it is a system computation tree for some system.
2.3 CTL* with Inputs (release PNF) and LTL
For this section, fix two disjoint sets: inputs \( I \) and outputs \( O \). Below we define CTL* with inputs (in release positive normal form). The definition differentiates inputs and outputs (see Remark 1).
Syntax of CTL* with inputs. State formulas have the grammar:
\[
\Phi = \text{true} \mid \text{false} \mid o \mid \neg o \mid \Phi \land \Phi \mid \Phi \lor \Phi \mid A \varphi \mid E \varphi
\]
where $o \in O$ and $\varphi$ is a path formula. Path formulas are defined by the grammar:
$$\varphi = \Phi \mid i \mid \neg i \mid \varphi \land \varphi \mid \varphi \lor \varphi \mid X \varphi \mid \varphi \, U \, \varphi \mid \varphi \, R \, \varphi,$$
where $i \in I$. The temporal operators $G$ and $F$ are defined as usual.
The above grammar describes the $CTL^\ast$ formulas in positive normal form. A general $CTL^\ast$ formula (in which negations can appear anywhere) can be converted into a formula of this form with no size blowup, using the equivalence $\neg(\varphi_1 \, U \, \varphi_2) \equiv \neg \varphi_1 \, R \, \neg \varphi_2$.
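As an illustration of this conversion, here is a small sketch (the tuple-based AST encoding is our own choice): negations are pushed to the atoms using De Morgan's laws, the duality of U and R, and the duality of the path quantifiers A and E.

```python
# Formulas as nested tuples: ("p", name), ("not", f), ("and", f, g),
# ("or", f, g), ("X", f), ("U", f, g), ("R", f, g), ("A", f), ("E", f).
def pnf(f):
    """Push negations inward, yielding release positive normal form."""
    op = f[0]
    if op == "not":
        return neg(f[1])
    if op == "p":
        return f
    return (op,) + tuple(pnf(g) for g in f[1:])

def neg(f):
    """Return the PNF of the negation of f."""
    op = f[0]
    if op == "p":    return ("not", f)          # negation rests on the atom
    if op == "not":  return pnf(f[1])           # double negation
    if op == "and":  return ("or",  neg(f[1]), neg(f[2]))
    if op == "or":   return ("and", neg(f[1]), neg(f[2]))
    if op == "X":    return ("X", neg(f[1]))
    if op == "U":    return ("R", neg(f[1]), neg(f[2]))  # ¬(f U g) ≡ ¬f R ¬g
    if op == "R":    return ("U", neg(f[1]), neg(f[2]))
    if op == "A":    return ("E", neg(f[1]))
    if op == "E":    return ("A", neg(f[1]))

# ¬A(r U g) becomes E(¬r R ¬g):
print(neg(("A", ("U", ("p", "r"), ("p", "g")))))
```

The conversion is linear in the formula size, which is exactly the "no size blowup" claim above.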
**Semantics of $CTL^\ast$ with inputs.** We define the semantics of $CTL^\ast$ with respect to a computation tree $(V, l)$. The definition is very similar to the standard one [2], except for a few cases involving inputs (marked with “+”).
Let $n \in V$ and $o \in O$. Then:
- $n \not\models \Phi$ iff $n \models \Phi$ does not hold
- $n \models \text{true}$ and $n \not\models \text{false}$
- $n \models o$ iff $o \in l(n), n \models \neg o$ iff $o \not\in l(n)$
- $n \models \Phi_1 \land \Phi_2$ iff $n \models \Phi_1$ and $n \models \Phi_2$. Similarly for $\Phi_1 \lor \Phi_2$.
- $n \models A \varphi$ iff for all tree paths $\pi$ starting from $n$: $\pi \models \varphi$. For $E \varphi$, replace “for all” with “there exists”.
Let $\pi = n_1n_2... \in V^\omega$ be a tree path, $i \in I$, and $n_2 = n_1 \cdot e$ where $e \in 2^I$. For $k \in \mathbb{N}$, define $\pi[k] = n_kn_{k+1}...$, i.e., the suffix of $\pi$ starting in $n_k$. Then:
- $\pi \models \Phi$ iff $n_1 \models \Phi$
- $\pi \models i$ iff $i \in e$, $\pi \models \neg i$ iff $i \not\in e$. Note how inputs are shifted wrt. outputs.
- $\pi \models \varphi_1 \land \varphi_2$ iff $\pi \models \varphi_1$ and $\pi \models \varphi_2$. Similarly for $\varphi_1 \lor \varphi_2$.
- $\pi \models X \varphi$ iff $\pi[2] \models \varphi$
- $\pi \models \varphi_1 \, U \, \varphi_2$ iff $\exists l \in \mathbb{N} : (\pi[l] \models \varphi_2 \land \forall m \in [1, l - 1] : \pi[m] \models \varphi_1)$
- $\pi \models \varphi_1 \, R \, \varphi_2$ iff $(\forall l \in \mathbb{N} : \pi[l] \models \varphi_2) \lor (\exists l \in \mathbb{N} : \pi[l] \models \varphi_1 \land \forall m \in [1, l] : \pi[m] \models \varphi_2)$
A computation tree $(V, l)$ satisfies a $CTL^\ast$ state formula $\Phi$, written $(V, l) \models \Phi$, iff the root node satisfies it. A system $M$ satisfies a $CTL^\ast$ state formula $\Phi$, written $M \models \Phi$, iff its computation tree satisfies it.
**Remark 1** (Subtleties). Note that $(V, l) \models i \land o$ is not defined, since $i \land o$ is not a state formula. Let $r \in I$ and $g \in O$. By the semantics, $E r \equiv \text{true}$ and $E \neg r \equiv \text{true}$, while $E g \equiv g$ and $E \neg g \equiv \neg g$. These are consequences of how we group inputs with outputs.
**LTL.** The syntax of LTL formula (in general form) is:
$$\phi = \text{true} \mid \text{false} \mid p \mid \neg \phi \mid \phi \land \phi \mid \phi \lor \phi \mid X \phi \mid \phi \, U \, \phi,$$
where $p \in I \cup O$. Temporal operators $G$ and $F$ are defined as usual. The semantics is standard (see e.g. [2]), and can be derived from that of $CTL^\ast$ assuming that $\pi \models \neg \phi$ iff $\pi \not\models \phi$. A computation tree $(V, l)$ satisfies an LTL formula $\phi$, written $(V, l) \models \phi$, iff all tree paths starting in the root satisfy it. A system satisfies an LTL formula iff its computation tree satisfies it.
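As an aside, on ultimately periodic words (lassos, stem·loop^ω; these suffice for regular systems), LTL satisfaction is decidable by a simple walk: from any position, the reachable positions repeat after at most |stem|+|loop| steps, which bounds the search for U. A sketch of an evaluator (our illustration, not from the paper; formulas are nested tuples as in the sketch above, and the loop must be non-empty):

```python
def holds(f, word_stem, word_loop, pos=0):
    """Check an LTL formula (tuples: ("p",a), ("not",f), ("and",f,g),
    ("or",f,g), ("X",f), ("U",f,g)) on the lasso word stem . loop^ω.
    Letters are sets of atomic propositions; `pos` indexes stem+loop."""
    word = word_stem + word_loop
    s, n = len(word_stem), len(word)
    nxt = lambda p: p + 1 if p + 1 < n else s   # wrap back into the loop
    def ev(f, p):
        op = f[0]
        if op == "p":   return f[1] in word[p]
        if op == "not": return not ev(f[1], p)
        if op == "and": return ev(f[1], p) and ev(f[2], p)
        if op == "or":  return ev(f[1], p) or ev(f[2], p)
        if op == "X":   return ev(f[1], nxt(p))
        if op == "U":   # after n steps the reachable positions repeat
            q = p
            for _ in range(n + 1):
                if ev(f[2], q): return True
                if not ev(f[1], q): return False
                q = nxt(q)
            return False
        raise ValueError(op)
    return ev(f, pos)

# G g, encoded as ¬(true U ¬g), on the word ({g})^ω:
g = ("p", "g")
true = ("or", g, ("not", g))
Gg = ("not", ("U", true, ("not", g)))
print(holds(Gg, [{"g"}], [{"g"}]))   # True
```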
2.4 Word Automata
A word automaton $A$ is a tuple $(\Sigma, Q, q_0, \delta, acc)$ where $\Sigma$ is an alphabet, $Q$ is a set of states, $q_0 \in Q$ is the initial state, $\delta : Q \times \Sigma \to 2^Q \setminus \{\emptyset\}$ is a transition relation, $acc : Q^\omega \to \mathbb{B}$ is a path acceptance condition. Note that word automata have no dead ends and have a transition for every letter of the alphabet. A word automaton is deterministic when $|\delta(q, \sigma)| = 1$ for every $(q, \sigma) \in Q \times \Sigma$.
For the rest of this section, fix a word automaton $A = (\Sigma, Q, q_0, \delta, acc)$ with $\Sigma = 2^{H \cup D}$.
A path in automaton $A$ is a sequence $q_1q_2\ldots \in Q^\omega$ such that, for every $i$, there exists $a_i \in \Sigma$ with $q_{i+1} \in \delta(q_i, a_i)$. A word $a_1a_2\ldots \in \Sigma^\omega$ generates a path $\pi = q_1q_2\ldots$ iff for every $i$: $q_{i+1} \in \delta(q_i, a_i)$. A path $\pi$ is accepted iff $acc(\pi)$ holds.
We define two acceptance conditions. Let $\pi \in Q^\omega$, $\text{Inf}(\pi)$ be the elements of $Q$ appearing in $\pi$ infinitely often, and $F \subseteq Q$. Then:
- Büchi acceptance: $acc(\pi)$ holds iff $\text{Inf}(\pi) \cap F \neq \emptyset$.
- co-Büchi acceptance: $acc(\pi)$ holds iff $\text{Inf}(\pi) \cap F = \emptyset$.
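For lasso-shaped paths (a finite stem followed by a repeated loop) both conditions are easy to decide, since $\text{Inf}(\pi)$ is exactly the set of states on the loop. A minimal sketch (our illustration):

```python
def buchi_accepts(stem, loop, F):
    """A lasso path stem . loop^ω satisfies the Büchi condition
    iff some state of F occurs on the loop (Inf(π) ∩ F ≠ ∅)."""
    return any(q in F for q in loop)

def cobuchi_accepts(stem, loop, F):
    """...and the co-Büchi condition iff no state of F occurs on the loop."""
    return all(q not in F for q in loop)

# Path q0 (q1 q2)^ω with F = {q2}: Büchi accepts, co-Büchi rejects.
print(buchi_accepts(["q0"], ["q1", "q2"], {"q2"}),
      cobuchi_accepts(["q0"], ["q1", "q2"], {"q2"}))
```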
We distinguish two types of word automata: universal and non-deterministic ones. A nondeterministic word automaton $A$ accepts a word from $\Sigma^\omega$ iff there exists an accepted path generated by the word that starts in an initial state. Universal word automata require all such paths to be accepted.
**Abbreviations.** NBW means nondeterministic Büchi automaton, and UCW means universal co-Büchi automaton.
2.5 Synthesis Problem
The $\text{CTL}^*$ synthesis problem is:
*Given: the set of inputs $I$, the set of outputs $O$, $\text{CTL}^*$ formula $\Phi$*
*Return: a computation tree satisfying $\Phi$, otherwise "unrealisable"*
The inputs to the problem are called a specification. A specification is realisable if the answer is a tree, and then the tree is called a model of the specification. Similarly we can define the LTL synthesis problem.
It is known [12, 16] that the $\text{CTL}^*$ and LTL synthesis problems are 2EXPTIME-complete, and any realisable specification has a regular computation tree model.
2.6 Tree Automata
This paper can be understood without a complete understanding of alternating tree automata, but since they are mentioned in several places, we define them here. Namely, below we define alternating hesitant tree automata [13], which describe $\text{CTL}^*$ formulas, similarly to how NBWs describe LTL formulas. The difference is due to the mix of $E$ and $A$ path quantifiers—hesitant tree automata have an acceptance condition that mixes Büchi and co-Büchi acceptance conditions and certain structural properties.
We start with a general case of alternating tree automata and then define alternating hesitant tree automata.
For a finite set $S$, let $\mathcal{B}^+(S)$ denote the set of all positive Boolean formulas over elements of $S$.
**Alternating Tree Automata**
An alternating tree automaton is a tuple $(\Sigma, D, Q, q_0, \delta, acc)$, where $\Sigma$ is the set of node labels, $D$ is the set of directions, $q_0 \in Q$ is the initial state, $\delta : Q \times \Sigma \to \mathcal{B}^+(D \times Q)$ is the transition relation, and $acc : Q^\omega \rightarrow \mathbb{B}$ is an acceptance condition. Note that \( \delta(q, \sigma) \neq \text{false} \) for every \((q, \sigma) \in Q \times \Sigma\), i.e., there is always a transition. Tree automata consume exhaustive trees like \((D, L = \Sigma, V = D^*, l : V \rightarrow \Sigma)\) and produce run-trees.
Fix two disjoint sets, inputs \( I \) and outputs \( O \).
**Run-tree** of an alternating tree automaton \((\Sigma = 2^O, D = 2^I, Q, q_0, \delta, acc)\) on a computation tree \((V = (2^I)^*, l : V \rightarrow 2^O)\) is a tree with directions \(2^I \times Q\), labels \( V \times Q\), nodes \( V' \subseteq (2^I \times Q)^*\), labeling function \( l'\) such that
- \( l'(\varepsilon) = (\varepsilon, q_0)\),
- if \( v \in V'\) with \( l'(v) = (n, q)\), then:
- there exists a set \(\{(d_1, q_1), ... , (d_k, q_k)\}\) that satisfies \(\delta(q, l(n))\) such that \(v \cdot (d_i, q_i) \in V'\) and \(l'(v \cdot (d_i, q_i)) = (n \cdot d_i, q_i)\) for every \(i \in [1, k]\).
Intuitively, we run the alternating tree automaton on the computation tree:
1. We mark the root node of the computation tree with the automaton initial state \(q_0\). We say that initially, in the node \(\varepsilon\), there is only one copy of the automaton and it has state \(q_0\).
2. We read the label \(l(n)\) of the current node \(n\) of the computation tree and consult the transition function \(\delta(q, l(n))\). The latter gives a set of conjuncts of atoms of the form \((d', q') \in D \times Q\). We nondeterministically choose one such conjunction \(\{(d_1, q_1), ... , (d_k, q_k)\}\) and send a copy of the alternating automaton into each direction \(d_i\) in the state \(q_i\). Note that we can send up to \(|Q|\) copies of the automaton into one direction (but in different automaton states). That is why a run-tree defined above has directions \(2^I \times Q\) rather than \(2^I\).
3. We repeat step (2) for every copy of the automaton. As a result we get a run-tree: the tree labeled with nodes of the computation tree and states of the automaton.
A run-tree is accepting iff every run-tree path starting from the root is accepting. A run-tree path \(v_1v_2...\) is accepting iff \(acc(q_1q_2...)\) holds (\(acc\) is defined later), where \(q_i\), for every \(i \in \mathbb{N}\), is the automaton state part of \(l'(v_i)\).
An alternating tree automaton \(A = (\Sigma = 2^O, D = 2^I, Q, q_0, \delta, acc)\) accepts a computation tree \((V = (2^I)^*, l : V \rightarrow 2^O)\), written \((V, l) \models A\), iff the automaton has an accepting run-tree on that computation tree. An alternating tree automaton is non-empty iff there exists a computation tree accepted by it.
Similarly, a Moore system \(M = (I, O, T, t_0, \tau, \text{out})\) is accepted by the alternating tree automaton \(A = (\Sigma = 2^O, D = 2^I, Q, q_0, \delta, acc)\), written \(M \models A\), iff \((V, l) \models A\), where \((V = (2^I)^*, l : V \rightarrow 2^O)\) is the system computation tree.
Different variations of acceptance conditions are defined the same way as for word automata.
We can define nondeterministic and universal tree automata in a way similar to word automata.
**Alternating Hesitant Tree Automata (AHT)**
An alternating hesitant tree automaton (AHT) is an alternating tree automaton \((\Sigma, D, Q, q_0, \delta, acc)\) with the following acceptance condition and structural restrictions. The restrictions reflect the fact that AHTs are tailored for CTL* formulas.
- \(Q\) can be partitioned into \(Q^N_1, ... , Q^N_k, Q^U_1, ... , Q^U_k\), where superscript \(N\) means nondeterministic and \(U\) means universal. Let \(Q^N = \bigcup Q^N_i\) and \(Q^U = \bigcup Q^U_i\). (Intuitively, nondeterministic state sets describe \(E\)-quantified subformulas of the CTL* formula, while universal — \(A\)-quantified subformulas.)
There is a partial order on \( \{ Q^N_1, \ldots, Q^N_k, Q^U_1, \ldots, Q^U_k \} \). (Intuitively, this is because state subformulas can be ordered according to their relative nesting.)
The transition function \( \delta \) satisfies: for every \( q \in Q \), \( a \in \Sigma \)
- if \( q \in Q^N_i \), then: \( \delta(q, a) \) contains only disjunctively related elements of \( Q^N_i \); every element of \( \delta(q, a) \) outside of \( Q^N_i \) belongs to a lower set;
- if \( q \in Q^U_i \), then: \( \delta(q, a) \) contains only conjunctively related elements of \( Q^U_i \); every element of \( \delta(q, a) \) outside of \( Q^U_i \) belongs to a lower set.
Finally, \( acc : Q^\omega \to \mathbb{B} \) of AHTs is defined by a set \( Acc \subseteq Q \): \( acc(\pi) \) holds for \( \pi = q_1q_2... \in Q^\omega \) iff one of the following holds.
- The sequence \( \pi \) is trapped in some \( Q^U_i \) and \( \text{Inf}(\pi) \cap (Acc \cap Q^U) = \emptyset \) (co-Büchi acceptance).
- The sequence \( \pi \) is trapped in some \( Q^N_i \) and \( \text{Inf}(\pi) \cap (Acc \cap Q^N) \neq \emptyset \) (Büchi acceptance).
An example of an alternating hesitant tree automaton is in Figure 1.
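On a lasso-shaped path, the hesitant condition can be decided by inspecting the loop, since the structural restrictions trap the cycle inside a single partition element. A sketch (our illustration; the partition and its N/U kinds are passed in explicitly):

```python
def aht_accepts(loop, partition_of, kind_of, Acc):
    """Hesitant acceptance for a lasso path with the given loop.
    `partition_of(q)` names the partition element containing state q,
    `kind_of(name)` is "N" or "U", and `Acc` is the accepting set."""
    names = {partition_of(q) for q in loop}
    if len(names) != 1:   # the loop must lie in one partition element
        return False
    hits = any(q in Acc for q in loop)   # Inf(π) ∩ Acc ≠ ∅ ?
    return hits if kind_of(names.pop()) == "N" else not hits

# Loop {q1, q2} inside a universal set, Acc = {q2}: co-Büchi rejects.
print(aht_accepts(["q1", "q2"],
                  partition_of=lambda q: "U1",
                  kind_of=lambda name: "U",
                  Acc={"q2"}))   # False
```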
### 3 Converting CTL* to LTL for Synthesis
In this section, we describe how and why we can reduce CTL* synthesis to LTL synthesis. First, we recall the standard approach to CTL* synthesis, then describe, step by step, the reduction and the correctness argument, and then discuss some properties of the reduction.
#### LTL Encoding
Let us first look at standard automata-based algorithms for CTL* synthesis [12]. When synthesising a system that realises a CTL* specification, we normally
- Turn the CTL* formula into an alternating hesitant tree automaton \( A \).
- We move from computation trees to annotated computation trees that move the (memoryless) strategy of the verifier\(^3\) into the label of the computation tree. This allows for using the derived universal co-Büchi tree automaton \( U \), making the verifier deterministic: it does not make any decisions, as they are now encoded into the system.
- We determinise \( U \) to a deterministic tree automaton \( D \).
- We play an emptiness game for \( D \).
- If the verifier wins, his winning strategy (after projection of the additional labels) defines a system; if the spoiler wins, the specification is unrealisable.
We draw from this construction and use particular properties of the alternating hesitant tree automaton \( A \): it is not a general alternating tree automaton, but an alternating hesitant one. Such an automaton is built from a mix of nondeterministic Büchi and universal co-Büchi word automata, called “existential word automata” and “universal word automata”. These universal and existential word automata start at any system state [tree node] where a universally and existentially, respectively, quantified subformula is marked as true in the annotated model [annotated computation tree]. We use the
\(^1\)In a Boolean formula, atoms \( E \) are disjunctively [conjunctively] related iff the formula can be written in DNF [CNF] in such a way that each cube [clause] has at most one element from \( E \).
\(^3\)Such a strategy maps, in each tree node, an automaton state to a next automaton state and direction.
term “existential word automata” to emphasise that the automaton is not only a non-deterministic word automaton, but it is also used in the alternating tree automaton in a way, where the verifier can pick the system [tree] path along which it has to accept.
**Example 1 (Word and tree automata).** Consider the formula $\mathsf{EG\,EX}(g \land X(g \land F \neg g))$, where the propositions consist of the single output $g$ and the single input $r$. Figure 1 shows non-deterministic word automata for the subformulas, and the alternating (actually, nondeterministic) tree automaton for the whole formula. In what follows, we work mostly with word automata.
We are going to show, step by step, how and why we can reduce $\mathsf{CTL}^*$-synthesis to LTL synthesis. The steps are outlined in Figure 2.
**Step A (the starting point).** The verifier takes as input: a computation tree, universal and existential word automata for $\mathsf{CTL}^*$ subformulas, and the top-level proposition corresponding to the whole $\mathsf{CTL}^*$ formula. It has to produce an accepting run tree (if the computation tree satisfies the formula).
**Step B.** Given a computation tree, the verifier maps each tree node to an (universal or existential word) automaton state, and moves from a node according to the quantification of the automaton (either in all tree directions or in one direction). The decision, in which tree direction to move and which automaton state to pick for the successor node, constitutes the strategy of the verifier. Each time the verifier has to move in several tree directions (this happens when the node is annotated with a universal word automaton state), we spawn a new version of the verifier, for each tree direction and transition of the universal word automaton.
The strategy of the verifier is a mapping of states of the existential word automata to a decision, which consists of a tree direction (the continuation of the tree path along which the automaton shall accept) and an automaton successor state transition. This is a mapping $\mathsf{dec} : Q \to 2^I \times Q$ such that $\mathsf{dec}(q) = (e, q')$ implies that $q' \in \delta(q, (l(n), e))$, where $\delta$ corresponds to the existential word automaton.
Figure 2: Steps in the proof of reduction of CTL* synthesis to LTL synthesis. The figure's boxes summarise the steps: (A) the verifier takes a computation tree, universal and existential word automata, and the top-level proposition, which together encode a given CTL* formula, and produces an accepting run tree (if the computation tree satisfies the formula); (B) we encode the verifier decisions into annotated computation trees, making the verifier deterministic (Figure 3b shows such an annotated computation tree); (C) the new annotation is a re-phrasing of the previous one (Figure 4 gives an example); (D) we keep directions in the annotation but remove next-states—now the verifier has to choose (Figure 5 gives an example); (E) now the obligation of the verifier can be stated in LTL (or using universal co-Büchi word automata).
Example 2. Figure 3 shows an annotated model and computation tree.
Step C. The verifier strategy (encoded in the annotated computation tree) encodes both the words on which the nondeterministic automata are interpreted and the witnesses of acceptance (accepting automata paths on those words). For the encoding in LTL that we will later use, it is enough to map out the automaton word, and replace the witnesses by what they actually mean: that the automaton word satisfies the respective path formula.
Example 3. In Figure 3b, the verifier strategy in the root node maps out the word $(\bar{g}, p_{\text{EX}}, r)(\bar{g}, p_{\text{EX}}, \bar{r})^\omega$
\[4\] The verifier, when in the tree node or system state, moves according to this strategy.
on which the NBW in Figure 1b is run, and the witness of acceptance \( (q_0')^\omega \). The blue path encodes the word \((\bar{g}, r)(g, r)(g, r)(\bar{g}, r)^\omega\) and the witness \(q_0q_1q_2q_3(q_4)^\omega\) for the NBW in Figure 1a. In total, we can see 5 tree paths that are mapped out by the annotated computation tree.
To map out the word, we look at the set of tree paths that are mapped out in an annotated computation tree and define equivalence classes on them. Two tree paths are equivalent if they share a tail (or, equivalently, if one is the tail of the other).
There is a simple sufficient condition for two mapped out tree paths to be equivalent: if they pass through the same node of the annotated computation tree in the same automaton state, then they have the same future, and are therefore equivalent.
**Example 4.** In Figure 3b the blue and pink paths are equivalent, since they share the tail. The sufficient condition fires in the top node, where the tree paths meet in automaton state \(q_3\).
The sufficient condition implies that we cannot have more non-equivalent tree paths passing through a tree node than there are states in all existential word automata; let us call this number \(k\). For each tree node, we assign unique numbers from \(\{1, \ldots, k\}\) to equivalence classes, and thus any two non-equivalent tree paths that go through the same tree node have different numbers. As this is an intermediate step in our translation, we are wasteful with the labeling:
1. we map existential word automata states to numbers (IDs) using a label \(id: Q \rightarrow \{1, \ldots, k\}\), we choose the direction \(d: \{1, \ldots, k\} \rightarrow 2^I\) to take, and choose the successor state, \(succ: Q \rightarrow Q\), such that \(succ(q) \in \delta \left(q, (l(n), d(id(q)))\right)\), where \(l(n)\) is the label of the current node \(n\), and
2. we maintain the same state ID along the chosen direction: \(id(q) = id(succ(q))\).
Note that (1) alone can be viewed as a re-phrasing of the labeling \(dec\) that we had before in Step B. The requirement (2) is satisfiable, because a tree path maintains its equivalence class. Therefore any annotated computation tree can be re-labeled! This step is shown in Figure 2c; the labels are: \((out: O \rightarrow \mathbb{B}, p: F \rightarrow \mathbb{B}, id: Q \rightarrow \{1, \ldots, k\}, d: \{1, \ldots, k\} \rightarrow 2^I, succ: Q \rightarrow Q)\).
**Example 5.** A re-labeled computation tree is in Figure 4.
**Step D.** In the new annotation with labels \((out, p, id, d, succ)\), labeling \(d\) alone maps out the tree path for each ID. The remainder of the information is mainly there to establish that the corresponding word is accepted by the respective word automaton (equivalently: satisfies the respective path formula). If we use only \(d\), then the only missing information is where the path starts and which path formula it belongs to—the information originally encoded by \(p\).
We address these two points by using numbered computation trees. Recall that the annotated computation trees have a propositional labeling \(p: F \rightarrow \mathbb{B}\) that labels nodes with subformulas. In the numbered computation trees, we replace \(p\) for existential subformulas \(F_{exist} \subseteq F\) by labeling \(v: F_{exist} \rightarrow \{0, \ldots, k\}\), where, for an existentially quantified formula \(E\varphi \in F_{exist}\) and a tree node \(n\):
- \(v_{E\varphi}(n) = 0\) encodes that no claim that \(E\varphi\) holds is made (similar to the proposition \(p_{E\varphi}\) being “false” in the annotated tree), whereas
- a value \(v_{E\varphi}(n) \in \{1, \ldots, k\}\) is interpreted similarly to the proposition \(p_{E\varphi}\) being “true”, but also requires that a witness for \(E\varphi\) is encoded on the tree path that starts in \(n\) and follows the \(v_{E\varphi}(n)\)-numbered directions.
---
5The condition is sufficient but not necessary. Recall that each mapped out tree path corresponds to at least one copy of the verifier that ensures the path is accepting. When two verifiers go along the same tree path, it can be annotated with different automata states (for example, corresponding to different automata). Then such paths do not satisfy the sufficient condition, although they are trivially equivalent.
Figure 4: A re-labeled computation tree. Notation “$q_0 \mapsto (1,q_1)$” means $id(q_0) = 1$ and $succ(q_0) = q_1$, and “$1 \mapsto \{r\}$” means $d$ maps 1 to $\{r\}$. Since the blue and pink paths are equivalent, the label $id$ maps the corresponding automata states in the nodes to the same number, 1. The IDs of the green and yellow paths differ implying that they are not equivalent and hence do not share the tail (their tails cannot be seen in the figure).
Example 6. The tree in Figure 4 becomes a numbered computation tree if we replace the propositional labels $p_{EX}$ and $p_{EG}$ with ID numbers as follows. The root node has $v_{EX} = 1$ and $v_{EG} = 4$, the left child has $v_{EX} = 1$, the left-left child has $v_{EX} = 2$, the left-left-left child has $v_{EX} = 3$. Note that $id(q_0) = v_{EX}$ and $id(q_0') = v_{EG}$ whenever those $v$s are non-zero. The nodes outside of the dashed path have $v_{EX} = v_{EG} = 0$, meaning that no claims about the satisfaction of the path formulas have to be witnessed there.
Initially, we use the ID labeling $v$ in addition to $\langle out, id, d, succ, p^{univ} \rangle$, where $p^{univ}$ is the restriction of $p$ to $F_{univ}$; then there is no relevant change in the way the (deterministic) verifier works. I.e., a numbered computation tree can be turned into an annotated computation tree, and vice versa, such that the numbered tree is accepted iff the annotated tree is accepted.
Now we observe that the labelings $id$ and $succ$ are used only to witness that each word mapped out by $d$ is accepted by the respective existential word automata. I.e., $id$ and $succ$ make the verifier deterministic. Let us remove $id$ and $succ$ from the labeling. We call such trees lean-numbered computation trees; they have labeling $\langle out : O \rightarrow \mathbb{B}, v : F_{exist} \rightarrow \{0,\ldots,k\}, d : \{1,\ldots,k\} \rightarrow 2^I, p^{univ} : F_{univ} \rightarrow \mathbb{B} \rangle$. This makes the verifier nondeterministic. We still have the property: every accepting annotated computation tree can be turned into an accepting lean-numbered computation tree, and vice versa. This step is shown in Figure 2d; an
example of a lean-numbered computation tree is in Figure 5.
**Step E (the final step).** We show how labeling \((\text{out}, v, d, p^{\text{univ}})\) allows for using LTL formulas instead of directly using automata for the acceptance check. The encoding into LTL is as follows.
- For each existentially quantified formula \(E\varphi\), we introduce the following LTL formula (recall that \(v_{E\varphi} = 0\) encodes that we do not claim that \(E\varphi\) holds in the current tree node, and \(v_{E\varphi} \neq 0\) means that \(E\varphi\) does hold and \(\varphi\) holds if we follow \(v_{E\varphi}\)-numbered directions):
\[
\bigwedge_{j \in \{1,\ldots,k\}} G \left[ v_{E\varphi} = j \rightarrow \left( G d_j \rightarrow \varphi' \right) \right], \tag{1}
\]
where \(\varphi'\) is obtained from \(\varphi\) by replacing the subformulas of the form \(E\psi\) by \(v_{E\psi} \neq 0\) and the subformulas of the form \(A\psi\) by \(p_{A\psi}\).
- For each subformula of the form \(A\varphi\), we simply take
\[
G \left[ p_{A\varphi} \rightarrow \varphi' \right], \tag{2}
\]
where \(\varphi'\) is obtained from \(\varphi\) as before.
- Finally, the overall LTL formula is the conjunction
\[
\Phi' \land \bigwedge_{E\varphi \in F_{exist}} \text{Eq. 1} \land \bigwedge_{A\varphi \in F_{univ}} \text{Eq. 2}, \tag{3}
\]
where the Boolean formula \(\Phi'\) is obtained by replacing in the original CTL* formula every \(E\varphi\) by \(v_{E\varphi} \neq 0\) and every \(A\varphi\) by \(p_{A\varphi}\).
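The assembly of Eqs. 1–3 is purely syntactic, so the converter can be sketched in a few lines. The following is our simplified illustration (not the actual ctl_to_ltl.py converter mentioned in Section 5); it assumes the subformulas are already given as strings with their state subformulas substituted, i.e., as \(\varphi'\):

```python
def encode(phi_top, exist, univ, k):
    """Assemble the LTL conjunction of Eq. 3.
    `phi_top` -- Φ': the CTL* formula with Eφ replaced by 'v != 0'
                 and Aφ by the proposition p;
    `exist`   -- dict: v-proposition name -> φ' of the E-subformula;
    `univ`    -- dict: p-proposition name -> φ' of the A-subformula;
    `k`       -- number of witness IDs (d1 ... dk are the directions)."""
    conjuncts = [phi_top]
    for v, phi in exist.items():                      # Eq. 1
        conjuncts += [f"G(({v} = {j}) -> (G d{j} -> ({phi})))"
                      for j in range(1, k + 1)]
    for p, phi in univ.items():                       # Eq. 2
        conjuncts.append(f"G({p} -> ({phi}))")
    return " && ".join(conjuncts)

# The shape of Example 7 below, with k = 5 (names are illustrative):
spec = encode(
    phi_top="v_EGng != 0 && p_AG && v_EFg != 0",
    exist={"v_EGng": "G !g", "v_EFng": "F !g", "v_EFg": "F g"},
    univ={"p_AG": "G (v_EFng != 0)"},
    k=5)
```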
**Example 7.** Let \(I = \{r\}, O = \{g\}\). Consider the CTL formula
\[
EG \neg g \land AGEF \neg g \land EFg.
\]
The sum of the numbers of states of the individual NBWs is 5 (assuming the natural translations), so we introduce integer propositions \(v_{EG\neg g}\), \(v_{EF\neg g}\), \(v_{EFg}\), ranging over \(\{0,\ldots,5\}\), and five Boolean propositions \(d_1, \ldots, d_5\); we also introduce the Boolean proposition \(p_{AG(v_{EF\neg g} \neq 0)}\). The LTL formula is:
\[
\begin{aligned}
& v_{EG\neg g} \neq 0 \;\land\; p_{AG(v_{EF\neg g} \neq 0)} \;\land\; v_{EFg} \neq 0 \;\land\\
& G \left[ p_{AG(v_{EF\neg g} \neq 0)} \rightarrow G(v_{EF\neg g} \neq 0) \right] \;\land\\
& \bigwedge_{j \in \{1,\ldots,5\}} G \left[ v_{EG\neg g} = j \rightarrow (G d_j \rightarrow G \neg g) \right] \;\land\\
& \bigwedge_{j \in \{1,\ldots,5\}} G \left[ v_{EF\neg g} = j \rightarrow (G d_j \rightarrow F \neg g) \right] \;\land\\
& \bigwedge_{j \in \{1,\ldots,5\}} G \left[ v_{EFg} = j \rightarrow (G d_j \rightarrow F g) \right]
\end{aligned}
\]
Figure 6 shows a model satisfying the LTL specification.
Note that we can avoid introducing propositions for universally quantified subformulas \(F_{\text{univ}}\): whenever such a proposition appears in \(\varphi'\) in Eq. 1 or in \(\Phi'\) in Eq. 3, replace it with the subformula \(\varphi''\) which it describes.
The whole discussion leads us to the theorem.
**Theorem 1.** Let \(I\) be the set of inputs and \(O\) be the set of outputs, and \(\Phi_{\text{LTL}}\) be derived from a given \(\Phi_{\text{CTL}^*}\) as described above. Then:
\[
\Phi_{\text{CTL}^*} \text{ is realisable } \iff \Phi_{\text{LTL}} \text{ is realisable.}
\]
Figure 6: A Moore machine for Example 7. The witness for \( EG \neg g \): since \( v_{EG\neg g}(t_0) = 2 \), we move along \( d_2 = \neg r \), looping in \( t_0 \); thus the witness is \( (t_0)^\omega \). The witness for \( EF g \): since \( v_{EFg}(t_0) = 3 \), we move along \( d_3 = r \) from \( t_0 \) to \( t_1 \), where \( d_3 \) is not restricted, so let \( d_3 = \neg r \); then the witness is \( t_0(t_1)^\omega \). The satisfaction of \( AGEF \neg g \) means that every state has \( v_{EF\neg g} \neq 0 \), which is true. In \( t_0 \) we have \( \neg g \), so \( EF \neg g \) is satisfied; for \( t_1 \) we have \( v_{EF\neg g}(t_1) = 2 \), hence we move \( t_1 \stackrel{r}{\rightarrow} t_0 \) and \( EF \neg g \) is also satisfied.
Complexity
The translated LTL formula \( \Phi_{\text{LTL}} \), due to Eq. 1, can in the worst case be exponentially larger than \( \Phi_{\text{CTL}^*} \): \( |\Phi_{\text{LTL}}| \approx 2^{|\Phi_{\text{CTL}^*}|} \). Yet the upper bound on the size of \( UCW_{\Phi_{\text{LTL}}} \) is \( 2^{|\Phi_{\text{CTL}^*}|} \) rather than \( 2^{|\Phi_{\text{LTL}}|} \), because:
- the size of the UCW is additive in the size of the UCWs of the individual conjuncts, and
- each conjunct UCW has almost the same size as a UCW of the corresponding subformula, since, for every LTL formula \( \varphi \), \( |UCW_{G[p \rightarrow (Gd \rightarrow \varphi)]}| = |UCW_{\varphi}| + 1 \).\(^6\)
Determinising \( UCW_{\Phi_{\text{LTL}}} \) gives a parity game with up to \( 2^{|\Phi_{\text{LTL}}|} \) states and \( 2^{|\Phi_{\text{CTL}^*}|} \) priorities \([19, 14, 18]\). The recent quasipolynomial algorithm \([3]\) for solving parity games has a particular case for \( n \) states and \( \log(n) \) many priorities, where the time cost is polynomial in the number of game states. This gives a solution to the derived LTL synthesis problem whose running time is doubly exponential in \( |\Phi_{\text{CTL}^*}| \). The lower bound comes from the \( 2\text{EXPTIME} \)-completeness of the \( \text{CTL}^* \) synthesis problem \([17]\).
**Theorem 2.** Our solution to the \( \text{CTL}^* \) synthesis problem via the reduction to LTL synthesis is \( 2\text{EXPTIME} \)-complete.
Minimality
Although the reduction to LTL synthesis preserves the complexity class, it does not preserve the minimality of the models. Consider an existentially quantified formula \( E \varphi \). A system path satisfying the formula may pass through the same system state more than once and exit it in different directions.\(^7\) Our encoding forbids that.\(^8\) I.e., in any system satisfying the derived LTL formula, a system path mapped out by an ID has a unique outgoing direction from every visited state. As a consequence, such systems are less concise. This is illustrated in the following example.
**Example 8 (Non-minimality).** Let \( I = \{ r \} \), \( O = \{ g \} \), and consider the \( \text{CTL}^* \) formula
\[
\text{EX}(g \land X(g \land F \neg g))
\]
\(^6\)To see this, recall that we can get \( UCW_{\varphi} \) by treating \( NBW_{\neg \varphi} \) as a UCW, and notice that \( |NBW_{F(p \land Gd \land \neg \varphi)}| = |NBW_{\neg\varphi}| + 1 \).
\(^7\)E.g., in Figure 3a the system path \( t_0t_1t_1(t_0)^\omega \), satisfying \( \text{EX}(g \land X(g \land F \neg g)) \), double-visits state \( t_1 \) and exits it first in direction \( r \) and then in \( \neg r \), where \( t_0 \) is the system state on the left and \( t_1 \) is on the right.
\(^8\)Recall that with \( E \varphi \) we associate a number \( v_{\varphi} \), such that whenever in a system state \( v_{\varphi} \) is non-zero, the path mapped out by the \( v_{\varphi} \)-numbered directions satisfies the path formula \( \varphi \). Therefore, whenever the \( v_{\varphi} \)-numbered path visits a system state, it exits it in the same direction \( d_{v_{\varphi}} \).
The NBW automaton for the path formula has 5 states (Figure 1a), so we introduce integer proposition \( v \) ranging over \( \{0, \ldots, 5\} \) and Boolean propositions \( d_1, d_2, d_3, d_4, d_5 \). The LTL formula is
\[
v \neq 0 \land \bigwedge_{j \in \{1 \ldots 5\}} G [v = j \rightarrow (G d_j \rightarrow X (g \land X (g \land F \neg g)))]
\]
A smallest system for this LTL formula is in Figure 7. It is of size 3, while a smallest system for the original CTL* formula is of size 2 (Figure 3a).
Bounded reduction
While we have realisability equivalence for a sufficiently large \( k \), \( k \) is a parameter, and a much smaller \( k \) might suffice. In the spirit of bounded synthesis, it is possible to use smaller parameters in the hope of finding a model. These models might be of interest in that they guarantee a limited entanglement of different tree paths, as they cap the number of tails of tree paths that go through the same node of a computation tree. Such models are therefore simple in some formal sense, and this sense is independent of the representation by an automaton. (As opposed to the lower bound of a sufficiently high number \( k \), for which we have explicitly used the representation by an automaton.)
4 Checking Unrealisability of CTL*
What does a witness of unrealisability for CTL* look like? I.e., when a formula is unrealisable, is there an “environment model”, like in the LTL case, which disproves any system model?
The LTL formula and the annotation shed light on this: the model for the dualised case is a strategy to choose original inputs (depending on the history of \( v, d, p \), and original outputs), such that any path in the resulting tree violates the original LTL formula. I.e., the spoiler strategy is a tree, whose nodes are labeled with original inputs, and whose directions are defined by \( v, d, p \), and original outputs.
Example 9. Consider an unrealisable CTL* specification: \( AG g \land EFX \neg g \), inputs \( \{r\} \), outputs \( \{g\} \). After the reduction to LTL we get the specification: inputs \( \{r\} \), outputs \( \{g, p_{AGg}, v_{EFX\neg g}, d_1, d_2\} \), and the LTL formula
\[
p_{AGg} \land v_{EFX\neg g} \neq 0 \land G [p_{AGg} \rightarrow G g] \land \bigwedge_{j \in \{1,2\}} G [(v_{EFX\neg g} = j \land G d_j) \rightarrow FX \neg g].
\]
The dual specification is: the system type is Mealy, new inputs \( \{g, p_{AGg}, v_{EFX\neg g}, d_1, d_2\} \), new outputs \( \{r\} \),...
and the LTL formula is the negated original LTL:
$$p_{AGg} \land v_{EFX\neg g} \neq 0 \land G[p_{AGg} \rightarrow Gg] \rightarrow \bigvee_{j \in \{1,2\}} F[(v_{EFX\neg g} = j \land Gd_j) \land GXg].$$
This dual specification is realisable, and it exhibits e.g. the following witness of unrealisability: the output $r$ follows $d_1$ or $d_2$ depending on the input $v_{EFX\neg g}$. (The new system needs two states. State 1 describes “I’ve seen $v_{EFX\neg g} \in \{0,1\}$ and I output $r$ equal to $d_1$”; from state 1 we irrevocably go into state 2 once $v_{EFX\neg g} = 2$, and there we make $r$ equal to $d_2$.)
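Mechanically, the dualisation itself is simple once the specification is in LTL form: swap inputs and outputs (turning the Moore system into a Mealy one) and negate the formula. A hypothetical sketch (the `Spec` type and field names are ours):

```python
from dataclasses import dataclass

@dataclass
class Spec:
    inputs: list
    outputs: list
    formula: str       # LTL, as produced by the reduction
    moore: bool = True

def dualise(s):
    """The environment becomes the new system: it reads the old outputs,
    chooses the old inputs, and tries to enforce the negated formula."""
    return Spec(inputs=s.outputs, outputs=s.inputs,
                formula=f"!({s.formula})", moore=not s.moore)
```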
Although our encoding allows for checking unrealisability of CTL* (via dualising the converted LTL specification), this approach suffers from a very high complexity. Recall that the LTL formula can become exponential in the size of a CTL* formula, which could be handled only because it became a big conjunction with sufficiently small conjuncts. After negating, it becomes a large disjunction, which makes the corresponding UCW doubly exponential in the size of the initial CTL* specification (vs. single exponential for the non-negated case). This seems—there may be a more clever analysis of the formula structure—to make the unrealisability check via reduction to LTL cost three exponents in the worst case (vs. 2EXPTIME by the standard approach).
What one could try is to let the new system player in the dualised game choose a number of disjunctive formulas to follow, and allow it to revoke the choice finitely many times. This is conservative: if following $m$ different disjuncts in the dualised formula is enough to win, then the new system wins. Also, parts of the disjunction might work well (“delta-debugging”); this could then be handled precisely.
5 Experiments
We implemented the CTL* to LTL converter `ctl_to_ltl.py` inside PARTY [11]. PARTY also has two implementations of the bounded synthesis approach [8], one encodes the problem into SMT and another reduces the problem to safety games. Also, PARTY has a CTL* synthesiser [10] based on the bounded synthesis idea that encodes the problem into SMT. In this section we compare those three solvers, where the first two solvers take LTL formulas produced by our converter. All logs and the code are available in repository https://github.com/5nizza/party-elli, the branch “cav17”. The results are in Table 1, let us analyse them.
**Specifications.** We created realisable arbiter-like CTL* specifications. The number after the specification name indicates the number of clients. All specifications have LTL properties in the spirit of “every request must eventually be granted” and the mutual exclusion of the grants. Also:
- “res_arbiter” has the resettability property $AGEF(\bigwedge_i \neg g_i)$;
- “loop_arbiter” in addition has the looping property $\bigwedge_i EFG g_i$;
- “postp_arbiter” has the CTL* property $\bigwedge_i AGEF(\neg g_i \land r_i \land X(\neg g_i \land r_i \land X \neg g_i))$;
- “prio_arbiter” prioritizes requests from one client (this is expressed in LTL), and has the resettability property;
- “user_arbiter” contains only existential properties that specify different sequences of requests and grants.
**LTL formula and automata sizes.** The LTL formula increases $\approx |Q|$ times when $k$ increases from 1 to $|Q|$, just as described by Eq. 1. But this increase does not incur an exponential blow-up of the UCWs: they also increase only $\approx |Q|$ times.
Table 1: Comparison of different synthesis approaches for $CTL^*$ specifications. All specifications are realisable. $|CTL^*|$ is the size of the non-reduced AST of the $CTL^*$ formula, $|LTL|$ — similarly, but it has two numbers: when the parameter $k$ is set to 1 ($k$ is the number of witness IDs), and when $k$ is the upper bound (the number of existential states). $|AHT|$ is the sum of the number of automata states for all subformulas. $|UCW|$ is the number of states in the UCW of the translated LTL formula: we show two numbers, when $k$ is set to 1 and when it is the upper bound. Timings are in seconds, the timeout is 3 hours (denoted “$to$”). “Time $CTL^*$” is the synthesis time and [model size] required for $CTL^*$ synthesizer star.py, “time LTL(SMT)” — for synthesizer elli.py which implements the original bounded synthesis for LTL via SMT [8], “time LTL(game)” — for synthesizer kid.py which implements the original bounded synthesis for LTL via reduction to safety games [8]. Both “time LTL” columns have two numbers: when $k$ is set to the minimal value for which the LTL is realisable, and when $k$ is set to the upper bound. The subscript near the number indicates the value of $k$: e.g. $to_8$ means the timeout on all values of $k$ from 1 to $Q = 8$; $to_{12(3)}$ means there was the timeout for $k = |Q| = 12$ and the last non-timeout was for $k = 3$; $20_1$ means 20 seconds and the minimal $k$ is 1. The running commands were: “elli.py --incr spec”, “star.py --incr spec”, “kid.py spec”.
| $|CTL^*|$ | $|LTL|$ ($k_1:k_2$) | $|AHT|$ | $|UCW|$ ($k_1:k_2$) | time $CTL^*$ | time LTL(SMT) ($k_{min,k_2}$) | time LTL(game) ($k_{min,k_2}$) |
|-------|-----------------|------|-----------------|----------|-----------------|------------------|
| 12 | 109, 168 | 10 | 8, 10 | 7380 [7] | $to_{1}$ | $30_1: 60_2$ |
| 12 | 105, 682 | 12 | 11, 41 | 2 [4] | $20_1: 131_6$ | $183_1: to_{5(3)}$ |
| 12 | 80, 183 | 15 | 14, 70 | $6360 [7]$ | $to_5$ | $to_8$ |
| 12 | 113, 2097 | 19 | 15, 114 | 3 [4] | $2_1: 1735_{12}$| $20_1: to_{12(3)}$ |
| 12 | 162, 276 | 24 | 19, 10 | $2920 [5]$ | $60_1: to_{16(5)}$ | $70_1: to_{16(2)}$ |
| 12 | 82, 92 | 13 | 14, 16 | 60 [5] | $141_1: 19_2$ | $91_1: 17_2$ |
| 12 | 117, 125 | 15 | 16, 18 | $to$ | $4318_1: to_2$ | $26_1: 50_2$ |
| 12 | 99, 190 | 23 | 25, $to$ | 3 [5] | $1855_1: to_{16}$| $to_{16}$ |
Synthesis time. The game-based LTL synthesiser is the fastest in half of the cases, but struggles to find a model when $k$ is large. The LTL part of specifications “res_arbiter” and “prio_arbiter” is known to be simpler for game-based synthesisers than for SMT-based ones—adding the simple resettability property does not change this.
Model sizes. In none of the cases did the reduction increase the model size.
6 Conclusion
We presented a reduction of the $CTL^*$ synthesis problem to the LTL synthesis problem. The reduction preserves the worst-case complexity of the synthesis problem, although possibly at the cost of larger systems. The reduction allows the designer to write $CTL^*$ specifications even when she has only an LTL synthesiser at hand. We experimentally showed—on a small set of specifications—that the reduction is practical when the number of existentially quantified formulas is small.
We briefly discussed how to handle unrealisable $CTL^*$ specifications. Whether our suggestions are practical on typical specifications is still an open question. A possible future direction is to develop a similar reduction for logics like ATL* [1], and to look into the problem of satisfiability of $CTL^*$ [7].
Acknowledgements. This work was supported by the Austrian Science Fund (FWF) under the RiSE National Research Network (S11406), and by the EPSRC through grant EP/M027287/1 (Energy Efficient Control). We thank SYNT organisers for providing the opportunity to improve the paper, and reviewers for their patience.
References
Devil: An IDL for Hardware Programming
Fabrice Mérillon, Laurent Réveillère, Charles Consel, Renaud Marlet, Gilles Muller
Compose Group, IRISA / INRIA, University of Rennes I
Campus Universitaire de Beaulieu, F-35042 Rennes Cedex, France
E-mail: {merillon,lreveill,consel,marlet,muller}@irisa.fr
Abstract
To keep up with the frantic pace at which devices come out, drivers need to be quickly developed, debugged and tested. Although a driver is a critical system component, the driver development process has made little (if any) progress. The situation is particularly disastrous when considering the hardware operating code (i.e., the layer interacting with the device). Writing this code often relies on inaccurate or incomplete device documentation and involves assembly-level operations. As a result, hardware operating code is tedious to write, prone to errors, and hard to debug and maintain.
This paper presents a new approach to developing hardware operating code based on an Interface Definition Language (IDL) for hardware functionalities, named Devil. This IDL allows a high-level definition of the communication with a device. A compiler automatically checks the consistency of a Devil definition and generates efficient low-level code.
Because the Devil compiler checks safety critical properties, the long-awaited notion of robustness for hardware operating code is made possible. Finally, the wide variety of devices that we have already specified (mouse, sound, DMA, interrupt, Ethernet, video, and IDE disk controllers) demonstrates the expressiveness of the Devil language.
1 Introduction
A device driver is a key system component that makes hardware innovation available to end users. Device drivers are critical both in general-purpose computers and in the fast-evolving domain of appliances. If driver development falls behind, product competitiveness can be compromised. If a device driver is faulty, a hardware innovation may turn into a disaster instead of improving competitiveness.
Still, ever since the first device drivers have been written, their development process has made little (if any) progress. This situation has particularly disastrous effects when considering hardware operating code (i.e., code communicating with the hardware). This layer of code is well-known to be low level and error prone.
Hardware operating code is low level because it consists of many bit operations. Indeed, we have found that bit operations can represent up to 30% of driver code. Such low-level programming is obviously prone to errors and requires tedious debugging. In fact, advances in programming languages have had no impact on the development of hardware operating code: there is no syntactic support for low-level operations, there is no verification support to identify incorrect usage of these operations, and there is no tool support to facilitate debugging.
Additionally, hardware documentation typically contains imprecise or inaccurate information. Therefore, writing hardware operating code typically involves laboriously searching for obscure incantations aimed at performing specific operations on the device. Not only can this sometimes cause unexpected behavior, but it also makes re-use of hardware operating code difficult.
Finally, there are no recognized methodologies for structuring device drivers. Even worse, a driver is often written by modifying an existing one. As a result, the code quickly becomes tangled, which makes debugging and maintenance complex.
Our proposal
This paper describes a new approach to developing the hardware operating layer of a driver. Our approach allows drivers to be written in a high-level language, allows important safety properties to be checked, and allows low-level code to be automatically generated.
We introduce an Interface Definition Language (IDL) to describe hardware functionalities, named Devil. IDLs are extensively used in modern OSes, either to hide heterogeneity and intricacies of message construction in distributed systems [3, 13], or to glue together components in modular operating systems [2, 9, 10]. Just as RPC IDLs conventionally define operations and their input/output types, Devil specifies the functional interface of the device. To do so, it provides the programmer with abstractions and syntactic constructs that are specific to describing devices. From a Devil specification, a compiler automatically generates stubs containing low-level code to operate the device. Furthermore, verification tools enable critical safety properties to be checked at compile time, and at run time if necessary.
Just as an IDL typically allows code to be re-used, a Devil specification can be re-used in different contexts (e.g., various operating systems). More generally, our vision is that Devil specifications either should be written by device vendors or should be widely available as public domain libraries in order to ease driver development.
Our contributions are as follows.
- We have designed and implemented an IDL for devices. This language is an alternative to assembly-language-like programming of devices.
- We propose tools to verify critical safety properties of hardware operating code. These tools enable us to provide the long-awaited notion of robustness for device drivers.
- We present a comparison between Devil specifications and existing driver code. This comparison is based on experimental data which demonstrate that a Devil specification is up to 5.9 times less prone to errors than C code, with almost no loss in performance.
The rest of this paper is organized as follows. Section 2 presents the Devil language. Section 3 describes the safety properties that can be verified both statically on Devil specifications and dynamically by the generated interface. Section 4 assesses the benefits of our approach by comparing hand-crafted drivers with equivalent ones written using Devil. Section 5 describes related work. Section 6 concludes and suggests future work.
2 Devil
Devil is an IDL for specifying the functional interface of a device. To design Devil, we have studied a wide spectrum of devices and their corresponding drivers, mainly from Linux sources: Ethernet, video, sound, disk, interrupt, DMA and mouse controllers. This study was supported by literature about driver development [7, 16], device documentation available on the web, and discussions with device driver experts for Windows, Linux and embedded operating systems. Devil has proved expressive enough to describe even devices having a contorted interface such as the Crystal CS4236B sound controller.
Concretely, a device can be described by three layers of abstraction: ports, registers, and device variables. The entry point of a Devil specification is the declaration of a device, parameterized by ports or ranges of ports, which abstract physical addresses.
device logitech_busmouse (base : bit[8] port @ {0..3}) 1
{
// Signature register (SR)
register sig_reg = base @ 1 : bit[8]; 4
variable signature = sig_reg, volatile, write trigger : int(8); 5
// Configuration register (CR)
register cr = write base @ 3, mask ‘1001000.’ : bit[8]; 8
variable config = cr[0] : { CONFIGURATION => ‘1’, DEFAULT_MODE => ‘0’ }; 9
// Interrupt register
register interrupt_reg = write base @ 2, mask ‘000.0000’ : bit[8]; 12
// Index register
register index_reg = write base @ 2, mask ‘1..00000’ : bit[8]; 16
private variable index = index_reg[6..5] : int(2); 17
register x_low = read base @ 0, pre {index = 0}, mask ‘****....’ : bit[8]; 19
register x_high = read base @ 0, pre {index = 1}, mask ‘****....’ : bit[8]; 20
register y_low = read base @ 0, pre {index = 2}, mask ‘****....’ : bit[8]; 21
register y_high = read base @ 0, pre {index = 3}, mask ‘...*....’ : bit[8]; 22
structure mouse_state = { 24
variable dx = x_high[3..0] # x_low[3..0], volatile : signed int(8); 25
variable dy = y_high[3..0] # y_low[3..0], volatile : signed int(8); 26
variable buttons = y_high[7..5], volatile : int(3); 27
}
}
Figure 1: Logitech Busmouse Specification
Ports then allow device registers to be declared; these define the granularity of interactions with the device. Finally, device variables are defined from registers, forming the functional interface to the device.
These three layers of abstraction are illustrated by the following fragment of the Devil description of the Logitech Busmouse controller (see Figure 1 for a complete description).
device logitech_busmouse( base : bit[8] port@{0..3})
{
register sig_reg = base @ 1 : bit[8];
variable signature = sig_reg, ... : int(8);
... }
The logitech_busmouse declaration is parameterized by a range of ports specified as the main address base and a range of offsets (from 0 to 3). An eight-bit register sig_reg is declared at port base, offset by 1. Finally, the device variable signature is the interpretation of this register as an eight-bit unsigned integer. This fragment declares a device whose functional interface consists of a single device variable (signature). Only device variables are visible from outside a Devil description; ports and registers are hidden. In fact, for each variable the Devil compiler generates two C stubs that allow the variable to be read or written by emitting the proper I/O operations.
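To make the generated interface concrete, here is a minimal C sketch of what the two stubs for this fragment could look like. The stub names, the port-I/O helpers and the base address are our own illustrative assumptions; the paper does not list the generated code itself.

```c
#include <stdint.h>

/* Mock port I/O so the sketch is self-contained; a real driver would use
   the platform's port or memory-mapped I/O primitives instead. */
static uint8_t io_space[1024];
static uint8_t io_read8(uint16_t port)             { return io_space[port]; }
static void    io_write8(uint16_t port, uint8_t v) { io_space[port] = v; }

#define BASE 0x23C  /* hypothetical base address of the Busmouse */

/* Read stub for 'variable signature = sig_reg : int(8)' (sig_reg = base@1). */
static inline uint8_t devil_get_signature(void) {
    return io_read8(BASE + 1);
}

/* Write stub for 'variable config = cr[0]' (cr = write base@3,
   mask '1001000.'): the fixed mask bits are forced on every write. */
static inline void devil_set_config(uint8_t mode /* CONFIGURATION=1, DEFAULT_MODE=0 */) {
    io_write8(BASE + 3, (uint8_t)(0x90 | (mode & 0x1)));  /* 0x90 = fixed bits 1001000x */
}
```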
In the rest of this section, we first describe the basic Devil constructs, and then present advanced Devil features that allow the description of devices with contorted addressing modes.
2.1 Basic Devil
Ports, registers, and device variables are the basic layers of abstraction that describe the interface of a device. We now present their usage by describing in detail the Devil specification of the Logitech Busmouse (see Figure 1), and a fragment of the NE2000 Ethernet controller.
Ports. The port abstraction is at the basis of the communication with the device. A port hides the fact that, depending on how the device is mapped, it can be operated via either I/O or memory operations. A device often has several communication points whose addresses are derived from one or more base addresses. Therefore, the port constructor, denoted by @, takes as arguments a ranged port and a constant offset (e.g., base@1 as illustrated by line 4 of the Busmouse specification). To enable verification, the range of valid offsets must be specified within the entry point declaration (e.g., port@{0..3} as illustrated by line 1 of the Busmouse specification).
Registers. Registers define the granularity of interaction with a device; as such, the register size (in number of bits) must be explicitly specified. Registers are typically defined using two ports: one for reading and one for writing. Only one port needs to be provided when reading and writing share the same port, or when the register is read-only or write-only.
A register declaration may be associated with a mask to specify bit constraints. An element of this mask can either be ‘.’ to denote a relevant bit, ‘0’ or ‘1’ to denote a bit that is irrelevant when read but has a fixed value (0 or 1) when written, or ‘*’ to denote a bit that is irrelevant whether read or written; this is the convention used throughout Figure 1. As an example, consider the declaration of the write-only register index_reg in line 16 of the Busmouse specification.
```
register index_reg = write base@2, mask '1..00000' : bit[8];
```
This mask indicates that only bits 6 and 5 are relevant. Also, bit 7 is forced to 1 when written while bits 4 through 0 are forced to 0. Proper register masking is performed as part of the stubs generated by the Devil compiler.
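As an illustration (our sketch, not the compiler's actual output), the masking logic a generated write stub must perform for index_reg looks like this in C:

```c
#include <stdint.h>

/* Encode a write to index_reg under mask '1..00000': bits 6..5 carry the
   index value, bit 7 is forced to 1, bits 4..0 are forced to 0. */
static inline uint8_t encode_index_reg(uint8_t index /* 0..3 */) {
    uint8_t raw = (uint8_t)((index & 0x3u) << 5);  /* value into bits 6..5 */
    raw |= 0x80u;                                  /* force bit 7 to 1     */
    raw &= 0xE0u;                                  /* force bits 4..0 to 0 */
    return raw;                                    /* ready to emit on base + 2 */
}
```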
Device variables. In order to minimize the number of I/O operations required for communicating with a device, hardware designers often group several independent values into a single register. Accessing these values requires bit mask and shift operations, which are error-prone in a general programming language such as C. Devil abstracts values as device variables, which are defined as a sequence of register bits. Device variables are strongly typed in order to detect potential misuses of the device. Possible types are booleans, enumerated types, signed or unsigned integers of various sizes, and ranges or sets of integers. In line 17 of the Busmouse specification, bits 6 and 5 of the index_reg register make up a two-bit unsigned integer variable (i.e., a variable that can take a value from 0 to 3). The private attribute means that the index variable is not part of the functional interface of the Busmouse controller and cannot be directly accessed by the driver programmer.
```
private variable index = index_reg[6..5] : int(2);
```
Access pre-actions. Device functionalities are often extended by mapping multiple registers to a single physical address. Examples are index-based addressing modes and banks of registers. As a result, accessing such registers requires the setting of a specific context, which may involve several I/O operations. To capture this situation, Devil allows pre-actions to be attached to a register. Lines 19 and 20 of the Busmouse specification declare two read-only registers on the same port base@0, provided that the variable index is set either to 0 or 1 prior to the port access.
```
register x_low = read base@0, mask '****....',
    pre {index = 0} : bit[8];
register x_high = read base@0, mask '****....',
    pre {index = 1} : bit[8];
```
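A hedged sketch of the corresponding generated read stub: the pre-action writes the private index variable before the data port is read (helper names and the mock I/O layer are ours):

```c
#include <stdint.h>

static uint8_t ports[8];                              /* mock device ports */
static uint8_t mock_inb(uint16_t p)             { return ports[p]; }
static void    mock_outb(uint8_t v, uint16_t p) { ports[p] = v; }

/* Read stub for x_low: pre { index = 0 } is emitted first (index lives in
   bits 6..5 of index_reg at base+2, with bit 7 forced to 1), then the
   relevant low nibble is read from base+0. Call with base = 0 here. */
static uint8_t devil_get_x_low(uint16_t base) {
    mock_outb((uint8_t)(0x80 | (0u << 5)), base + 2);  /* pre-action */
    return (uint8_t)(mock_inb(base + 0) & 0x0F);       /* bits 3..0 relevant */
}
```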
Register concatenation. Device variables can be spread over several registers. As illustrated by line 25 of the Busmouse specification, constructing the dx variable requires the concatenation of the two registers x_high and x_low. The 8-bit variable dx is obtained by concatenating the four lower bits of register x_high with the four lower bits of register x_low.
```
variable dx = x_high[3..0] # x_low[3..0], ...
```
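The bit-level work this one line hides is easy to get wrong by hand; a C rendering of the concatenation, including the sign interpretation of signed int(8), could be:

```c
#include <stdint.h>

/* dx = x_high[3..0] # x_low[3..0], typed signed int(8): concatenate the
   two low nibbles, then let bit 7 act as the sign bit. */
static inline int8_t devil_dx(uint8_t x_high, uint8_t x_low) {
    uint8_t raw = (uint8_t)(((x_high & 0x0F) << 4) | (x_low & 0x0F));
    return (int8_t)raw;
}
```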
Enumerated types. Devil allows defining an enumerated type to abstract the concrete representation of bit values. The symbols <=, => and <=> define read, write and read-write constraints, respectively. Enumerated types are used to specify the valid values of a device variable. As an example, the config variable declaration shown in line 9 of the Busmouse specification declares the two modes (CONFIGURATION and DEFAULT_MODE) that can be written to the config variable.
```
variable config = cr[0] : {
    CONFIGURATION => '1', DEFAULT_MODE => '0'};
```
Caching and synchronization. Sharing one or more registers between variables induces cache and synchronization problems. When one variable needs to be written independently from the others, the Devil compiler has to determine a value to assign to the other variables. The choice of value depends on whether the access to that variable is idempotent. A Devil variable can be associated with a behavior qualifier that specifies the access semantics. No qualifier (the default case) means that the access is idempotent and thus can be redone without side effect; consequently, the variable value can be cached. Such a behavior is often associated with variables that serve as parameters.
A trigger behavior means that a write (or read) access to the variable induces a side effect on the controller. Since such an access cannot be repeated without re-triggering the side effect, multiple trigger variables cannot be defined on a register unless a neutral value is provided. Command variables usually have a trigger behavior. The following fragment from an NE2000 Ethernet controller presents examples of the trigger behavior.
```plaintext
register cmd = base@0 : bit[8];
variable st = cmd[1..0], write trigger except NEUTRAL;
variable txp = cmd[2], write trigger except NOP;
variable rd = cmd[5..3], write trigger except NODMA;
private variable page = cmd[7..6] : int(2);
```
In this example, the register `cmd` is split into four variables. While the `page` variable has an idempotent behavior, the variables `st`, `txp` and `rd` trigger an action when written, except for specific values (NEUTRAL, NOP and NODMA).
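For instance, a write stub for the idempotent page variable must fill its trigger siblings with their declared neutral values so that no command is fired. A sketch follows, with the neutral encodings as assumptions (the real values come from the NE2000 documentation and the full Devil specification):

```c
#include <stdint.h>

/* Assumed neutral encodings for the trigger siblings of 'page'. */
#define ST_NEUTRAL  0x1u  /* bits 1..0 of cmd */
#define TXP_NOP     0x0u  /* bit  2   of cmd */
#define RD_NODMA    0x4u  /* bits 5..3 of cmd */

/* Build the cmd byte for writing 'page' (bits 7..6) without side effects. */
static inline uint8_t encode_cmd_page(uint8_t page /* 0..3 */) {
    return (uint8_t)(((page & 0x3u) << 6) | (RD_NODMA << 3)
                     | (TXP_NOP << 2) | ST_NEUTRAL);
}
```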
Finally, a volatile behavior specifies that a read operation is not idempotent; two successive reads may deliver different values. When one needs a consistent value of several volatile variables, it is necessary to read them together in one or more read operations and cache the result for later use. To do so, Devil allows several variables to be grouped using a structure. The use of a structure is demonstrated by the dx, dy and buttons variables of the Busmouse specification (lines 24 to 27).
```plaintext
structure mouse_state = {
variable dx =
x_high[3..0] # x_low[3..0], volatile : ...;
variable dy =
y_high[3..0] # y_low[3..0], volatile : ...;
variable buttons = y_high[7..5], volatile : ...;
};
```
To access field variables `dy` and `buttons`, the programmer first has to read the `mouse_state` structure. Stubs generated for the structure perform the effective I/O operations, while stubs for the field variables access only the cache. It should be noted that since `dy` and `buttons` share the `y_high` register, `y_high` is read only once. Use of the stubs by the driver programmer is detailed in section 4.1.
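A sketch of the structure read stub and its cache (all names hypothetical; the mock per-register reads stand in for the real port I/O):

```c
#include <stdint.h>

struct mouse_state { int8_t dx, dy; uint8_t buttons; };

/* Mock per-register read stubs standing in for the real port I/O. */
static uint8_t read_x_low(void)  { return 0x05; }
static uint8_t read_x_high(void) { return 0x0F; }
static uint8_t read_y_low(void)  { return 0x02; }
static uint8_t read_y_high(void) { return 0xA1; }

/* Structure stub: performs the I/O once and fills the cache; note that
   y_high, shared by dy and buttons, is read a single time. */
static struct mouse_state devil_read_mouse_state(void) {
    uint8_t xl = read_x_low(),  xh = read_x_high();
    uint8_t yl = read_y_low(),  yh = read_y_high();   /* one read of y_high */
    struct mouse_state c;
    c.dx = (int8_t)(((xh & 0x0F) << 4) | (xl & 0x0F));
    c.dy = (int8_t)(((yh & 0x0F) << 4) | (yl & 0x0F));
    c.buttons = (uint8_t)((yh >> 5) & 0x7);           /* y_high[7..5] */
    return c;
}
```

Field stubs such as a hypothetical devil_get_dy would then simply return the cached value, with no further I/O.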
Cache and synchronization issues are usually only informally documented by hardware vendors. When programming controllers in a general programming language, cache and synchronization issues are typically solved in an ad-hoc manner that limits code re-use and driver evolution. In fact, the lack of a rigorous description of variable behaviors often leads to laborious testing until the expected functionality is obtained. Also, without specific language support, no verification of the correct usage of variables is possible; this opens opportunities for undetected errors.
Assessment. By clearly defining the semantics of variable behaviors, a Devil specification serves as a knowledge repository for the correct use of a device. In fact, the driver programmer is guided by the interface generated from the Devil specification. This simplifies driver development and improves re-use. Furthermore, verification is possible at two design stages: (i) on the Devil specification itself, so as to check the consistency of declarations, and (ii) on the correct usage of the interface procedures generated by the Devil compiler. These advantages are even more crucial when the device interface is awkward and contorted. The next section presents advanced Devil constructs that make it possible to handle such situations.
2.2 Advanced Devil
To maximize performance, most modern devices offer a simple, flat interface to registers. However, devices are rarely built from scratch and many of them are evolutions or supersets of previous controllers. For example, today’s PCs still rely on DMA, interrupt and graphics controllers that were designed more than twenty years ago.
Design constraints of older devices were guided not only by performance but also by technology and the size of the available I/O address space. Adding functionalities to a device while maintaining backward compatibility induces tricks for addressing additional registers. These issues result in contorted addressing modes, making the programming of such devices even more complex and error-prone. Devil has been specifically targeted towards supporting such devices. Let us now present some of the advanced Devil features using fragments from the Devil specifications of the 8237A DMA, the 8259A interrupt, the Crystal CS4236B, and the IDE controllers.
Register serialization. The 8237A DMA controller provides 16-bit counters through a single 8-bit port. As illustrated by the following example, constructing the counter x requires the concatenation of the two registers cnt_high and cnt_low. Since these registers are accessed through the same port, a reading order has to be specified (cnt_low then cnt_high). Finally, a pre-action attached to cnt_low (write any value to the flip_flop variable) resets an internal pointer to this register.
```
register cnt_low  = data, pre {flip_flop = *} : bit[8];
register cnt_high = data : bit[8];
variable x = cnt_high # cnt_low : int(16)
    serialized as {cnt_low; cnt_high};
```
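On the classic PC, the 8237A flip-flop clear and channel-0 count registers live at I/O ports 0x0C and 0x01; under that assumption, a sketch of the code this serialization implies is:

```c
#include <stdint.h>

static uint8_t pc_io[16];                             /* mock I/O space */
static uint8_t mock_inb(uint16_t p)             { return pc_io[p]; }
static void    mock_outb(uint8_t v, uint16_t p) { pc_io[p] = v; }

#define FLIP_FLOP 0x0C  /* clear byte-pointer flip-flop (8237A) */
#define DATA      0x01  /* 16-bit count, exposed one byte at a time */

/* 'serialized as {cnt_low; cnt_high}': reset the internal byte pointer,
   then read the low byte strictly before the high byte. */
static uint16_t devil_get_x(void) {
    mock_outb(0, FLIP_FLOP);            /* pre { flip_flop = * }: any value */
    uint8_t lo = mock_inb(DATA);        /* cnt_low  */
    uint8_t hi = mock_inb(DATA);        /* cnt_high */
    return (uint16_t)(((uint16_t)hi << 8) | lo);
}
```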
Control-flow based serialization. The 8259A interrupt controller possesses various execution modes that depend on the hardware configuration (processor type, cascaded/single controller) [12]. Initialization of the controller is performed by writing to configuration variables defined over four initialization registers. The initialization sequence varies with the actual values of the configuration variables. Additionally, three of the configuration registers (i.e., icw2, icw3 and icw4) are mapped to a single port, and their addressing is implicitly determined by previously written configuration values. The following example shows how such an addressing mode can be specified in Devil: configuration variables are grouped together within the init structure, and writing the variables of this structure into registers is ordered using tests on variable values.
```
register icw1 = write base@0, mask '...1....' : bit[8];
register icw2 = write base@1 : bit[8];
register icw3 = write base@1 : bit[8];
register icw4 = write base@1, mask '000.....' : bit[8];
structure init = {
    variable singl = icw1[1] : {SINGLE => '1', CASCADED => '0'};
    variable ic4 = icw1[0] : bool;
    variable microprocessor = icw4[0] : {X8086 => '1', MCS80_85 => '0'};
} serialized as {
    icw1;
    icw2;
    if (singl == SINGLE) icw3;
    if (ic4 == true) icw4;
};
```
Automata based addressing mode. Among the chips we have studied, the Crystal CS4236B sound chip is one of the most complex. This chip is compatible with the Windows Sound System standard [5], but possesses 18 additional registers. These registers are doubly indexed through the I23 index. Writing a specific device variable converts I23 from an extended address register into an extended data register. To convert I23 back to an address register, the control register must be written. In order to specify this automaton, Devil offers the notion of private variables that are not mapped to a specific register (xm in the following example). These variables can be used as memory cells and can be updated when writing a register or a device variable. The code below shows how the extended registers of the CS4236B can be specified using Devil.
```
private variable xm : bool;
register control = base@0, set {xm = false} : bit[8];
variable IA = control : int{0..31};
// Indexed registers I0 - I31
register I(i : int{0..31}) = base@1, pre {IA = i} : bit[8];
register I23 = I(23), mask '......0.';
variable ACF = I23[0] : bool;
structure XS = {
    variable XA = I23[2,7..4] : int(5);
    variable XRAE = I23[3], set {xm = XRAE},
        write trigger for true : bool;
};
// Extended registers X0-X17, X25
register X(j : int{0..17,25}) = base@1,
    pre {XS = X(j)} : bit[8];
```
Block transfer. Replacing a C loop over a variable read/write by a dedicated looping instruction (e.g., rep on the Pentium) is often more efficient. Variables with a block transfer usage have to be identified with a block keyword. For those variables, the Devil compiler generates two processor-specific block transfer stubs in addition to the single access stubs. The ide_data variable declaration from the IDE specification shown below illustrates the use of the block attribute.
```
variable ide_data =
ide_data, trigger, volatile, block : int(16);
```
Other features of Devil are not detailed here. These features include access post-actions, arrays, register constructors and conditional declarations depending on device modes. A complete description of Devil can be found in [17].
3 Property Verification
Devil has been designed to express domain-specific information about the functional interface of devices. Because this information is made explicit, Devil enables a variety of verifications that are beyond the scope of general programming languages. As a result, more errors can be caught earlier in the driver development process. In turn, debugging is easier and less time-consuming. Finally, the robustness of the driver is improved since the programmer has guarantees over the correctness of low-level interactions.
This section summarizes the properties that can be verified both when a Devil description is compiled and when the resulting interface implementation is used.
3.1 Verification of Devil specifications
Due to the declarative nature of the Devil language, it is possible to verify the following properties that ensure the consistency of a specification:
**Strong typing.** Devil abstractions (e.g., ports, registers, variables) are strongly typed: all uses of these abstractions can be matched against their definitions to check type correctness. Types describe usage constraints for registers and variables that are read- or write-only. Also, various size checks can be performed: the size of data accesses on ports, the size of registers, the size of variables derived from conversion functions, the size of bit masks, and the size of bit patterns that are associated with a symbolic name in enumerated types, port ranges, and bit ranges for register fragments.
**No omission.** All declared entities in a Devil specification must be used at least once. This constraint concerns port arguments in a device declaration, values of ranged port offsets, registers, and register bits (although some bits can be declared irrelevant using bit masks). Read elements of a type mapping must be exhaustive. Also, a type for reading (as well as possibly writing) must be used with a readable variable. The same holds for writing.
**No double definition.** All entities in a Devil specification must be declared at most once. This constraint concerns port arguments in a device declaration, ports, registers, types, symbolic names and bit patterns in enumerated types and variables.
**No overlapping definitions.** Port and register descriptions must not overlap. More precisely, each port must appear only once in the register definitions, except when registers are defined using disjoint pre-actions or masks. However, the same port may be used for reading from one register and writing to another. No bit of a single register can be used in the definition of two different variables.
3.2 Verification of interface usage
Verification of the correct usage of the generated interface can be both static and dynamic. In the latter case, run-time checks are optionally included in the code for debugging purposes.
When writing to a variable, a check can be performed to verify that the written value falls within the range specified by the variable type. If the value is constant, the check can generally be done at compile time. However, because the type system of C is not powerful enough to express all Devil types, not all such verifications can be implemented at compile time. In this situation, checks have to be implemented in debug mode using run-time checks. Finally, run-time checks can optionally be generated after variable reads. Such checks are useful for verifying that a device behaves according to its Devil specification.
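A sketch of what such an optional run-time check could look like in a generated write stub (DEVIL_DEBUG and the stub name are our assumptions):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define DEVIL_DEBUG 1   /* hypothetical compile-time switch */

/* Write stub for a variable declared int(2): only 0..3 are legal. */
static inline void devil_set_index_checked(uint8_t value) {
#if DEVIL_DEBUG
    if (value > 3) {
        fprintf(stderr, "devil: value %u out of range for int(2)\n", value);
        assert(0 && "devil: range check failed");
    }
#endif
    /* ... emit the masked I/O write for index_reg here ... */
    (void)value;
}
```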
Our experience in re-engineering drivers showed that dynamic checks allow the early detection of usage errors, preventing them from becoming insidious bugs. This is particularly valuable for kernel-mode drivers, which are tricky to step through with a debugger. Moreover, since the checks are automatically and systematically inserted and removed by the compiler, their use is easy and safe.
4 Comparison with Hand-Crafted Drivers
To assess our approach, we now compare the use of Devil and C. First, we analyse issues related to code development. Then, we report on a study based on mutation analysis to evaluate the robustness of Devil and C implementations. Finally, we discuss the performance of drivers that use the C library automatically generated from a Devil specification.
4.1 Driver development
To illustrate the benefits of Devil in terms of separation of concerns and readability, we compare a fragment of the original C implementation of the Logitech Busmouse driver (see Figure 2) with the use of the interface (see Figure 3) generated from the equivalent Devil specification.
In a traditional C driver, the programmer writes code that accesses the device with assembly-language-level operations (e.g., bit manipulations). For example, the C code needed to express the concatenation of the four lower bits of registers y_high and y_low is tedious. As shown in Figure 2-a, macros are often defined so as to factorize common expressions or associate names with commands. Nevertheless, it is rather difficult to understand the behavior of the device from the implementation; maintenance of this code is error-prone and difficult.
Using Devil, driver development is a two stage process: first the chip is specified in Devil, then code is written using the stubs generated from the specification. Describing the device as opposed to coding improves readability. For instance, the Devil description of the variable \( dy \) in the Busmouse specification (see line 26 of Figure 1) consists of a straightforward concatenation of two bit-fragments. The Devil specification is so close to a device description that it can be used for documentation purposes.
When writing the driver code, the programmer first has to include the Devil-generated stubs and specify configuration information. For instance, in Figure 3-a, Busmouse stubs are used in debug mode and in a single device configuration (#define DEVIL_NO_REF). Further communication with the device is encapsulated in stubs (see Figure 3-b). Therefore, the driver programmer only has to focus on operating the device using abstract values. Writing the hardware operating code becomes a very simple task, especially if the programmer can use an existing Devil specification.
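Figure 3 itself is not reproduced here, but driver code built on the generated stubs plausibly reduces to something like the following (names modeled on Figure 1; the stub is mocked so the sketch stands alone):

```c
#include <stdint.h>

struct mouse_state { int8_t dx, dy; uint8_t buttons; };

/* Stand-in for the Devil-generated structure read stub. */
static struct mouse_state devil_read_mouse_state(void) {
    struct mouse_state s = { 1, -2, 0 };
    return s;
}

static int16_t pos_x, pos_y;

/* The handler only manipulates abstract values; all bit twiddling and
   port I/O is hidden inside the generated interface. */
void busmouse_interrupt(void) {
    struct mouse_state s = devil_read_mouse_state();
    pos_x += s.dx;
    pos_y += s.dy;
    /* s.buttons would be forwarded to the input subsystem here. */
}
```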
4.2 Robustness
As discussed in Section 3, Devil exposes properties that can be automatically checked. This section evaluates the benefits of these checks in terms of software robustness.
Detecting bugs as early as possible is crucial during the development process. A study by DeMillo and Mathur found that simple errors (e.g., typographic errors, inattention errors) represent a significant fraction, though not the majority, of the errors in production programs. This study also revealed that such errors can remain hidden for a long time. Even though their study was concerned with the development of TeX, which differs from device drivers, these observations remain pertinent, and are even more important considering the permissive nature of a language such as C, especially when used to write low-level code.
In order to evaluate the impact of Devil on driver robustness, we have estimated the number of errors that can be detected automatically by the C and Devil compilers/checkers. The error-detection coverage is computed using a mutation analysis technique [1, 8].
For a program P, mutation analysis produces a set of alternate programs, each generated by modifying a single statement of P according to mutation rules. In our experiment, the mutation rules introduce errors in operators, identifiers and literal constants. Such errors are generated by inserting, replacing or removing a character from the targeted token. For example, the logical operator && can be replaced by the bit operator &, the number 121 can be replaced by 21, etc. Mutation rules are defined so as to ensure that the resulting mutant is syntactically correct and actually modifies the semantics of the program. Therefore, the compiler detects the error introduced by a mutation only if the mutant violates a property of the language (e.g., C or Devil).
In a C driver, we are only interested in testing the hardware operating code. Accordingly, we manually insert tags to mark the corresponding regions in the original C code, and only apply mutations to the tagged regions. In a Devil-based driver, mutations have to be applied both to the Devil specification of the device and to procedure calls to the generated interface (this C code is denoted by C_Devil in the rest of the paper).
Our experiments compare the error-detection coverage of C against the error-detection coverages of the Devil specification and C_Devil. It should be noted that our measurements reflect the worst case for Devil, for the following reasons. First, the mutation rules for C and Devil have been chosen so that C is always favored. Second, since a driver often uses a subset of a device, the Devil specification offers more mutation sites (possible errors) than the original C driver. Finally, Devil specifications should ideally come from the device manufacturer or be widely available as public domain libraries.
---
In our current experiments, the benefit of run-time checks in Devil-generated interfaces is not taken into account.
Measurement analysis. Our study focuses on three devices (the Logitech Busmouse, NE2000 Ethernet, and IDE controllers) and their corresponding Linux 2.2.12 drivers. Table 1 presents the results of the mutation analysis. Overall, the experiments show that the probability of undetected errors is 1.6 to 5.2 times higher in hand-crafted C drivers than in Devil-based drivers (Devil + C_Devil). When comparing C to C_Devil only (assuming that the specification is correct), the propensity for undetected errors is 3.2 to 5.9 times higher in C. Finally, it can also be observed that mutation errors in Devil specifications are nearly always detected.
The first column of Table 1 gives the number of possible mutation sites (s). The second column shows the number of mutants (i.e., errors) that can be injected for each site (m_s). For example, given an integer of two digits in base ten, 50 mutants can be generated (2 for removing a digit, 30 for inserting a new digit, and 18 for replacing a digit). The third column shows, for each mutation site, the number of mutants not detected by the compiler/checker (um_s). To enable the comparison between C, Devil and C_Devil, we are interested in the number of mutation sites that have undetected mutants (s_um). To compute this value, we weight the number of undetected mutants per site by the number of mutation sites (s_um = um_s / m_s × s). For example, consider the Logitech Busmouse C driver. It has 62 mutation sites. For each site, 36.6 mutants are generated on average and 26.8 are not detected by the compiler. This gives 45.3 sites with undetected mutants.
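In formula form, with the Busmouse numbers plugged in:

```latex
\[
  s_{um} \;=\; \frac{um_s}{m_s} \cdot s
  \qquad\Longrightarrow\qquad
  \frac{26.8}{36.6} \times 62 \;\approx\; 45.4
\]
```

which matches the 45.3 reported in Table 1 up to rounding of the per-site averages.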
<table>
<thead>
<tr><th>Device</th><th>Language (lines)</th><th>Number of mutation sites</th><th>Mutants per site</th><th>Undetected mutants per site</th><th>Mutation sites with undetected mutants</th><th>Ratio to C</th></tr>
</thead>
<tbody>
<tr><td>Logitech Busmouse</td><td>C (50)</td><td>62</td><td>36.6</td><td>26.8</td><td>45.3</td><td>-</td></tr>
<tr><td></td><td>Devil (21)</td><td>21</td><td>15.9</td><td>0.2</td><td>1.0</td><td>-</td></tr>
<tr><td></td><td>C_Devil (18)</td><td>21</td><td>13.8</td><td>9.0</td><td>7.7</td><td>5.9</td></tr>
<tr><td></td><td>Devil + C_Devil</td><td>102</td><td>10.6</td><td>1.2</td><td>8.3</td><td>5.2</td></tr>
<tr><td>IDE (Intel PIIX4)</td><td>C (64)</td><td>64</td><td>29.0</td><td>18.3</td><td>61.8</td><td>-</td></tr>
<tr><td></td><td>Devil (127)</td><td>127</td><td>17.1</td><td>1.6</td><td>26.6</td><td>-</td></tr>
<tr><td></td><td>C_Devil (81)</td><td>81</td><td>22.0</td><td>7.4</td><td>13.3</td><td>4.6</td></tr>
<tr><td></td><td>Devil + C_Devil</td><td>319</td><td>17.9</td><td>2.9</td><td>39.9</td><td>1.6</td></tr>
<tr><td>Ethernet (NE2000)</td><td>C (204)</td><td>204</td><td>14.7</td><td>12.6</td><td>212.4</td><td>-</td></tr>
<tr><td></td><td>Devil (144)</td><td>144</td><td>15.0</td><td>1.1</td><td>33.7</td><td>-</td></tr>
<tr><td></td><td>C_Devil (134)</td><td>134</td><td>35.7</td><td>12.5</td><td>66.4</td><td>3.2</td></tr>
<tr><td></td><td>Devil + C_Devil</td><td>714</td><td>27.2</td><td>4.7</td><td>99.8</td><td>2.1</td></tr>
</tbody>
</table>
Table 1: Language Error-Detection Coverage Analysis
4.3 Performance
It is well-recognized that the performance of drivers is critical for the overall system performance. Furthermore, as demonstrated by Thekkath and Levy for high-performance RPCs [18], the performance of the hardware operating code has a significant impact on the overall driver performance. While Devil can improve readability and robustness of driver hardware operating code, its usefulness depends on the efficiency of the generated code: using Devil must not induce significant execution overhead.
In order to evaluate the benefit and impact of Devil on driver development, we are re-engineering various Linux drivers and testing them on a bi-processor PC. Among the drivers and devices in a Unix system, we chose to implement first the IDE and the accelerated X11 drivers, for two reasons: (i) they are representative of performance-intensive drivers, and (ii) they illustrate totally different device access behaviors.
In the rest of this section, we first identify
*The PC is a DELL Precision 210 with the following configuration: two Pentium II 450 MHz, Intel PIIX4 PCI chipset, Maxtor model 91000D8 UDMA2 19.5Gb disk with 512Kb cache, 3Dlabs Permedia2 graphic controller.*
the possible penalties induced by Devil, and then we compare the performance of the IDE and accelerated X11 Devil-based drivers with the original ones.
**Micro-analysis** Interface procedures generated by the Devil compiler contain I/O as well as bit-shift and bit-mask instructions. These procedures are optimized by the Devil compiler and implemented as pre-processor macros or inlined functions. Therefore, there is no execution overhead for a single Devil interface procedure as compared to hand-crafted C instructions.
In one situation, we observed that Devil could induce an execution penalty. Accessing independent device variables (i.e., variables not grouped in a structure) defined over a single register requires multiple Devil interface calls. Each additional call induces additional I/O, as compared to a hand-crafted driver. Nevertheless, as we found in our re-engineering of the IDE and Permedia2 drivers, such variables are often parameters and rarely affect the performance of the critical path.
IDE driver Table 2 compares the performance of a Devil-based IDE driver with that of the original C driver. IDE throughput measurements were obtained using the standard Linux `hdparm` utility. We wrote two Devil specifications for this driver: a specification of the IDE controller and a specification of the Intel PIIX4 PCI busmaster IDE.
We have run the IDE driver in both UltraDMA-2 and several PIO modes, varying the size of I/O (16 or 32 bits) and the number of sectors transferred per interrupt. In DMA mode, Devil induces 6 additional I/O operations to prepare the command. Because of the long duration of the DMA transfer, there is no impact on the available throughput. In the PIO modes, there are 3 additional I/O operations to prepare the command, plus 2 for each interrupt (#s denotes the total number of sectors accessed). When using a C loop over a single variable read, we measured a 10% throughput penalty. When using block transfer stubs that use a `rep` instruction, we did not observe an impact on the available throughput.
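The two transfer styles compared here can be sketched as follows; the inline assembly shows the kind of rep-based stub meant (x86, GCC syntax, requires I/O privileges; the mocked port read keeps the C loop compilable anywhere):

```c
#include <stddef.h>
#include <stdint.h>

static inline uint16_t inw_mock(uint16_t port) { (void)port; return 0; }

/* C loop: one 16-bit port read per word transferred. */
void read_words_loop(uint16_t port, uint16_t *buf, size_t words) {
    for (size_t i = 0; i < words; i++)
        buf[i] = inw_mock(port);
}

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
/* Block-transfer stub: a single 'rep insw' moves the whole block. */
void read_words_block(uint16_t port, uint16_t *buf, size_t words) {
    __asm__ volatile("rep insw"
                     : "+D"(buf), "+c"(words)  /* DI = buffer, CX = count */
                     : "d"(port)               /* DX = port */
                     : "memory");
}
#endif
```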
Permedia2 X11 driver Tables 3 and 4 show the performance of the Devil-based X11 driver for the 3Dlabs Permedia2 graphics controller. Throughput measurements were obtained using the xbench utility. We have modified the 3Dlabs X11 server, which is based on an XFree86-3.3.6 implementation. Although the Permedia2 chip provides acceleration for both 2D and 3D, the X11 server does not support 3D operations. Additionally, to minimize device-dependent code, many 2D primitives are implemented in software in XFree86. In fact, hardware acceleration is only used for implementing the fill rectangle and screen area copy primitives.
Unlike many I/O devices, the Permedia2 controller maps registers into the memory address space. In fact, processor accesses are decoded by the controller and stored in a FIFO. Before accessing the chip, the driver must wait for free entries in the FIFO.
<table>
<thead>
<tr><th>Transfer mode</th><th>Sectors per interrupt</th><th>I/O size (bits)</th><th>Standard I/O operations</th><th>Standard throughput (Mb/s)</th><th>Devil I/O operations</th><th>Devil throughput (Mb/s)</th><th>Devil/Stand. throughput ratio</th></tr>
</thead>
<tbody>
<tr><td>DMA</td><td>-</td><td>-</td><td>14</td><td>14.25</td><td>20</td><td>14.25</td><td>100 %</td></tr>
<tr><td>PIO</td><td>16</td><td>32</td><td>7 + #s(1 + 128)</td><td>8.17</td><td>10 + #s(1 + 128)</td><td>7.36</td><td>90 %</td></tr>
<tr><td></td><td>16</td><td>32</td><td>7 + #s(1 + 256)</td><td>8.89</td><td>10 + #s(1 + 256)</td><td>7.28</td><td>89 %</td></tr>
<tr><td></td><td>16</td><td>32</td><td>7 + #s(1 + 256)</td><td>4.42</td><td>10 + #s(1 + 256)</td><td>3.91</td><td>88 %</td></tr>
<tr><td></td><td>8</td><td>32</td><td>7 + #s(1 + 128)</td><td>6.93</td><td>10 + #s(3 + 128)</td><td>6.36</td><td>91 %</td></tr>
<tr><td></td><td>32</td><td>16</td><td>7 + #s(1 + 256)</td><td>4.06</td><td>10 + #s(3 + 256)</td><td>3.63</td><td>89 %</td></tr>
</tbody>
</table>
Table 2: IDE Linux driver comparative performance results (using C loops)
<table>
<thead>
<tr><th>Display mode (bits/pixel)</th><th>Rectangle size (pixels)</th><th>Standard I/O operations</th><th>Devil I/O operations</th><th>Standard throughput</th><th>Devil throughput</th><th>Devil/Stand. throughput ratio</th></tr>
</thead>
<tbody>
<tr><td>8</td><td>10x10</td><td>3(#w) + 15</td><td>1(#w) + 17</td><td>95.35</td><td></td><td>99 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>384.72</td><td>384.89</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>376.22</td><td>376.22</td><td>100 %</td></tr>
<tr><td>16</td><td>2x2</td><td>3(#w) + 15</td><td>1(#w) + 17</td><td>95.35</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>336.70</td><td>332.99</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>210.32</td><td>210.34</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>224.21</td><td>224.22</td><td>100 %</td></tr>
<tr><td>24</td><td>2x2</td><td>2(#w) + 10</td><td>2(#w) + 10</td><td>94.94</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>235.19</td><td>234.74</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>363.94</td><td>363.95</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>244.21</td><td>244.24</td><td>100 %</td></tr>
<tr><td>32</td><td>2x2</td><td>3(#w) + 15</td><td>1(#w) + 17</td><td>95.35</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>251.22</td><td>250.54</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>304.66</td><td>304.66</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>899.68</td><td>899.69</td><td>100 %</td></tr>
</tbody>
</table>
Table 3: Comparative Performance of Permedia2 Xfree86 Driver: Rectangle Test
<table>
<thead>
<tr><th>Display mode (bits/pixel)</th><th>Copy size (pixels)</th><th>Standard I/O operations</th><th>Devil I/O operations</th><th>Standard throughput</th><th>Devil throughput</th><th>Devil/Stand. throughput ratio</th></tr>
</thead>
<tbody>
<tr><td>8</td><td>2x2</td><td>3(#w) + 15</td><td>1(#w) + 17</td><td>94.94</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>123.584</td><td>123.000</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>106.62</td><td>106.38</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>224.21</td><td>224.24</td><td>100 %</td></tr>
<tr><td>16</td><td>2x2</td><td>3(#w) + 15</td><td>1(#w) + 17</td><td>95.35</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>80.994</td><td>80.964</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>30.02</td><td>30.02</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>4.38</td><td>4.38</td><td>100 %</td></tr>
<tr><td>24</td><td>2x2</td><td>2(#w) + 9</td><td>1(#w) + 9</td><td>94.94</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>77.443</td><td>77.055</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>17.16</td><td>17.16</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>2.38</td><td>2.38</td><td>100 %</td></tr>
<tr><td>32</td><td>2x2</td><td>2(#w) + 9</td><td>1(#w) + 9</td><td>94.94</td><td></td><td>99 %</td></tr>
<tr><td></td><td>10x10</td><td></td><td></td><td>69.762</td><td>69.804</td><td>100 %</td></tr>
<tr><td></td><td>100x100</td><td></td><td></td><td>17.01</td><td>17.01</td><td>100 %</td></tr>
<tr><td></td><td>400x400</td><td></td><td></td><td>1.11</td><td>1.11</td><td>100 %</td></tr>
</tbody>
</table>
Table 4: Comparative Performance of Permedia2 Xfree86 Driver: Screen Copy Test
This wait loop produces one I/O operation per iteration. In Tables 3 and 4, #w denotes the number of iterations per wait loop. In the driver we modified, 2 or 3 wait loops are performed per primitive call.
The execution time of a drawing command on the Permedia2 controller is proportional to the number of drawn pixels and their depth. Therefore, the overhead induced by Devil is most noticeable for the shortest commands. The worst case is reached for 2x2 pixel commands in 8 or 16 bit mode, where Devil induces a performance penalty of up to 6%. For primitive calls involving more than 100 pixels (which are the most common in practice), 99% to 100% of the performance of the original server is obtained (always 100% in 24 bit mode).
5 Related Work
Our work on device drivers started with a study of graphic display adaptors for a X11 server. We developed a language, called GAL, aimed at specifying device drivers in this context [19]. Although successful as a proof of concept, GAL covered a very restricted domain.
The goal of the UDI project is to make device drivers source-portable across OS platforms. To do so, they have normalized the API between the OS and the lower part of device drivers [14]. Besides showing the timeliness of our work, UDI focuses only on the high-level part of drivers and their interaction with the OS.
Windows-specific driver generators like BlueWater Systems' WinDK [4] and NuMega's DriverWorks [6] provide a graphical interface for specifying the main features of a driver. They produce a driver skeleton that consists of invocations of coarse-grained library functions. To our knowledge, no existing driver generator covers the communication with the device.
Languages for specifying digital circuits and systems have existed for many years. The VHDL standard [11], widely used in this domain, is one of the most expressive. It addresses several aspects of chip design such as documentation, simulation and synthesis. VHDL provides both high-level and low-level abstractions: arrays and loops are supported, as well as bit-vector literals and bit extraction. However, all VHDL abstractions focus on the inner workings of circuits, not their high-level programming interface. As a consequence, chip interfaces are not explicitly denoted, and VHDL compilers perform limited consistency checks. Interestingly, VHDL allows attaching arbitrary strings to variables. Using them to add interface-specific information is possible, but would require a normalized syntax and compiler support, which in some way amounts to embedding Devil concepts in VHDL.
The New Jersey Machine-Code Toolkit [15] helps programmers write applications that process machine code at an assembly-language level of abstraction. Guided by an instruction-set specification, the toolkit generates the code for reading or writing binary representations of instructions. Some simple verifications are also done at the specification level.
6 Conclusion and Future Work
This paper has presented a new approach to developing hardware operating code that is based on an IDL named Devil. This IDL enables hardware communication to be described using high-level, domain-specific constructs instead of being written with assembly-language-like operations. Raising the implementation level of this layer of a device driver dramatically reduces the risk of errors. Devil has proved expressive enough to specify a wide variety of devices such as DMA, interrupt, Ethernet, IDE disk, sound, mouse and video controllers.
Because Devil significantly raises the level of abstraction of communication with the hardware, Devil specifications are more readable, maintainable and re-usable than equivalent C code.
We have developed a compiler that checks the consistency of a Devil specification and automatically generates low-level code that is mostly comparable to hand-crafted code. We have assessed our approach by conducting experiments aimed at comparing hardware operating code in C or Devil for robustness and performance. We have demonstrated that our approach enables hardware operating code to be more robust than C, with mostly comparable performance.
Our future work aims to improve the performance of the output of our Devil compiler. Specifically, we want to enhance performance by factorizing and scheduling device communications and by better exploiting special-purpose assembly-level instructions. The key advantage of introducing optimizations at the compiler level is that these advanced techniques are transparently available to any Devil programmer. As a result, our work reduces the need to have a highly experienced programmer to write hardware operating code since part of this expertise is captured by the compiler.
We are currently building a public domain library of Devil specifications for common devices such as those found in PCs. Our purpose is to set up a WWW repository that would help disseminate expertise about hardware and facilitate the development of device drivers.
Acknowledgment.
We thank Julia Lawall from DIKU and the other members of the Compose group for helpful comments on earlier versions of this paper. We also thank Timothy Roscoe and the anonymous reviewers for their valuable inputs.
This work has been partly supported by France Telecom under the CTI contract 991B726, the French Ministry of Research and Technology under the Phenix contract 99S0362, and the French Ministry of Education and Research.
Availability
The Devil compiler, Devil specifications and Devil-based drivers mentioned in the paper are available at the following web page http://www.irisa.fr/compose/devil.
References
Introduction to Axiomatic Semantics
Lecture 10-11
ECS 240
Review
• **Operational semantics**
- relatively simple
- many flavors
- adequate guide for an implementation of the language
- not compositional
• **Denotational semantics (didn’t cover)**
- mathematical
- canonical
- compositional
• **Operational ⇔ denotational**
• **We would also like a semantics that is appropriate for arguing program correctness**
Axiomatic Semantics
• An axiomatic semantics consists of
- A language for stating assertions about programs
- Rules for establishing the truth of assertions
• Some typical kinds of assertions:
- This program terminates
- If this program terminates, the variables x and y have the same value throughout the execution of the program
- The array accesses are within the array bounds
• Some typical languages of assertions
- First-order logic
- Other logics (temporal, linear)
History
• Program verification is almost as old as programming (e.g., “Checking a Large Routine”, Turing 1949)
• In the late 60s, Floyd had rules for flow-charts and Hoare for structured languages
• Since then, there have been axiomatic semantics for substantial languages, and many applications
Hoare Said
• “Thus the practice of proving programs would seem to lead to solution of three of the most pressing problems in software and programming, namely, reliability, documentation, and compatibility. However, program proving, certainly at present, will be difficult even for programmers of high caliber; and may be applicable only to quite simple program designs.”
C.A.R Hoare,
“An Axiomatic Basis for Computer Programming”,
1969
Dijkstra Said
• “Program testing can be used to show the presence of bugs, but never to show their absence!”
Hoare Also Said
• “It has been found a serious problem to define these languages [ALGOL, FORTRAN, COBOL] with sufficient rigor to ensure compatibility among all implementations. ... one way to achieve this would be to insist that all implementations of the language shall satisfy the axioms and rules of inference which underlie proofs of properties of programs expressed in the language. In effect, this is equivalent to accepting the axioms and rules of inference as the ultimately definitive specification of the meaning of the language.”
Other Applications of Axiomatic Semantics
• The project of defining and proving everything formally has not succeeded (at least not yet)
• Proving has not replaced testing and debugging (and praying)
• Applications of axiomatic semantics:
- Proving the correctness of algorithms (or finding bugs)
- Proving the correctness of hardware descriptions (or finding bugs)
- “extended static checking” (e.g., checking array bounds)
- Documentation of programs and interfaces
Assertions for IMP
• The assertions we make about IMP programs are of the form:
\{ A \} c \{ B \}
with the meaning that:
- If A holds in state \( \sigma \) and \( \langle c, \sigma \rangle \downarrow \sigma' \)
- then B holds in \( \sigma' \)
• A is called the precondition and B is called the postcondition
• For example:
\{ y \leq x \} z := x; z := z +1 \{ y < z \}
is a valid assertion
• These are called Hoare triples or Hoare assertions
Assertions for IMP (II)
• \{A\} c \{B\} is a partial correctness assertion. It does not imply termination
• \[A\] c \[B\] is a total correctness assertion meaning that
If \(A\) holds in state \(\sigma\)
then there exists \(\sigma'\) such that \(\langle c, \sigma \rangle \downarrow \sigma'\)
and \(B\) holds in state \(\sigma'\)
• Now let’s be more formal
- Formalize the language of assertions, \(A\) and \(B\)
- Say when an assertion holds in a state
- Give rules for deriving Hoare triples
The Assertion Language
- We use first-order predicate logic on top of IMP expressions
\[ A ::= \text{true} \mid \text{false} \mid e_1 = e_2 \mid e_1 \geq e_2 \]
\[ \mid A_1 \land A_2 \mid A_1 \lor A_2 \mid A_1 \implies A_2 \mid \forall x.A \mid \exists x.A \]
- Note that we are somewhat sloppy and mix the logical variables and the program variables
- Implicitly, for us all IMP variables range over integers
- All IMP boolean expressions are also assertions
Semantics of Assertions
• We introduced a language of assertions, we need to assign meanings to assertions.
• Notation $\sigma \models A$ to say that an assertion holds in a given state.
- This is well-defined when $\sigma$ is defined on all variables occurring in $A$.
• The $\models$ judgment is defined inductively on the structure of assertions.
• It relies on the denotational semantics of arithmetic expressions from IMP.
Semantics of Assertions
• Formal definition:
\( \sigma \vDash \text{true} \quad \text{always} \)
\( \sigma \vDash e_1 = e_2 \quad \text{iff} \ [e_1] \sigma = [e_2] \sigma \)
\( \sigma \vDash e_1 \geq e_2 \quad \text{iff} \ [e_1] \sigma \geq [e_2] \sigma \)
\( \sigma \vDash A_1 \land A_2 \quad \text{iff} \ \sigma \vDash A_1 \ \text{and} \ \sigma \vDash A_2 \)
\( \sigma \vDash A_1 \lor A_2 \quad \text{iff} \ \sigma \vDash A_1 \ \text{or} \ \sigma \vDash A_2 \)
\( \sigma \vDash A_1 \Rightarrow A_2 \quad \text{iff} \ \sigma \vDash A_1 \text{ implies } \sigma \vDash A_2 \)
\( \sigma \vDash \forall x. A \quad \text{iff} \ \forall n \in \mathbb{Z}. \sigma[x:=n] \vDash A \)
\( \sigma \vDash \exists x. A \quad \text{iff} \ \exists n \in \mathbb{Z}. \sigma[x:=n] \vDash A \)
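• As a quick illustration (an added example, not from the original slides), take \( \sigma \) with \( \sigma(x) = 3 \) and \( \sigma(y) = 5 \): \( \sigma \vDash x + 2 = y \) since \( [x+2]\sigma = 5 = [y]\sigma \); \( \sigma \vDash \exists z.\ x + z = y \) since \( \sigma[z := 2] \vDash x + z = y \); but \( \sigma \not\vDash \forall z.\ x + z \geq y \) since \( \sigma[z := 0] \not\vDash x + z \geq y \)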
Semantics of Assertions
- Now we can define formally the meaning of a partial correctness assertion
\[ \models \{ A \}\ c\ \{ B \} \text{ iff } \forall \sigma, \sigma' \in \Sigma.\ (\sigma \models A \land \langle c,\sigma \rangle \downarrow \sigma') \Rightarrow \sigma' \models B \]
- ... and the meaning of a total correctness assertion
\[ \models [A]\ c\ [B] \text{ iff } \forall \sigma, \sigma' \in \Sigma.\ (\sigma \models A \land \langle c,\sigma \rangle \downarrow \sigma') \Rightarrow \sigma' \models B \]
\[ \qquad \land\ \forall \sigma \in \Sigma.\ \sigma \models A \Rightarrow \exists \sigma' \in \Sigma.\ \langle c,\sigma \rangle \downarrow \sigma' \]
Deriving Assertions
• Now we have the formal mechanism to decide when \( \models \{A\}\ c\ \{B\} \)
- But it is not satisfactory
- Because \( \models \{A\}\ c\ \{B\} \) is defined in terms of the operational semantics, we practically have to run the program to verify an assertion
- And also it is impossible to effectively verify the truth of a \(\forall x. A\) assertion (by using the definition of validity)
• So we define a symbolic technique for deriving valid assertions from other valid assertions
Derivation Rules for Hoare Triples
• We write \( \vdash \{A\}\ c\ \{B\} \) when we can derive the triple using derivation rules
• One derivation rule for each command in the language
• Plus, the rule of consequence
\[
\frac{\vdash A' \Rightarrow A \quad \vdash \{A\}\ c\ \{B\} \quad \vdash B \Rightarrow B'}{\vdash \{A'\}\ c\ \{B'\}}
\]
Derivation Rules for Hoare Logic
- One rule for each syntactic construct:
\[
\vdash \{A\}\ \text{skip}\ \{A\}
\qquad
\vdash \{[e/x]A\}\ x := e\ \{A\}
\]
\[
\frac{\vdash \{A\}\ c_1\ \{B\} \quad \vdash \{B\}\ c_2\ \{C\}}{\vdash \{A\}\ c_1; c_2\ \{C\}}
\]
\[
\frac{\vdash \{A \land b\}\ c_1\ \{B\} \quad \vdash \{A \land \neg b\}\ c_2\ \{B\}}{\vdash \{A\}\ \text{if } b \text{ then } c_1 \text{ else } c_2\ \{B\}}
\]
\[
\frac{\vdash \{A \land b\}\ c\ \{A\}}{\vdash \{A\}\ \text{while } b \text{ do } c\ \{A \land \neg b\}}
\]
Hoare Rules
- For some constructs multiple rules are possible:
\[ \vdash \{A\} x := e \{\exists x_0.[x_0/x]A \land x = [x_0/x]e\} \]
(This was the “forward” axiom for assignment)
\[ \vdash A \land b \Rightarrow C \quad \vdash \{C\} c \{A\} \quad \vdash A \land \neg b \Rightarrow B \]
\[ \vdash \{A\} \text{ while } b \text{ do c } \{B\} \]
- Exercise: these rules can be derived from the previous ones using the consequence rules
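- For instance (a sketch added here, not on the original slide), the alternative while rule follows from the basic one: from \( \vdash A \land b \Rightarrow C \) and \( \vdash \{C\}\ c\ \{A\} \), the rule of consequence gives \( \vdash \{A \land b\}\ c\ \{A\} \); the basic while rule then yields \( \vdash \{A\}\ \text{while } b \text{ do } c\ \{A \land \neg b\} \); and since \( \vdash A \land \neg b \Rightarrow B \), one more use of consequence gives \( \vdash \{A\}\ \text{while } b \text{ do } c\ \{B\} \)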
Example: Assignment
- Assume that \( x \) does not appear in \( e \)
Prove \( \{\text{true}\}\ x := e\ \{x = e\} \)
- First the assignment rule
\[
\begin{array}{c}
\vdash \{e = e\}\ x := e\ \{x = e\} \\
\text{because} \ [e/x](x = e) \equiv e = [e/x]e \equiv e = e
\end{array}
\]
- Then with the consequence rule:
\[
\begin{array}{c}
\vdash \text{true} \Rightarrow e = e \\
\vdash \{e = e\}\ x := e\ \{x = e\} \\
\vdash \{\text{true}\}\ x := e\ \{x = e\}
\end{array}
\]
The Assignment Axiom (Cont.)
• Hoare said: “Assignment is undoubtedly the most characteristic feature of programming a digital computer, and one that most clearly distinguishes it from other branches of mathematics. It is surprising therefore that the axiom governing our reasoning about assignment is quite as simple as any to be found in elementary logic.”
• How about aliasing?
- If x and y are aliased then
\[
\{ \text{true} \}\ x := 5\ \{ x + y = 10 \}
\]
is true
Example: Conditional
\[ D_1 :: \vdash \{ \text{true} \land y \leq 0 \} \ x := 1 \ \{ x > 0 \} \]
\[ D_2 :: \vdash \{ \text{true} \land y > 0 \} \ x := y \ \{ x > 0 \} \]
\[ \vdash \{ \text{true} \} \text{ if } y \leq 0 \text{ then } x := 1 \text{ else } x := y \ \{ x > 0 \} \]
- \( D_1 \) is obtained by consequence and assignment
\[ \vdash \{ 1 > 0 \} \ x := 1 \ \{ x > 0 \} \]
\[ \vdash \text{true} \land y \leq 0 \Rightarrow 1 > 0 \]
\[ \vdash \{ \text{true} \land y \leq 0 \} \ x := 1 \ \{ x > 0 \} \]
- \( D_2 \) is also obtained by consequence and assignment
\[ \vdash \{ y > 0 \} \ x := y \ \{ x > 0 \} \]
\[ \vdash \text{true} \land y > 0 \Rightarrow y > 0 \]
\[ \vdash \{ \text{true} \land y > 0 \} \ x := y \ \{ x > 0 \} \]
Example: Loop
• We want to derive that
\[ \vdash \{ x \leq 0 \} \text{ while } x \leq 5 \text{ do } x := x + 1 \{ x = 6 \} \]
• Use the rule for while with invariant \( x \leq 6 \)
\[
\begin{align*}
\vdash x \leq 6 \land x \leq 5 & \Rightarrow x + 1 \leq 6 \\
\vdash \{ x + 1 \leq 6 \} x := x + 1 \{ x \leq 6 \}
\end{align*}
\]
\[ \vdash \{ x \leq 6 \land x \leq 5 \} x := x + 1 \{ x \leq 6 \} \]
\[ \vdash \{ x \leq 6 \} \text{ while } x \leq 5 \text{ do } x := x + 1 \{ x \leq 6 \land x > 5 \} \]
• Then finish-off with consequence
\[
\begin{align*}
\vdash x \leq 0 & \Rightarrow x \leq 6 \\
\vdash x \leq 6 \land x > 5 & \Rightarrow x = 6 \\
\vdash \{ x \leq 6 \} \text{ while } ... \{ x \leq 6 \land x > 5 \}
\end{align*}
\]
\[ \vdash \{ x \leq 0 \} \text{ while } ... \{ x = 6 \} \]
Another Example
• Verify that
\[ \vdash \{A\} \text{ while true do } c \{ B\} \]
holds for any \(A\), \(B\) and \(c\)
• We must construct a derivation tree
\[
\frac{\vdash \{\text{true} \land \text{true}\}\ c\ \{\text{true}\}}{\vdash \{\text{true}\}\ \text{while true do } c\ \{\text{true} \land \text{false}\}}
\]
\[
\frac{\vdash A \Rightarrow \text{true} \quad \vdash \{\text{true}\}\ \text{while true do } c\ \{\text{true} \land \text{false}\} \quad \vdash \text{true} \land \text{false} \Rightarrow B}{\vdash \{A\}\ \text{while true do } c\ \{B\}}
\]
• We need an additional lemma:
\[ \forall A. \forall c. \vdash \{A\} \ c \ \{\text{true}\} \]
- How do you prove this one?
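- (A sketch of one possible argument, added here:) first show \( \forall c.\ \vdash \{\text{true}\}\ c\ \{\text{true}\} \) by induction on the structure of \( c \), using the invariant true for while (note \( [e/x]\text{true} \equiv \text{true} \) for assignments); then, since \( \vdash A \Rightarrow \text{true} \) for any \( A \), the rule of consequence gives \( \vdash \{A\}\ c\ \{\text{true}\} \)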
Using Hoare Rules. Notes
• Hoare rules are mostly syntax directed
• There are three wrinkles:
- When to apply the rule of consequence?
- What invariant to use for while?
- How do you prove the implications involved in consequence?
• The last one can rely on theorem proving
- This turns out to be doable
- Loop invariants turn out to be the hardest problem
Where Do We Stand?
- We have a language for asserting properties of programs
- We know when such an assertion is true
- We also have a symbolic method for deriving assertions
Soundness of Axiomatic Semantics
• Formal statement
\[ \text{If } \vdash \{ A \} c \{ B \} \text{ then } \models \{ A \} c \{ B \} \]
or, equivalently
\[ \text{For all } \sigma, \text{ if } \sigma \models A \text{ and } D :: <c, \sigma> \Downarrow \sigma' \text{ and } H :: \vdash \{ A \} c \{ B \} \text{ then } \sigma' \models B \]
• How can we prove this?
- By induction on the structure of \( c \)?
• No, problems with while and rule of consequence
- By induction on the structure of \( D \)?
• No, problems with rule of consequence
- By induction on the structure of \( H \)?
• No, problems with while
- By simultaneous induction on the structure of \( D \) and \( H \)
Simultaneous Induction
- Consider two structures D and H
- Assume that $x < y$ iff $x$ is a substructure of $y$
- Define the ordering
\[(d, h) < (d', h') \text{ iff } d < d' \text{ or } d = d' \text{ and } h < h'\]
- Called lexicographic ordering
- Just like the ordering in a dictionary
- This is a well founded order and leads to simultaneous induction
- If $d < d'$ then $h$ can actually be larger than $h'$!
- It can even be unrelated to $h'$!
Soundness of the Consequence Rule
- **Case:** last rule used in $H :: \vdash \{ A \}\ c\ \{ B \}$ is the consequence rule:
\[
\frac{\vdash A \Rightarrow A' \quad H_1 :: \vdash \{ A' \}\ c\ \{ B' \} \quad \vdash B' \Rightarrow B}{\vdash \{ A \}\ c\ \{ B \}}
\]
- From soundness of the first-order logic derivations we have $\sigma \models A \Rightarrow A'$, hence $\sigma \models A'$
- From IH with $H_1$ and $D$ we get that $\sigma' \models B'$
- From soundness of the first-order logic derivations we have that $\sigma' \models B' \Rightarrow B$, hence $\sigma' \models B$, q.e.d.
Soundness of the Assignment Axiom
• Case: the last rule used in $H :: \vdash \{ A \} c \{ B \}$ is the assignment rule
$\vdash \{[e/x]B\} x := e \{B\}$
• The last rule used in $D :: \langle x := e, \sigma \rangle \downarrow \sigma'$ must be the assignment rule of the operational semantics:
\[
\frac{D_1 :: \langle e, \sigma \rangle \downarrow n}{\langle x := e, \sigma \rangle \downarrow \sigma[x := n]}
\]
• We must prove the substitution lemma:
If $\sigma \models [e/x]B$ and $\langle e, \sigma \rangle \downarrow n$ then $\sigma[x := n] \models B$
Soundness of the While Rule
• Case: last rule used in \( H :: \vdash \{ A \}\ c\ \{ B \} \) was the while rule:
\[
H_1 :: \vdash \{ A \land b \} c \{ A \}
\]
\[
\vdash \{ A \} \text{ while } b \text{ do } c \{ A \land \neg b \}
\]
• There are two possible rules at the root of \( D \).
- We do only the complicated case
\[
D_1 :: <b, \sigma> \downarrow \text{true} \quad D_2 :: <c, \sigma> \downarrow \sigma' \quad D_3 :: <\text{while } b \text{ do } c, \sigma'> \downarrow \sigma''
\]
\[
<\text{while } b \text{ do } c, \sigma> \downarrow \sigma''
\]
Soundness of the While Rule (Cont.)
Assume that $\sigma \models A$
To show that $\sigma'' \models A \land \neg b$
- By property of booleans and $D_1$ we get $\sigma \models b$
- Hence $\sigma \models A \land b$
- By IH on $H_1$ and $D_2$ we get $\sigma' \models A$
- By IH on $H$ and $D_3$ we get $\sigma'' \models A \land \neg b$, q.e.d.
- Note that in the last use of IH the derivation $H$ did not decrease
- See Winskel, Chapter 6.5 for a soundness proof with denotational semantics
Completeness of Axiomatic Semantics
Weakest Preconditions
Completeness of Axiomatic Semantics
• Is it true that whenever \( \models \{A\}\ c\ \{B\} \) we can also derive \( \vdash \{A\}\ c\ \{B\} \)?
• If it isn’t then it means that there are valid properties of programs that we cannot verify with Hoare rules
• Good news: for our language the Hoare triples are complete
• Bad news: only if the underlying logic is complete
(whenever \( \models A \) we also have \( \vdash A \))
- this is called relative completeness
Proof Idea
- Dijkstra’s idea: To verify that \( \{ A \} c \{ B \} \)
a) Find out all predicates \( A' \) such that \( \vdash \{ A' \} c \{ B \} \)
- call this set \( \text{Pre}(c, B) \)
b) Verify for one \( A' \in \text{Pre}(c, B) \) that \( A \Rightarrow A' \)
- Assertions can be ordered by strength: false (the strongest) implies everything, and everything implies true (the weakest); the weakest assertion in \( \text{Pre}(c, B) \) is the weakest precondition \( WP(c, B) \)
- Thus: compute \( \text{WP}(c, B) \) and prove \( A \Rightarrow \text{WP}(c, B) \)
Proof Idea (Cont.)
- Completeness of axiomatic semantics:
\[ \text{If } \models \{ A \}\ c\ \{ B \} \text{ then } \vdash \{ A \}\ c\ \{ B \} \]
- Assuming that we can compute \( wp(c, B) \) with the following properties:
1. \( wp \) is a precondition (according to the Hoare rules)
\[ \vdash \{ wp(c, B) \} c \{ B \} \]
2. \( wp \) is the weakest precondition
\[ \text{If } \models \{ A \}\ c\ \{ B \} \text{ then } \models A \Rightarrow wp(c, B) \]
\[ \vdash A \Rightarrow wp(c, B) \quad \vdash \{ wp(c, B) \} c \{ B \} \]
\[ \vdash \{ A \} c \{ B \} \]
- We also need that whenever \( \models A \) then \( \vdash A \)
**Weakest Preconditions**
- Define $wp(c, B)$ inductively on $c$, following Hoare rules:
\[
\frac{\{A\}\ c_1\ \{C\} \quad \{C\}\ c_2\ \{B\}}{\{A\}\ c_1; c_2\ \{B\}}
\qquad
wp(c_1; c_2, B) = wp(c_1, wp(c_2, B))
\]
\[
\{[e/x]B\}\ x := e\ \{B\}
\qquad
wp(x := e, B) = [e/x]B
\]
\[
\frac{\{A_1\}\ c_1\ \{B\} \quad \{A_2\}\ c_2\ \{B\}}{\{ (E \Rightarrow A_1) \land (\neg E \Rightarrow A_2) \}\ \text{if } E \text{ then } c_1 \text{ else } c_2\ \{B\}}
\]
\[
wp(\text{if } E \text{ then } c_1 \text{ else } c_2, B) = (E \Rightarrow wp(c_1, B)) \land (\neg E \Rightarrow wp(c_2, B))
\]
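- A worked instance of these equations (an added example): for the program from the earlier slide,
\[
wp(z := x;\ z := z + 1,\ y < z) = wp(z := x,\ [z+1/z](y < z)) = [x/z](y < z + 1) = y < x + 1
\]
and over the integers \( y \leq x \Rightarrow y < x + 1 \), which re-derives the valid triple \( \{ y \leq x \}\ z := x; z := z + 1\ \{ y < z \} \) seen earlier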
Weakest Preconditions for Loops
• We start from the equivalence
\[
\text{while } b \text{ do } c \;=\; \text{if } b \text{ then } (c;\ \text{while } b \text{ do } c) \text{ else skip}
\]
• Let \( w = \text{while } b \text{ do } c \) and \( W = \text{wp}(w, B) \)
• We have that
\[
W = (b \Rightarrow \text{wp}(c, W)) \land (\neg b \Rightarrow B)
\]
• But this is a recursive equation!
- We know how to solve these using domain theory
• We need a domain for assertions
A Partial-Order for Assertions
• What is the assertion that contains least information?
- true - does not say anything about the state
• What is an appropriate information ordering?
\[ A \sqsubseteq A' \iff \models A' \Rightarrow A \]
• Is this partial order complete?
- Take a chain \( A_1 \sqsubseteq A_2 \sqsubseteq \ldots \)
- Let \( \bigwedge A_i \) be the infinite conjunction of \( A_i \)
\[ \sigma \models \bigwedge A_i \iff \text{for all } i \text{ we have that } \sigma \models A_i \]
- Verify that \( \bigwedge A_i \) is the least upper bound
• Can \( \bigwedge A_i \) be expressed in our language of assertions?
- In many cases yes, we’ll assume yes for now
Weakest Precondition for WHILE
- Use the fixed-point theorem
\[ F(A) = (b \Rightarrow wp(c, A)) \land (\neg b \Rightarrow B) \]
- Verify that \( F \) is both monotonic and continuous
- The least-fixed point (i.e. the weakest fixed point) is
\[ wp(w, B) = \bigwedge_{i \geq 0} F^i(\text{true}) \]
- Notice that unlike for denotational semantics of IMP we are not working on a flat domain!
Weakest Preconditions (Cont.)
• Define a family of wp’s
- $wp_k(\text{while } e \text{ do } c, B)$ = the weakest precondition guaranteeing that if the loop terminates in $k$ or fewer iterations, then it terminates in a state satisfying $B$ (see the worked instance at the end of this slide)
- $wp_0 = \neg E \Rightarrow B$
- $wp_1 = E \Rightarrow wp(c, wp_0) \land \neg E \Rightarrow B$
- ...
• $wp(\text{while } e \text{ do } c, B) = \bigwedge_{k \geq 0} wp_k = \text{lub } \{wp_k \mid k \geq 0\}$
• Weakest preconditions are
- Impossible to compute (in general)
- Can we find something easier to compute yet sufficient?
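• Worked instance (an added example, not from the original slides) for the earlier loop $\text{while } x \leq 5 \text{ do } x := x + 1$ with $B = (x = 6)$:
\[ wp_0 = \neg(x \leq 5) \Rightarrow x = 6 \]
\[ wp_1 = (x \leq 5 \Rightarrow [x+1/x]\,wp_0) \land (\neg(x \leq 5) \Rightarrow x = 6) = (x \leq 5 \Rightarrow (x + 1 > 5 \Rightarrow x + 1 = 6)) \land (x > 5 \Rightarrow x = 6) \]
Each $wp_k$ only constrains executions of at most $k$ iterations; the full $wp$ is the conjunction of all the $wp_k$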
Verification Conditions
Not Quite Weakest Preconditions
- Recall what we are trying to do: assertions are ordered by implication from false (strong) to true (weak); within \( \text{Pre}(c, B) \) the weakest element is the weakest precondition \( WP(c, B) \), and the verification condition \( VC(c, B) \) sits between \( A \) and \( WP(c, B) \)
- \textbf{We shall construct a verification condition: } \( VC(c, B) \)
- The loops are annotated with loop invariants!
- \( VC \) is guaranteed stronger than \( WP \)
- But hopefully still weaker than \( A \): \( A \Rightarrow VC(c, B) \Rightarrow WP(c, B) \)
Verification Conditions
• Factor out the hard work
- Loop invariants
- Function specifications
• Assume programs are annotated with such specs.
- Good software engineering practice anyway
• We will assume that the new form of the while construct includes an invariant:
\[
\text{while}_{I} b \text{ do } c
\]
- The invariant formula must hold every time before \( b \) is evaluated
Verification Condition Generation (1)
- Mostly follows the definition of the wp function
\[
\begin{align*}
VC(\text{skip}, B) &= B \\
VC(c_1; c_2, B) &= VC(c_1, VC(c_2, B)) \\
VC(\text{if } b \text{ then } c_1 \text{ else } c_2, B) &= (b \Rightarrow VC(c_1, B)) \land (\neg b \Rightarrow VC(c_2, B)) \\
VC(x := e, B) &= [e/x]B \\
VC(\text{while } b \text{ do } c, B) &= \ ?
\end{align*}
\]
Verification Condition Generation for WHILE
\[
VC(\text{while}_I\ e\ \text{do}\ c, B) = \\
\qquad I \land \big(\forall x_1 \ldots x_n.\ I \Rightarrow ((e \Rightarrow VC(c, I)) \land (\neg e \Rightarrow B))\big)
\]
- \(I\) holds on entry
- \(I\) is preserved in an arbitrary iteration
- \(B\) holds when the loop terminates in an arbitrary iteration
- \(I\) is the loop invariant (provided externally)
- \(x_1, \ldots, x_n\) are all the variables modified in \(c\)
- The \(\forall\) is similar to the \(\forall\) in mathematical induction:
\[
P(0) \land \forall n \in \mathbb{N}. P(n) \Rightarrow P(n+1)
\]
VC and Invariants
• Consider the Hoare triple:
\{x \leq 0\} \text{ while } x \leq 5 \text{ do } x := x + 1 \{x = 6\}
• The VC for this is:
\[
x \leq 0 \Rightarrow I(x) \,\land\, \forall x.\ \big(I(x) \Rightarrow ((x > 5 \Rightarrow x = 6) \land (x \leq 5 \Rightarrow I(x+1)))\big)
\]
• Requirements on the invariant:
- Holds on entry \quad \forall x. x \leq 0 \Rightarrow I(x)
- Preserved by the body \quad \forall x. I(x) \land x \leq 5 \Rightarrow I(x+1)
- Useful \quad \forall x. I(x) \land x > 5 \Rightarrow x = 6
• Check that \(I(x) = x \leq 6\) satisfies all constraints
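• Worked check (an added example): with \( I(x) = x \leq 6 \), all three constraints hold over the integers:
\[ \forall x.\ x \leq 0 \Rightarrow x \leq 6 \qquad \forall x.\ x \leq 6 \land x \leq 5 \Rightarrow x + 1 \leq 6 \qquad \forall x.\ x \leq 6 \land x > 5 \Rightarrow x = 6 \]
(the last one because the only integer with \( 5 < x \leq 6 \) is \( x = 6 \)), so the VC is valid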
Memory Aliasing
Hoare Rules: Assignment
• When is the following Hoare triple valid?
\[ \{ A \} \quad *x := 5 \quad \{ *x + *y = 10 \} \]
• \( *y = 5 \) or \( x = y \)
• The Hoare rule for assignment would give us:
\[
[5/{*x}]\ (*x + *y = 10)
\;=\; (5 + *y = 10)
\;=\; (*y = 5) \quad \text{(we lost one case)}
\]
• How come the rule does not work?
Handling Program State
• **We cannot have side-effects in assertions**
- While creating the VC we must remove side-effects!
- But how to do that when lacking precise aliasing information?
• **Important technique: Postpone alias analysis**
• **Model the state of memory as a symbolic mapping from addresses to values:**
- If $E$ denotes an address and $M$ a memory state then:
- $\text{sel}(M,E)$ denotes the contents of the memory cell
- $\text{upd}(M,E,V)$ denotes a new memory state obtained from $M$ by writing $V$ at address $E$
Hoare Rules: Side-Effects
- To model writes correctly we use memory expressions
- A memory write changes the value of memory
\[
\{ [\text{upd}(\mu, E_1, E_2)/\mu]\,B \}\ *E_1 := E_2\ \{B\}
\]
- Important technique: treat memory as a whole
- And reason later about memory expressions with inference rules such as (McCarthy):
\[
\text{sel}(\text{upd}(M, E_1, E_2), E_3) = \begin{cases}
E_2 & \text{if } E_1 = E_3 \\
\text{sel}(M, E_3) & \text{if } E_1 \neq E_3
\end{cases}
\]
Memory Aliasing
• Consider again: \{ A \} *x := 5 \{ *x + *y = 10 \}
• We obtain:
\[ A = [\text{upd}(\mu, x, 5)/\mu] (*x + *y = 10) \]
\[ = [\text{upd}(\mu, x, 5)/\mu] (\text{sel}(\mu, x) + \text{sel}(\mu, y) = 10) \]
\[ = \text{sel}(\text{upd}(\mu, x, 5), x) + \text{sel}(\text{upd}(\mu, x, 5), y) = 10 \quad (*) \]
\[ = 5 + \text{sel}(\text{upd}(\mu, x, 5), y) = 10 \]
\[ = \text{if } x = y \text{ then } 5 + 5 = 10 \text{ else } 5 + \text{sel}(\mu, y) = 10 \]
\[ = x = y \text{ or } *y = 5 \quad (**) \]
• Deriving (*) is theorem generation
• From (*) to (**) is theorem proving
Mutable Records - Two Models
- Let \( r : \text{RECORD} \ f_1 : T_1; \ f_2 : T_2 \ \text{END} \)
- Records are reference types
- **Method 1**
- One “memory” for each record
- One index constant for each field. We postulate \( f_1 \neq f_2 \)
- \( r.f_1 \) is \( \text{sel}(r,f_1) \) and \( r.f_1 := E \) is \( r := \text{upd}(r,f_1,E) \)
- **Method 2**
- One “memory” for each field
- The record address is the index
- \( r.f_1 \) is \( \text{sel}(f_1,r) \) and \( r.f_1 := E \) is \( f_1 := \text{upd}(f_1,r,E) \)
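- As a small added illustration of Method 2 (not from the original slides): after \( r.f_1 := E \), i.e., \( f_1 := \text{upd}(f_1, r, E) \), McCarthy's rule gives \( \text{sel}(\text{upd}(f_1, r, E), r) = E \) and, for a different record \( r' \neq r \), \( \text{sel}(\text{upd}(f_1, r, E), r') = \text{sel}(f_1, r') \); the field \( f_2 \) is untouched because only the \( f_1 \) memory changed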
Next Time
- ESC/Java
|
{"Source-Url": "https://web.cs.ucdavis.edu/~su/teaching/ecs240-w17/lectures/lecture10-11.pdf", "len_cl100k_base": 8381, "olmocr-version": "0.1.53", "pdf-total-pages": 53, "total-fallback-pages": 0, "total-input-tokens": 84847, "total-output-tokens": 10773, "length": "2e13", "weborganizer": {"__label__adult": 0.0003838539123535156, "__label__art_design": 0.0003695487976074219, "__label__crime_law": 0.00046133995056152344, "__label__education_jobs": 0.002685546875, "__label__entertainment": 7.861852645874023e-05, "__label__fashion_beauty": 0.00016129016876220703, "__label__finance_business": 0.0002503395080566406, "__label__food_dining": 0.0004544258117675781, "__label__games": 0.0007448196411132812, "__label__hardware": 0.0009326934814453124, "__label__health": 0.00042891502380371094, "__label__history": 0.00027251243591308594, "__label__home_hobbies": 0.0001405477523803711, "__label__industrial": 0.0006475448608398438, "__label__literature": 0.0005879402160644531, "__label__politics": 0.00032973289489746094, "__label__religion": 0.0005655288696289062, "__label__science_tech": 0.0296783447265625, "__label__social_life": 0.00012552738189697266, "__label__software": 0.004924774169921875, "__label__software_dev": 0.95458984375, "__label__sports_fitness": 0.0003190040588378906, "__label__transportation": 0.000762939453125, "__label__travel": 0.00021386146545410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24625, 0.01125]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24625, 0.44173]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24625, 0.74074]], "google_gemma-3-12b-it_contains_pii": [[0, 59, false], [59, 431, null], [431, 922, null], [922, 1221, null], [1221, 1661, null], [1661, 1771, null], [1771, 2314, null], [2314, 2793, null], [2793, 3287, null], [3287, 3800, null], [3800, 4269, null], [4269, 4701, null], [4701, 5485, null], [5485, 6085, null], [6085, 6594, null], [6594, 6962, null], [6962, 7439, null], [7439, 7876, null], [7876, 8363, null], [8363, 8857, null], [8857, 9609, null], [9609, 10407, null], [10407, 11031, null], [11031, 11401, null], [11401, 11577, null], [11577, 12280, null], [12280, 12738, null], [12738, 13348, null], [13348, 13778, null], [13778, 14335, null], [14335, 14827, null], [14827, 14885, null], [14885, 15366, null], [15366, 15919, null], [15919, 16560, null], [16560, 17259, null], [17259, 17737, null], [17737, 18424, null], [18424, 18802, null], [18802, 19373, null], [19373, 19397, null], [19397, 20016, null], [20016, 20415, null], [20415, 20864, null], [20864, 21464, null], [21464, 22082, null], [22082, 22098, null], [22098, 22464, null], [22464, 23010, null], [23010, 23486, null], [23486, 24075, null], [24075, 24604, null], [24604, 24625, null]], "google_gemma-3-12b-it_is_public_document": [[0, 59, true], [59, 431, null], [431, 922, null], [922, 1221, null], [1221, 1661, null], [1661, 1771, null], [1771, 2314, null], [2314, 2793, null], [2793, 3287, null], [3287, 3800, null], [3800, 4269, null], [4269, 4701, null], [4701, 5485, null], [5485, 6085, null], [6085, 6594, null], [6594, 6962, null], [6962, 7439, null], [7439, 7876, null], [7876, 8363, null], [8363, 8857, null], [8857, 9609, null], [9609, 10407, null], [10407, 11031, null], [11031, 11401, null], [11401, 11577, null], [11577, 12280, null], [12280, 12738, null], [12738, 13348, null], [13348, 13778, null], [13778, 14335, null], [14335, 14827, null], [14827, 14885, 
null], [14885, 15366, null], [15366, 15919, null], [15919, 16560, null], [16560, 17259, null], [17259, 17737, null], [17737, 18424, null], [18424, 18802, null], [18802, 19373, null], [19373, 19397, null], [19397, 20016, null], [20016, 20415, null], [20415, 20864, null], [20864, 21464, null], [21464, 22082, null], [22082, 22098, null], [22098, 22464, null], [22464, 23010, null], [23010, 23486, null], [23486, 24075, null], [24075, 24604, null], [24604, 24625, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24625, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24625, null]], "pdf_page_numbers": [[0, 59, 1], [59, 431, 2], [431, 922, 3], [922, 1221, 4], [1221, 1661, 5], [1661, 1771, 6], [1771, 2314, 7], [2314, 2793, 8], [2793, 3287, 9], [3287, 3800, 10], [3800, 4269, 11], [4269, 4701, 12], [4701, 5485, 13], [5485, 6085, 14], [6085, 6594, 15], [6594, 6962, 16], [6962, 7439, 17], [7439, 7876, 18], [7876, 8363, 19], [8363, 8857, 20], [8857, 9609, 21], [9609, 10407, 22], [10407, 11031, 23], [11031, 11401, 24], [11401, 11577, 25], [11577, 12280, 26], [12280, 12738, 27], [12738, 13348, 28], [13348, 13778, 29], [13778, 14335, 30], [14335, 14827, 31], [14827, 14885, 32], [14885, 15366, 33], [15366, 15919, 34], [15919, 16560, 35], [16560, 17259, 36], [17259, 17737, 37], [17737, 18424, 38], [18424, 18802, 39], [18802, 19373, 40], [19373, 19397, 41], [19397, 20016, 42], [20016, 20415, 43], [20415, 20864, 44], [20864, 21464, 45], [21464, 22082, 46], [22082, 22098, 47], [22098, 22464, 48], [22464, 23010, 49], [23010, 23486, 50], [23486, 24075, 51], [24075, 24604, 52], [24604, 24625, 53]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24625, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
7c2fbfdc862f593528d07675aa564a732ceef921
|
Image Processing with CUDA
Jia Jun Tse
University of Nevada, Las Vegas, jiajtse@gmail.com
Repository Citation
https://digitalscholarship.unlv.edu/thesesdissertations/1699
IMAGE PROCESSING WITH CUDA
by
Jia Tse
Bachelor of Science,
University of Nevada, Las Vegas
2006
A thesis submitted in partial fulfillment of
the requirements for the
Master of Science Degree in Computer Science
School of Computer Science
Howard R. Hughes College of Engineering
The Graduate College
University of Nevada, Las Vegas
August 2012
THE GRADUATE COLLEGE
We recommend the thesis prepared under our supervision by
Jia Tse
entitled
Image Processing with Cuda
be accepted in partial fulfillment of the requirements for the degree of
Master of Science in Computer Science
School of Computer Science
Ajoy K. Datta, Committee Chair
Lawrence L. Larmore, Committee Member
Yoohwan Kim, Committee Member
Venkatesan Muthukumar, Graduate College Representative
Thomas Piechota, Ph. D., Interim Vice President for Research and Graduate Studies and Dean of the Graduate College
August 2012
Abstract
This thesis puts to the test the power of parallel computing on the GPU against the massive computations needed in image processing of large images. The GPU has long been used to accelerate 3D applications. With the advent of high level programmable interfaces, programming to the GPU is simplified and is being used to accelerate a wider class of applications. More specifically, this thesis focuses on CUDA as its parallel programming platform.
This thesis explores the possible performance gains that can be achieved by using CUDA on image processing. Two well known algorithms for image blurring and edge detection are used in the experiment. Benchmarks are done between the parallel implementation and the sequential implementation.
Acknowledgements
I would like to express my deepest sincere gratitude to my adviser Dr. Ajoy K. Datta for sticking with me through this entire time. He is one of the best CS professors at UNLV, and I consider myself fortunate to be one of his students. His patience and guidance are what made this thesis possible.
I would also like to thank Dr. Larmore, Dr. Kim and Dr. Muthukumar for their time in reviewing my report and their willingness to serve on my committee.
I thank my family and friends for their unconditional support in finishing this thesis.
Jia Tse
University of Nevada, Las Vegas
August 2012
Contents
Abstract iii
Acknowledgements iv
Contents v
List of Tables vii
List of Figures viii
Listing ix
1 Introduction 1
2 CUDA 3
2.1 GPU Computing and GPGPU 3
2.2 CUDA architecture 8
2.3 CUDA Programming Model 10
2.4 CUDA Thread Hierarchy 15
2.5 CUDA Memory 23
2.6 Limitations of CUDA 25
2.7 Common CUDA APIs 26
3 Image Processing and CUDA
3.1 Gaussian Blur ...................................................... 30
3.2 Sobel Edge Detection ............................................. 31
3.3 Gaussian Blur Implementation .................................... 32
3.3.1 Implementation ............................................... 33
3.3.2 Breaking Down CUDA ........................................ 37
3.4 Sobel Edge Detection Implementation ............................ 38
3.4.1 Implementation ............................................... 38
4 Results .......................................................... 43
5 Conclusion and Future Work ...................................... 45
Appendix A: Glossary ............................................... 47
Bibliography .......................................................... 50
Vita ................................................................. 55
List of Tables
4.1 Results of the Gaussian Blur ........................................ 43
4.2 Results of the Sobel Edge Detection ............................ 44
List of Figures
2.1 GPU vs CPU on floating point calculations ........................................ 5
2.2 CPU and GPU chip design .......................................................... 5
2.3 Products supporting CUDA ......................................................... 7
2.4 GPU Architecture. TPC: Texture/processor cluster; SM: Streaming Multiprocessor; SP: Streaming Processor ......... 8
2.5 Streaming Multiprocessor ......................................................... 9
2.6 The compilation process for source file with host & device code ................. 11
2.7 CUDA architecture ................................................................ 11
2.8 CUDA architecture ................................................................ 12
2.9 Execution of a CUDA program ...................................................... 14
2.10 Grid of thread blocks ........................................................... 16
2.11 A grid with dimension (2,2,1) and a block with dimension (4,2,2) ............... 18
2.12 A 1-dimensional 10 x 1 block .................................................... 19
2.13 Each thread computing the square of its own value ............................... 20
2.14 A device with more multiprocessors will automatically execute a kernel grid in less time than a device with fewer multiprocessors ......... 21
2.15 Different memory types: Constant, Global, Shared and Register memory ........... 24
3.1 Discrete kernel at (0,0) and $\sigma = 1$ ........................................ 31
Listing
2.1 Sample source code with Host & Device code ..................................... 13
2.2 Memory operations in a CUDA program .............................................. 14
2.3 Invoking a kernel with a 2 x 2 x 1 grid and a 4 x 2 x 2 block ................. 17
2.4 A program that squares an array of numbers ....................................... 18
2.5 Copying data from host memory to device memory and vice versa ............ 24
3.1 Sequential and Parallel Implementation of the Gaussian Blur ................. 33
3.2 This calls a CUDA library to allocate memory on the device to d_pixels ....... 37
3.3 Copies the contents of the host memory to the device memory referenced by d_pixels 37
3.4 CUDA calls to create/start/stop the timer .......................................... 37
3.5 Declares block sizes of 16 x 16 for 256 threads per block ..................... 37
3.6 This tells us that we want to have a w/16 x h/16 size grid .................... 37
3.7 Invokes the device method d_blur passing in the parameters.................. 37
3.8 Finding the current pixel location ..................................................... 37
3.9 This forces the threads to synchronize before executing further instructions.. 38
3.10 This saves the image to a PGM file .................................................. 38
3.11 Sequential and Parallel Implementation of the Sobel Edge Detection ......... 38
Chapter 1
Introduction
Graphics cards are widely used to accelerate gaming and 3D graphics applications. The GPU (Graphics Processing Unit) of a graphics card is built for compute-intensive and highly parallel computations. With the prevalence of high level APIs (CUDA - Compute Unified Device Architecture), the power of the GPU is being leveraged to accelerate more general purpose and high performance applications. It has been used in accelerating database operations[1], solving differential equations[2], and geometric computations[3].
Image processing is a well known and established research field. It is a form of signals processing in which the input is an image, and the output can be an image or anything else that undergoes some meaningful processing. Altering an image to be brighter, or darker is an example of a common image processing tool that is available in basic image editors.
Often, processing happens on the entire image, and the same steps are applied to every pixel of the image. This means a lot of repetition of the same work. Newer technology allows better quality images to be taken. This equates to bigger files and longer processing time. With the advancement of CUDA, programming to the GPU is simplified. The technology is ready to be used as a problem solving tool in the field of image processing.
This thesis shows the vast performance gain of using CUDA for image processing. Chapter
two gives an overview of the GPU, and gets into the depths of CUDA, its architecture and its programming model. Chapter three consists of the experimental section of this thesis. It provides both the sequential and parallel implementations of two common image processing techniques: image blurring and edge detection. Chapter four shows the experiment results and the thesis is concluded in chapter five.
Chapter 2
CUDA
CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by NVidia for massively parallel high-performance computing. It is the compute engine in the GPU and is accessible by developers through standard programming languages. CUDA technology is proprietary to NVidia video cards.
NVidia provides APIs in their CUDA SDK to give a level of hardware extraction that hides the GPU hardware from developers. Developers no longer have to understand the complexities behind the GPU. All the intricacies of instruction optimizations, thread and memory management are handled by the API. One benefit of the hardware abstraction is that this allows NVidia to change the GPU architecture in the future without requiring the developers to learn a new set of instructions.
2.1 GPU Computing and GPGPU
The Graphics Processing Unit (GPU) is a processor on a graphics card specialized for compute-intensive, highly parallel computation. It is primarily designed for transforming, rendering and accelerating graphics. It has millions of transistors, much more than the Central Processing Unit
(CPU), specializing in floating point arithmetic. Floating point arithmetic is what graphics rendering is all about. The GPU has evolved into a highly parallel, multithreaded processor with exceptional computational power. The GPU, since its inception in 1999, has been a dominant technology in accelerated gaming and 3D graphics application.
The main difference between a CPU and a GPU is that a CPU is a serial processor while the GPU is a stream processor. A serial processor, based on the Von Neumann architecture executes instructions sequentially. Each instruction is fetched and executed by the CPU one at a time. A stream processor on the other hand executes a function (kernel) on a set of input data (stream) simultaneously. The input elements are passed into the kernel and processed independently without dependencies among other elements. This allows the program to be executed in a parallel fashion.
Due to their highly parallel nature, GPUs are outperforming CPUs by an astonishing rate on floating point calculations (Figure 2.1)[4]. The main reason for the performance difference lies in the design philosophies between the two types of processors (Figure 2.2)[4]. The CPU is optimized for high performance on sequential operations. It makes use of sophisticated control logic to manage the execution of many threads while maintaining the appearance of a sequential execution. The large cache memories used to reduce access latency and slow memory bandwidth also contribute to the performance gap.
The design philosophy for GPUs on the other hand is driven by the fast growing video game industry that demands the ability to perform massive floating-point calculations in advanced video games. The motivation is to optimize the execution of massive number of threads, minimize control logic, and have small memory caches so that more chip area can be dedicated to floating-point
calculations. This trade-off makes the GPU less efficient at sequential tasks designed for the CPU.
Recognizing the huge potential performance gains, developers hungry for high performance began using the GPU for non graphics purposes. Improvements in the programmability of graphics hardware further drove GPU programming. Using high-level shading languages such as DirectX, OpenGL and Cg, various data parallel algorithms can be mapped onto the graphics API. A traditional graphics shader is hardwired to only do graphical operations, but now it is used in everyday general-purpose computing. Researchers have discovered that the GPU can accelerate certain problems by over an order of magnitude over the CPU. Using the GPU for general purpose computing creates a phenomenon known as GPGPU.
GPGPU is already being used to accelerate applications over a wide range of cross-disciplinary fields. Many applications that process large data sets take advantage of the parallel programming model by mapping their data elements to parallel processing threads. Purcell and Carr illustrate how this mapping is done for ray-tracing[5][6]. Similarly, this concept can be applied to other fields. The GPU is also being adopted in accelerating database operations[1][7][8][9][10][11]. Work has been done using the GPU for geometric computations[3][12][13][14], linear algebra[15], solving partial differential equations[2][16] and solving matrices[17][18]. As the GPU’s floating-point processing performance continues to outpace the CPU, more data parallel applications are expected to be done on the GPU.
While the GPGPU model has its advantages, programmers initially faced many challenges in porting algorithms from the CPU over to the GPU. Because the GPU was originally driven and designed for graphics processing and video games, the programming environment was tightly constrained. The programmer requires a deep understanding of the graphics API and GPU architecture. These APIs severely limit the kind of applications that can be written on this platform. Expressing algorithms in terms of vertex coordinates and shader programs increased programming complexity. As a result, the learning curve is steep and GPGPU programming is not widespread.
Higher-level language constructs are built to abstract the details of shader languages. The Brook
Specifications is created in 2003 by Stanford as an extension of the C language to efficiently incorporate ideas of parallel computing into a familiar language[19]. In 2006 a plethora of platforms including Microsoft’s Accelerator[20], the RapidMind Multi-Core Development Platform[21] and the PeakStream Platform[22] emerge. RapidMind is later acquired by Intel in 2009 and Peakstream is acquired by Google in 2007. By 2008 Apple released OpenCL[23], and AMD released its Stream Computing software development kit (SDK) that is built on top of the Brook Specifications. Microsoft released DirectCompute as part of its DirectX 11 package. NVidia released its Compute Unified Device Architecture (CUDA) as part of its parallel computing architecture. Popular commercial vendors such as Adobe and Wolfram are releasing cuda-enabled versions of their products (Figure 2.3)[24].
It is important to note that GPU processing is not meant to replace CPU processing. There are simply algorithms that run more efficiently on the CPU than on the GPU. Not everything can be executed in a parallel manner. GPUs, however, offer an efficient alternative for certain types of problems. The prime candidates for GPU parallel processing are algorithms that have components that require a repeated execution of the same calculations, and those components must be able to be executed independently of each other. Chapter 3 explores image processing algorithms that fit this paradigm well.
2.2 CUDA architecture
A typical CUDA architecture consists of the components as illustrated in Figure 2.4[25]. The Host CPU, Bridge and System memory are external to the graphics card, and are collectively referred to as the host. All remaining components form the GPU and the CUDA architecture, and are collectively referred to as the device. The host interface unit is responsible for communication such as responding to commands, and facilitating data transfer between the host and the device.
The input assembler collects geometric primitives and outputs a stream to the work distributors[25]. The work distributors forward the stream in a round robin fashion to the Streaming Processor Array (SPA). The SPA is where all the computation takes place. The SPA is an array of Texture/Processor Clusters (TPC) as shown in Figure 2.4[25]. Each TPC contains a geometry controller, a Streaming Multiprocessor (SM) controller (SMC), a texture unit and 2 SMs. The texture unit is used by the SM as a third execution unit and the SMC is used by the SM to implement external memory load, store and atomic access. A SM is a multiprocessor that executes vertex, geometry and other shader programs as well as parallel computing programs. Each SM contains 8 Streaming Processors (SP), and 2 Special Function Units (SFU) specializing in floating point functions such as square root and
transcendental functions and for attribute interpolations. It also contains an instruction cache, a constant cache, a multithreaded instruction fetch and issue unit (MT) and shared memory (Figure 2.5)[25]. Shared memory holds shared data between the SPs for parallel thread communication and cooperation. Each SP contains its own MAD and MUL units while sharing the 2 SFU with the other SPs.
Figure 2.5: Streaming Multiprocessor
A SM can execute up to 8 thread blocks, one for each SP. It is capable of efficiently executing hundreds of threads in parallel with zero scheduling overhead. The SMs employ the Single-Instruction, Multiple-Thread (SIMT) architecture to manage hundreds of concurrent threads[26]. GTX-200 series is equipped with 16 KB of shared memory per SM. In the GeForce 8-series GPU, each SP can handle up to 96 concurrent threads for a total of 768 threads per SM[27]. On a GPU with 16 SMs, up to 12,288 concurrent threads can be supported.
2.3 CUDA Programming Model
CUDA programming is a type of heterogeneous programming that involves running code on two different platforms: a host and a device. The host system consists primarily of the CPU, main memory and its supporting architecture. The device is generally the video card consisting of a CUDA-enabled GPU and its supporting architecture.
The source code for a CUDA program consists of both the host and device code mixed in the same file. Because the source code targets two different processing architectures, additional steps are required in the compilation process. The NVidia C Compiler (NVCC) first parses the source code and creates two separate files: one to be executed by the host and one for the device. The host file is compiled with a standard C/C++ compiler which produces standard CPU object files. The device file is compiled with the CUDA C Compiler (CUDACC) which produces CUDA object files. These object files are in an assembly language known as Parallel Thread eXecution or PTX files. PTX files are recognized by device drivers that are installed with NVidia graphics cards. The two resulting file sets are linked and a CPU-GPU executable is created (Figure 2.6)[28]. As shown in Figure 2.7[28] & 2.8[29], this type of architecture allows the flexibility for developers who are familiar with other languages to leverage the power of CUDA without having to learn a brand new language.
Figure 2.6: The compilation process for source file with host & device code
Figure 2.7: CUDA architecture
NVCC separates host from device code by identifying specific keywords that represent instructions for the device. Methods/Functions that are designed to execute on the device are called kernels. Kernels are typically executed by thousands to millions of threads to take advantage of data parallelism. Since all threads are executing the same code, this falls into the well known paradigm of Single Program Multiple Data (SPMD) widely used in parallel computing systems[30]. SPMD is an asynchronous version of another technique known as Single-Instruction Multiple-Data (SIMD). In SIMD, multiple processors execute the same program instructions (a function) on different data. The key difference between SIMD and SPMD is that SIMD executes the program instructions in lockstep. Every processor executes the identical instruction at any given time. SPMD however removes that restriction. This allows the possibility of having branching in the program instruction where the instructions executed by each processor are not always the same.
Listing 2.1 shows an example of a typical C program involving CUDA. `__global__` is a C extension that defines a kernel. The kernel is invoked inside the main function by using the <<< ... >>> syntax. dimBlock and dimGrid define the number of threads and their configuration when it executes in the kernel. Each thread that executes the kernel is assigned a unique thread id. A particular thread within the kernel can be identified by the combination of its blockIdx, blockDim and threadIdx.
This allows for the control of having different threads do different work.
Listing 2.1: Sample source code with Host & Device code
```c
// Kernel Definition
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N) {
        C[i][j] = A[i][j] + B[i][j];
    }
}

int main()
{
    // Kernel Invocation
    // (the definition of N and the device allocation of A, B, C are omitted in this sample)
    dim3 dimBlock(16, 16);
    dim3 dimGrid((N + dimBlock.x - 1) / dimBlock.x, (N + dimBlock.y - 1) / dimBlock.y);
    MatAdd<<<dimGrid, dimBlock>>>(A, B, C);
}
```
A CUDA program starts execution on the host (Figure 2.9)[31]. When it encounters the kernel, it launches the kernel and continues execution on the CPU without waiting for the completion of the kernel. The groups of threads created as a result of the kernel invocation are collectively referred to as a grid. The grid terminates when the kernel terminates. Currently in CUDA, only one kernel can be executed at a time. If the host encounters another kernel while a previous kernel is not yet complete, the CPU will stall until the kernel is complete. The next-generation architecture FERMI allows for the concurrent execution of multiple kernels.
In CUDA, the host and device have separate memory spaces. Variables and data in the host memory are not directly accessible by the GPU. The data allocated on the host must first be transferred to the device memory using the CUDA API. Similarly, the results from the device must be transferred back to the host. Memory management techniques must be applied on both platforms. Listing 2.2[31] shows a snippet of operations dealing with memory on the host and device. *cudaMalloc*, *cudaMemcpy*, *cudaFree* are all CUDA APIs that allocate, copy, and free memory, respectively, on the device.
```c
void MatrixMulOnDevice(float* M, float* N, float* P, int Width) {
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;   // device pointers

    // 1. Load M and N to device memory
    cudaMalloc((void**)&Md, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&Nd, size);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

    // Allocate P on the device
    cudaMalloc((void**)&Pd, size);
}
```
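The snippet above stops after allocation. A minimal sketch of the remaining steps (kernel launch, copying the result back, and freeing device memory) might look as follows; the kernel name `MatrixMulKernel` and the launch configuration are placeholders added here, not code from the thesis.

```c
// Remaining steps, continuing inside MatrixMulOnDevice (illustrative sketch):

// 2. Launch the kernel (MatrixMulKernel is a hypothetical kernel name)
dim3 dimBlock(16, 16);
dim3 dimGrid((Width + dimBlock.x - 1) / dimBlock.x,
             (Width + dimBlock.y - 1) / dimBlock.y);
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

// 3. Copy the result matrix back to host memory
cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);

// 4. Free device memory
cudaFree(Md);
cudaFree(Nd);
cudaFree(Pd);
```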
Figure 2.9: Execution of a CUDA program
2.4 CUDA Thread Hierarchy
Threads on the device are automatically invoked when a kernel is being executed. The programmer determines the number of threads that best suits the given problem. The thread count along with the thread configurations are passed into the kernel. The entire collection of threads responsible for an execution of the kernel is called a grid (Figure 2.10)[4].
A grid is further partitioned and can consist of one or more thread blocks. A block is an array of concurrent threads that execute the same thread program and can cooperate in achieving a result. In Figure 2.10[4], the blocks are organized into a 2 x 3 array. A thread block can be partitioned into one, two or three dimensions, facilitating calculations dealing with vectors, matrices or fields. Each block has its own unique block identifier. All threads within a block can cooperate with each other. They can share data by reading and writing to shared memory, and they can synchronize their execution by using `__syncthreads()`. `__syncthreads` acts as a barrier so that all threads of the same block must wait for all threads to execute before moving forward. This ensures that all threads
have finished executing a phase of their execution in the kernel before moving on to the next phase. `__syncthreads` is commonly used inside the kernel to coordinate read and write phases to shared memory. Since the data in the memory is shared, all threads must write first and read second.
Threads of different blocks cannot communicate with each other. In fact, thread blocks are required to be executable independently of other blocks, whether in series or in parallel. Like blocks, threads within a block can be strategically structured as well. Figure 2.10 shows a 3 x 4 array of threads within block (1,1). All blocks must contain the same number of threads and thread structure. Each block can have a maximum of up to 512 threads. The programmer has the freedom to structure the threads in any combination of up to three dimensions (512 x 1, 16 x 8 x 2, etc.) as long as the total number of threads does not exceed 512. The organization of blocks and threads can be established and passed to the kernel when it is invoked by the host. This configuration is maintained throughout the entire execution of the kernel.
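To make the write-then-read discipline around `__syncthreads()` concrete, here is a minimal sketch (added here, not taken from the thesis; all names are illustrative) in which each block reverses its chunk of an array through shared memory:

```c
#define BLOCK_SIZE 256

// Each block loads BLOCK_SIZE elements into shared memory (write phase),
// waits at the barrier, then reads the mirrored element (read phase).
// Assumes blockDim.x == BLOCK_SIZE and the array length is a multiple of BLOCK_SIZE.
__global__ void reverse_chunk(float *d_data)
{
    __shared__ float tile[BLOCK_SIZE];

    int local  = threadIdx.x;
    int global = blockIdx.x * blockDim.x + threadIdx.x;

    tile[local] = d_data[global];                    // write phase

    __syncthreads();                                 // no thread reads before all have written

    d_data[global] = tile[blockDim.x - 1 - local];   // read phase
}
```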
Block and grid dimensions can be initialized using the type `dim3`, which is essentially a struct with x, y, z fields. Listing 2.3 creates a 2 x 2 x 1 grid and each block has a dimension of 4 x 2 x 2. The threading configuration is then passed to the kernel. The resulting hierarchy can be graphically represented as shown in Figure 2.11[31]. Within the kernel, this information is stored as built-in variables. `blockDim` holds the dimension information of the current block. `blockIdx` and `threadIdx` provide the current block and thread index information. All `blockIdx`, `threadIdx`, `gridDim`, and `blockDim` have 3 dimensions: x, y, z. For example, block (1,1) has `blockIdx.x = 1` and `blockIdx.y = 1`.
Listing 2.3: Invoking a kernel with a 2 x 2 x 1 grid and a 4 x 2 x 2 block
```c
dim3 dimBlock(4, 2, 2);
dim3 dimGrid(2, 2, 1);
KernelFunction<<<dimGrid, dimBlock>>>(/* arguments */);
```
One of the main functions of `blockIdx` and `threadIdx` is to let a thread distinguish itself from other threads. One common usage is to determine which set of data a thread is responsible for. Listing 2.4 is a simple example of squaring all elements of a 1-dimensional array of size 10. To do that we create a 1-dimensional grid, containing a 1-dimensional 10 x 1 block. When the `square_array` kernel is called, it generates a threading configuration resembling Figure 2.12: a 1-dimensional array of 10 threads.
Listing 2.4: A program that squares an array of numbers
```c
__global__ void square_array(float *a, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N) {
        a[idx] = a[idx] * a[idx];
    }
}
```
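For completeness, a minimal host-side driver for this kernel might look like the following sketch. Only `square_array` itself comes from the thesis; the initialization values are illustrative and error checking is omitted:

```c
#include <stdlib.h>
#include <cuda.h>

/* Hypothetical host-side driver for square_array. */
int main(void) {
    int N = 10;
    size_t size = N * sizeof(float);
    float *a_h = (float *)malloc(size);     /* host array */
    for (int i = 0; i < N; i++) a_h[i] = (float)i;
    float *a_d;
    cudaMalloc((void **)&a_d, size);        /* device array */
    cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice);
    square_array<<<1, N>>>(a_d, N);         /* one block of 10 threads */
    cudaMemcpy(a_h, a_d, size, cudaMemcpyDeviceToHost);
    cudaFree(a_d);
    free(a_h);
    return 0;
}
```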
Figure 2.11: A grid with dimension (2,2,1) and a block with dimension (4,2,2)
Figure 2.12: A 1-dimensional 10 x 1 block
The code in the kernel identifies a thread by using `blockIdx.x`, `blockDim.x` and `threadIdx.x`. In this case, `blockIdx.x = 0`, `blockDim.x = 10` and `threadIdx.x` ranges from 0 to 9 inclusive, depending on which thread executes the kernel. Figure 2.13 is the result of executing kernel `square_array`. Each thread is responsible for computing the square of the value stored in the array at the index equal to its thread id. It is easily seen that each thread can operate independently of the others. Mapping thread ids to array indices is a common practice in parallel processing. A similar technique is used in mapping to matrices and fields.
Figure 2.13: Each thread computing the square of its own value
One limitation on blocks is that each block can hold up to 512 threads. In trivial cases where each thread is independent of other threads (such as `square_array` in the example above) the grid can simply be augmented to contain more blocks. Grid dimensions are limited to 65535 x 65535 x 1 blocks. For situations where each thread is dependent on other threads, such as the computation of a dot product that exceeds 512 in length, a more sophisticated technique is required. The programmer needs to be creative and craft a design that allows threads to be mapped to larger regions while not overlapping the work of other threads. Taking the `square_array` example, if the problem deals with 1024 elements, each thread can be responsible for the data at indices `threadIdx.x` and `threadIdx.x + blockDim.x`, where `blockDim.x = 512`.
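A minimal sketch of this idea, assuming a single block of 512 threads (the kernel name `square_array_1024` is hypothetical):

```c
/* Hypothetical variant: one block of 512 threads squares 1024 elements.
   Each thread handles indices threadIdx.x and threadIdx.x + blockDim.x. */
__global__ void square_array_1024(float *a) {
    int idx = threadIdx.x;              /* 0..511: first half */
    a[idx] = a[idx] * a[idx];
    int idx2 = idx + blockDim.x;        /* 512..1023: second half */
    a[idx2] = a[idx2] * a[idx2];
}
```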
Once a kernel is launched, the corresponding grid and block structure is created. The blocks are then assigned to an SM by the SMC (see CUDA architecture). Each SM executes up to 8 blocks concurrently. Remaining blocks are queued up until an SM is free. The SMCs are smart enough to monitor resource usage and not assign blocks to SMs that are deficient in resources. This ensures that all SMs are functioning at maximum capacity. As shown in Figure 2.14[4], the more SMs a graphics card has, the more blocks can be executed concurrently. Although each block can contain up to 512 threads, and each SM can execute up to a maximum of 8 concurrent blocks, it is not true that at any given time an SM can execute 4096 concurrent threads. Resources are required to maintain the thread and block ids and their execution state. Due to hardware limitations the SM can only manage up to 768 concurrent threads. However, those threads can be provided to the SM in any configuration of blocks. If a graphics card has 16 SMs, then the GPU can execute up to 12,288 threads concurrently.

**Figure 2.14:** A device with more multiprocessors will automatically execute a kernel grid in less time than a device with fewer multiprocessors
To manage and execute hundreds of concurrent threads efficiently, the SM uses a processor architecture known as Single-Instruction, Multiple-Thread (SIMT). The SIMT instruction unit subdivides threads within a block into groups of 32 parallel thread units called warps. Since an SM can handle up to 768 concurrent threads, it can support up to 24 warps. However, the SM's hardware is designed to execute only one warp at a time. The reason it is assigned multiple warps is to mask long-latency operations such as memory access. When an instruction executed by a thread in a warp requires it to wait, the warp is placed in a queue while the SM continues to execute other warps that are available. The SMC employs a priority scheme in assigning warps to the SM. A warp is a construct developed for thread scheduling within the SM. Although warps are not part of the CUDA language specification, it is beneficial to understand what warps are and how they are used. This knowledge provides an edge in optimizing performance of CUDA applications.
All threads of a warp are designed to execute the same block of code in lock step. When an instruction is issued, the SIMT unit selects a warp that is ready to execute. Full efficiency is achieved when all 32 threads can execute that instruction simultaneously. However, threads are free to branch and execute independently. If a particular thread of the warp diverges from the group based on a conditional branch, the warp will execute each branch serially. While a group of threads is executing a branch, all threads not part of that branch are disabled. When all threads finish executing their respective branches, the warp converges back to its original execution path. The SM manages branching threads by using a branch synchronization stack. The branching of threads in a warp is known as thread divergence, and should be avoided since it serializes execution. Divergence only occurs within warps; different warps are executed independently of each other regardless of the paths they take.
A warp always contains consecutive threads of increasing thread ids, and is always created the same way. The programmer can take advantage of this fact and use designs that minimize thread divergence.
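As an illustration (a sketch, not taken from the thesis), the first branch below diverges within a warp, while the second does not, because its condition evaluates identically for all 32 consecutive threads of a warp:

```c
/* Illustrative kernel contrasting divergent and non-divergent branches. */
__global__ void divergence_demo(int *out) {
    int v;
    /* Divergent: even and odd threads of the same warp take different
       paths, so the two branches execute serially. */
    if (threadIdx.x % 2 == 0) v = 1; else v = 2;

    /* Non-divergent: threadIdx.x / 32 is identical for all 32 threads
       of a warp, so each warp follows a single path. */
    if (threadIdx.x / 32 == 0) v += 10; else v += 20;

    out[blockIdx.x * blockDim.x + threadIdx.x] = v;
}
```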
SIMT is very similar to the SIMD and SPMD models described earlier. Like SIMD, SIMT has all threads execute the same instruction. However, similar to SPMD, the SIMT architecture is flexible enough to allow threads to follow different execution paths based on conditional branches. SIMT differs from SPMD in that SIMT refers to the management of threads within a warp, whereas SPMD focuses on the larger scale of a kernel. The SIMT model greatly increases the set of algorithms that can be run on this parallel architecture. The SIMT architecture is user friendly in that the programmer can ignore the entire SIMT behavior and the idea of warps. However, substantial performance gains can be achieved if thread divergence is avoided.
### 2.5 CUDA Memory
The typical flow of a CUDA program starts by loading data into host memory and from there transferring it to device memory. When an instruction is executed, the threads can retrieve the data needed from device memory. Memory access, however, can be slow and have limited bandwidth. With thousands of threads making memory calls, this can potentially become a bottleneck, rendering the SMs idle. To ease traffic congestion, CUDA provides several types of memory constructs that improve execution efficiency.
There are 4 major types of device memory: global, constant, shared and register memory (Figure 2.15)[31]. Global memory has the highest access latency of the four. A global variable is declared by using the keyword `__device__`. It is the easiest to use and requires very little strategy. It can easily be read and written by the host using CUDA APIs and it can be easily accessed by the device. As Listing 2.5 shows, the first step is to allocate global memory by using the `cudaMalloc` function. Then the data in the host is copied to the device by the `cudaMemcpy` function, where the constant `cudaMemcpyHostToDevice` indicates that the transfer is from host to device. After the computation is done, the same steps are applied to move the data back to the host. Finally the global memory allocated on the device is freed by the `cudaFree()` function. The only constraint on the usage of global memory is that it is limited by memory size. Data in global memory lasts for the duration of the entire application and is accessible by any thread across any grid. Global memory is the only way for threads from different blocks to communicate with each other. However, during the execution of a single grid, there is no way to synchronize threads from different blocks. Therefore, for practical purposes, global memory is most useful for saving information from one kernel invocation to be used by a future kernel invocation.
Figure 2.15: Different memory types: Constant, Global, Shared and Register memory
Listing 2.5: Copying data from host memory to device memory and vice versa
```c
cudaMalloc((void **) &a_d, size);   // Allocate array on device
cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice);
...
cudaMemcpy(a_h, a_d, size, cudaMemcpyDeviceToHost);
cudaFree(a_d);                      // Free memory on the device
```
Constant memory is very similar to global memory. In fact, these are the only two memories that the host can read and write to. The main difference from global memory is that constant memory is read-only to the device, because it is designed for faster parallel data access. Data is stored in global memory but is cached for efficient access. It allows for high-bandwidth, short-latency access when all threads simultaneously read from the same location. A constant variable is declared by using the keyword `__constant__`. Like global memory, constant memory also lasts for the entire duration of the application.
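A minimal usage sketch (the names are illustrative, and the host-side lines belong inside a host function such as `main`):

```c
/* Illustrative sketch: a 3 x 3 filter mask kept in constant memory. */
__constant__ int c_mask[3][3];

/* Host side, before launching any kernel that reads c_mask: */
int h_mask[3][3] = { {1, 2, 1}, {2, 3, 2}, {1, 2, 1} };
cudaMemcpyToSymbol(c_mask, h_mask, sizeof(h_mask));
```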
Shared memory is an on-chip memory that the host cannot access. This type of memory is allocated at the block level and can only be accessed by threads of that block. Shared memory is the most efficient way for threads of the same block to cooperate, usually by synchronizing read and write phases. It is much faster than using global memory for information sharing within a block. Shared memory is declared by using the keyword `__shared__`. It is typically used inside the kernel. The contents of the memory last for the entire duration of the kernel invocation.
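The write-then-read pattern described above might look like the following sketch (a hypothetical kernel, assuming one-dimensional blocks of 256 threads):

```c
/* Hypothetical kernel: threads stage data in shared memory, then each
   thread reads a value written by its neighbor. */
__global__ void neighbor_read(float *in, float *out) {
    __shared__ float tile[256];             /* one element per thread */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[i];              /* write phase */
    __syncthreads();                        /* all writes must finish */
    int n = (threadIdx.x + 1) % blockDim.x; /* read phase */
    out[i] = tile[n];
}
```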
The last type of memory is register memory. Registers are allocated to each individual thread, and are private to each thread. If there are 1 million threads declaring a variable, 1 million versions will be created and stored in their registers. Once the kernel invocation is complete, that memory is released. Variables declared inside a kernel (that are not arrays, and have no qualifying keyword) are automatically stored in registers. Variables that are arrays are stored in global memory, but since the variables are declared inside a kernel, the scope is still at the kernel level. Arrays inside a kernel are seldom needed.
### 2.6 Limitations of CUDA
One of the limitations of the early CUDA architecture is the lack of support for recursion. Mainly a hardware limitation, the stack and the overhead for recursion were too heavy to support. This limitation has been overcome in devices with CUDA compute capability 2.0 or greater, a new architecture code-named *Fermi*.
Another limitation is its partial compliance with the *IEEE-754* standard for binary floating-point arithmetic[4]. For single-precision floating-point numbers:
- Addition and multiplication are often combined into a single multiply-add operation (FMAD), which truncates the intermediate result of the multiplication
- Division is implemented via the reciprocal
- For addition and multiplication, only round-to-nearest-even and round-towards-zero are supported via static rounding modes
- Underflowed results are flushed to zero
For double-precision floating point numbers:
- Round-to-nearest-even is the only supported IEEE rounding mode for reciprocal, division and square root.
Finally, CUDA is a proprietary architecture owned by NVidia and is available through NVidia video cards only.
### 2.7 Common CUDA APIs
**Function Qualifiers**
- `__device__`
  - declares a function that is executed on the device and called by the device
  - does not support recursion
- `__global__`
  - declares a function that is executed on the device and called by the host
  - must have void as its return type
  - the function call is asynchronous
  - does not support recursion
- `__host__` - declares a function that is executed on the host and called by the host
**Variable Type Qualifiers**
- `__device__`
  - declares a variable on the device that resides in global memory
  - has the lifetime of an application
  - is accessible from all threads across all grids
  - can be read and written by both the host and the device
- `__constant__`
  - declares a variable on the device that resides in constant memory
  - has the lifetime of an application
  - is accessible from all threads across all grids
  - can be read and written by the host, and only read by the device
- `__shared__`
  - declares a variable on the device that resides in shared memory
  - has the lifetime of a block
  - is accessible (read/write) from all threads within the same block
**Built-In Variables**
- `gridDim` - contains the dimensions of the grid
- `blockDim` - contains the dimensions of the block
- `blockIdx` - contains the index of the block within the grid
- `threadIdx` - contains the index of the thread within the block
**Common Runtime Components**
- `dim3` - a type used to declare dimensions
- `__syncthreads()` - used to synchronize threads within a kernel
- `cudaThreadSynchronize()` - used to synchronize between kernels (blocks the host until the device finishes)
- `cudaMalloc()` - allocates memory on the device
- `cudaFree()` - frees allocated memory on the device
- `cudaMemcpy()` - copies memory content between the host and device
For a complete reference of the CUDA API, please visit NVidia’s website.
Chapter 3
Image Processing and CUDA
Image processing is a type of signal processing in which the input is an image, and the output can be an image or anything else produced by some meaningful processing. Converting a colored image to its grayscale representation is an example of image processing. Enhancing a dull and worn off fingerprint image is another example of image processing. More often than not, image processing happens on the entire image, and the same steps are repeatedly applied to every pixel of the image. This programming paradigm is a perfect candidate to fully leverage CUDA's massive compute capabilities.
This section will compare the performance differences between software run on a sequential processor (CPU) and software run on a parallel processor (GPU). The experiment consists of performing various image processing algorithms on a set of images. Image processing is ideal for running on the GPU because each pixel can be directly mapped to a separate thread.
The experiment will involve a series of image convolution algorithms. Convolutions are commonly used in a wide array of engineering and mathematical applications. A simple high-level explanation is that one matrix (the image) is passed through another matrix (the convolution matrix). The result is the convolved image. The second matrix is also called the filter.
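Formally, for a filter \( K \) of size \( (2k+1) \times (2k+1) \) and image \( A \) (ignoring the sign convention that distinguishes true convolution from correlation), the output pixel at \( (x, y) \) is:

\[ (A * K)(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} A(x + i, y + j)\, K(i, j) \]

Every output pixel is computed by the same weighted sum, which is why each pixel can be mapped to its own thread.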
### 3.1 Gaussian Blur
Image smoothing is a type of convolution most commonly used to reduce image noise and detail. This is generally done by passing the image through a low-pass filter. The filter will retain lower frequency values while reducing high frequency values. The image is smoothed by reducing the disparity between each pixel and its nearby pixels.
Image smoothing is sometimes used as a preprocessor for other image operations. Most commonly, an image is smoothed to reduce noise before an edge detection algorithm is applied. Smoothing can be applied to the same image over and over again until the desired effect is achieved.
A simple way to achieve smoothing is by using a mean filter. The idea is to replace each pixel with the average value of all neighboring pixels, including itself. One of the advantages of this approach is its simplicity and speed. However, a main disadvantage is that outliers, especially ones farthest from the pixel of interest, can misrepresent the true mean of the neighborhood.
Another way to smooth an image is to use the Gaussian blur[32]. The Gaussian blur is a more sophisticated image smoothing technique because it reduces the magnitude of high frequencies in proportion to their frequencies. It gives less weight to pixels further from the center of the window. The Gaussian function is defined as:
\[ G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2+y^2}{2\sigma^2}} \]
where \(\sigma\) is the standard deviation of the distribution. The discrete kernel at (0,0) and \(\sigma = 1\) is shown in Figure 3.1[33].
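As an illustration (a sketch, not part of the thesis code), the discrete weights of such a kernel can be obtained by sampling \( G(x, y) \) on an integer grid and normalizing so the weights sum to 1:

```c
#include <math.h>

/* Sketch: fill a (2k+1) x (2k+1) Gaussian kernel with standard
   deviation sigma, normalized so that the weights sum to 1. */
void gaussian_kernel(double *out, int k, double sigma) {
    const double pi = 3.14159265358979323846;
    int side = 2 * k + 1;
    double sum = 0.0;
    for (int y = -k; y <= k; y++) {
        for (int x = -k; x <= k; x++) {
            double g = exp(-(x * x + y * y) / (2.0 * sigma * sigma))
                       / (2.0 * pi * sigma * sigma);
            out[(y + k) * side + (x + k)] = g;
            sum += g;
        }
    }
    for (int i = 0; i < side * side; i++)
        out[i] /= sum;   /* normalize */
}
```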
### 3.2 Sobel Edge Detection
Edge detection is a common image processing technique used in feature detection and extraction. Applying edge detection to an image can significantly reduce the amount of data needing to be processed at a later phase while maintaining the important structure of the image. The idea is to remove everything from the image except the pixels that are part of an edge. These edges have special properties, such as corners, lines, curves, etc. A collection of these properties or features can be used for higher-level tasks such as image recognition.
An edge can be identified by significant local changes of intensity in an image[34]. An edge usually divides two different regions of an image. Most edge detection algorithms work best on an image that has had a noise removal procedure already applied. The main techniques existing today use differential operators and high-pass filtration.
A simple approach is to apply the Sobel edge detection algorithm. It involves convolving the image with an integer-valued filter, which is both simple and computationally inexpensive.
The Sobel filter is defined as:
\[ S_1 = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \quad S_2 = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} \]
To apply the Sobel algorithm to an image, we first find the approximate derivatives with respect to the horizontal and vertical directions. Let \( A \) be the original image, \( G_x \) the derivative approximation on the horizontal axis, and \( G_y \) the derivative approximation on the vertical axis.
\[ G_x = S_1 * A \]
\[ G_y = S_2 * A \]
where \( * \) denotes the two-dimensional convolution of the filter with the image.
The resulting gradient image is the combination of \( G_x \) and \( G_y \). Each pixel \( G(x, y) \) of the resulting image can be calculated by taking the magnitude of \( G_x \) and \( G_y \):
\[ G(x, y) = \sqrt{G_x^2 + G_y^2} \]
The gradient's direction is calculated by:
\[ \theta = \arctan \frac{G_y}{G_x} \]
Finally, to determine whether a pixel of the original image \( A \) is part of an edge, we apply:
if \( G(x, y) > \text{threshold} \), then \( A(x, y) \) is part of an edge. In practice, the magnitude is often approximated as \( |G_x| + |G_y| \) to avoid the square root; the implementation in Section 3.4 uses this approximation.
### 3.3 Gaussian Blur Implementation
To compare the speedup differences between processing on the CPU vs processing on the GPU, an experiment was done using the above algorithms in both the sequential and the parallel model. Both implementations are shown in the source code (Listing 3.1).
The programs are run on an Intel Core 2 Duo, 2GHz processor with an NVidia GeForce GTX 260. The graphics card contains 192 cores at 1.2 GHz each. Each algorithm is run against images that are 266 KB, 791 KB, and 7.7 MB in size. The images have dimensions of 512 × 512, 1024 × 768, and 3200 × 2400, respectively.
#### 3.3.1 Implementation
Listing 3.1: Sequential and Parallel Implementation of the Gaussian Blur
```c
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <cuda.h>
#include <cutil.h>
#include <ctime>
unsigned int width, height;
int mask[3][3] = { {1, 2, 1},
                   {2, 3, 2},
                   {1, 2, 1} };
int getPixel(unsigned char * arr, int col, int row){
int sum = 0;
for (int j=-1; j<=1; j++){
for (int i=-1; i<=1; i++){
int color = arr[(row + j) * width + (col + i)];
sum += color * mask[i+1][j+1];
}
}
return sum/15;
}
void h_blur(unsigned char * arr, unsigned char * result){
int offset = 2 * width;
for (int row = 2; row < height - 3; row++){
for (int col = 2; col < width - 3; col++){
result[offset + col] = getPixel(arr, col, row);
}
offset += width;
}
}
__global__ void d_blur(unsigned char * arr, unsigned char * result, int width, int height){
int col = blockIdx.x * blockDim.x + threadIdx.x;
int row = blockIdx.y * blockDim.y + threadIdx.y;
if (row < 2 || col < 2 || row >= height - 3 || col >= width - 3)
return;
int mask[3][3] = {1,2,1, 2,3,2, 1,2,1};
int sum = 0;
for (int j = -1; j <= 1; j++){
for (int i = -1; i <= 1; i++){
int color = arr[(row + j) * width + (col + i)];
sum += color * mask[i + 1][j + 1];
}
}
result[row * width + col] = sum/15;
}
int main(int argc, char** argv){
/********************* setup work ***************************/
unsigned char * d_resultPixels;
unsigned char * h_resultPixels;
unsigned char * h_pixels = NULL;
unsigned char * d_pixels = NULL;
char * srcPath = "/Developer/GPU Computing/C/src/GaussianBlur/image/wallpaper2.pgm";
char * h_ResultPath = "/Developer/GPU Computing/C/src/GaussianBlur/output/h_wallpaper2.pgm";
char * d_ResultPath = "/Developer/GPU Computing/C/src/GaussianBlur/output/d_wallpaper2.pgm";
cutLoadPGMub(srcPath, &h_pixels, &width, &height);
int ImageSize = sizeof(unsigned char) * width * height;
h_resultPixels = (unsigned char *)malloc(ImageSize);
cudaMalloc((void**)&d_pixels, ImageSize);
cudaMalloc((void**)&d_resultPixels, ImageSize);
cudaMemcpy(d_pixels, h_pixels, ImageSize, cudaMemcpyHostToDevice);
/******************************** END setup work ********************************/

/******************************** Host processing *******************************/
clock_t starttime, endtime, difference;
starttime = clock();
// apply gaussian blur
h_blur(h_pixels, h_resultPixels);
endtime = clock();
difference = endtime - starttime;
double interval = difference / (double)CLOCKS_PER_SEC;
printf("CPU execution time = %f ms\n", interval * 1000);
cutSavePGMub(h_ResultPath, h_resultPixels, width, height);
/**************************** END Host processing
**************************/
dim3 block(16,16);
dim3 grid (width/16, height/16);
unsigned int timer = 0;
cutCreateTimer(&timer);
cutStartTimer(timer);
/* CUDA method */
d_blur <<< grid, block >>>(d_pixels, d_resultPixels, width,
height);
cudaThreadSynchronize();
cutStopTimer(timer);
printf("CUDA execution time = %f ms\n",cutGetTimerValue(timer));
cudaMemcpy(h_resultPixels, d_resultPixels, ImageSize,
cudaMemcpyDeviceToHost);
cutSavePGMub(d_ResultPath, h_resultPixels, width, height);
/**************************** END Device processing
**************************/
printf("Press enter to exit ...\n");
getchar();
}
```
#### 3.3.2 Breaking Down CUDA
Listing 3.2: This calls a CUDA library function to allocate memory on the device for `d_pixels`
```c
cudaMalloc((void**)&d_pixels, ImageSize);
```
Listing 3.3: Copies the contents of the host memory to the device memory referenced by `d_pixels`
```c
cudaMemcpy(d_pixels, h_pixels, ImageSize, cudaMemcpyHostToDevice);
```
Listing 3.4: CUDA calls to create/start/stop the timer
```c
cutCreateTimer(&timer);
cutStartTimer(timer);
cutStopTimer(timer);
```
Listing 3.5: Declares block sizes of 16 x 16 for 256 threads per block.
```c
dim3 block(16,16);
```
Listing 3.6: This tells us that we want a \( w/16 \times h/16 \) size grid.
```c
dim3 grid(width/16, height/16);
```
If the image we are dealing with is 256 x 256, then the grid will be 16 x 16 and will contain 256 blocks. Since each block contains 256 threads, this amounts to 65536 threads, which is exactly the number of pixels in a 256 x 256 image. Note that the integer division assumes the image dimensions are multiples of 16; for other sizes the grid would need to be rounded up (for example, `(width + 15) / 16`) so that the edge pixels are still covered.
Listing 3.7: Invokes the device method `d_blur`, passing in the parameters.
```c
d_blur<<<grid, block>>>(d_pixels, d_resultPixels, width, height);
```
Listing 3.8: Finding the current pixel location.
```c
int col = blockIdx.x * blockDim.x + threadIdx.x;
int row = blockIdx.y * blockDim.y + threadIdx.y;
```
These two lines determine which thread processes which pixel of the image. As calculated above, there are 65536 threads operating on 65536 pixels. Each thread should work on its own unique pixel and avoid processing the pixels owned by other threads. Since each thread is uniquely identified by its thread id and block id, and we know the dimensions of the block, we can use the technique above to assign a unique pixel coordinate for each thread to work on.
Listing 3.9: This blocks the host until all threads of the kernel have finished executing.
```c
cudaThreadSynchronize();
```
Listing 3.10: This saves the image to a PGM file.
```c
cutSavePGMub(d_ResultPath, h_resultPixels, width, height);
```
### 3.4 Sobel Edge Detection Implementation
The Sobel edge detection algorithm is also implemented in both the sequential and the parallel version. It is run on the same hardware and uses the same images as those used in the Gaussian blur experiment.
#### 3.4.1 Implementation
Listing 3.11: Sequential and Parallel Implementation of the Sobel Edge Detection
```c
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <cuda.h>
#include <cutil.h>
#include <ctime>
unsigned int width, height;
int Gx[3][3] = { {-1, 0, 1},
                 {-2, 0, 2},
                 {-1, 0, 1} };
int Gy[3][3] = { {1, 2, 1},
                 {0, 0, 0},
                 {-1, -2, -1} };
int getPixel(unsigned char * org, int col, int row){
int sumX, sumY;
sumX = sumY = 0;
for (int i=-1; i<= 1; i++) {
for (int j=-1; j<=1; j++) {
int curPixel = org[(row + j) * width + (col + i)];
sumX += curPixel * Gx[i+1][j+1];
sumY += curPixel * Gy[i+1][j+1];
}
}
int sum = abs(sumY) + abs(sumX);
if (sum > 255) sum = 255;
if (sum < 0) sum = 0;
return sum;
}
void h_EdgeDetect(unsigned char * org, unsigned char * result){
int offset = 1 * width;
for (int row=1; row< height-2; row++) {
for (int col=1; col<width-2; col++) {
result[offset + col] = getPixel(org, col, row);
}
offset += width;
}
}
__global__ void d_EdgeDetect(unsigned char *org, unsigned char *result, int width, int height)
{
int col = blockIdx.x * blockDim.x + threadIdx.x;
int row = blockIdx.y * blockDim.y + threadIdx.y;
if (row < 2 || col < 2 || row >= height -3 || col >= width -3 )
return;
int Gx[3][3] = {-1, 0, 1,
-2, 0, 2,
-1, 0, 1};
int Gy[3][3] = {1, 2, 1,
0, 0, 0,
-1, -2, -1};
int sumX, sumY;
sumX = sumY = 0;
for (int i=-1; i<= 1; i++){
for (int j=-1; j<=1; j++){
int curPixel = org[(row + j) * width + (col + i)];
sumX += curPixel * Gx[i+1][j+1];
sumY += curPixel * Gy[i+1][j+1];
}
}
int sum = abs(sumY) + abs(sumX);
if (sum > 255) sum = 255;
if (sum < 0) sum = 0;
result[row * width + col] = sum;
}
int main(int argc, char** argv)
{
printf("Starting program\n");
/******************** setup work ***************************/
unsigned char * d_resultPixels;
unsigned char * h_resultPixels;
unsigned char * h_pixels = NULL;
unsigned char * d_pixels = NULL;
char * srcPath = "/Developer/GPU Computing/C/src/EdgeDetection/image/cartoon.pgm";
char * h_ResultPath = "/Developer/GPU Computing/C/src/EdgeDetection/output/h_cartoon.pgm";
char * d_ResultPath = "/Developer/GPU Computing/C/src/EdgeDetection/output/d_cartoon.pgm";
cutLoadPGMub(srcPath, &h_pixels, &width, &height);
int ImageSize = sizeof(unsigned char) * width * height;
h_resultPixels = (unsigned char *)malloc(ImageSize);
cudaMalloc((void**)&d_pixels, ImageSize);
cudaMalloc((void**)&d_resultPixels, ImageSize);
cudaMemcpy(d_pixels, h_pixels, ImageSize, cudaMemcpyHostToDevice);
/******************** END setup work *****************************/
/******************** Host processing ***************************/
clock_t starttime, endtime, difference;
printf("Starting host processing\n");
starttime = clock();
h_EdgeDetect(h_pixels, h_resultPixels);
endtime = clock();
printf("Completed host processing\n");
difference = (endtime - starttime);
double interval = difference / (double)CLOCKS_PER_SEC;
printf("CPU execution time = %f ms\n", interval * 1000);
cutSavePGMub(h_ResultPath, h_resultPixels, width, height);
/*************************************** END Host processing ***********************************/

/************************************** Device processing **************************************/
dim3 block(16,16);
dim3 grid (width/16, height/16);
unsigned int timer = 0;
cutCreateTimer(&timer);
printf("Invoking Kernel\n");
cutStartTimer(timer);
/* CUDA method */
d_EdgeDetect <<< grid, block >>>(d_pixels, d_resultPixels, width, height);
cudaThreadSynchronize();
cutStopTimer(timer);
printf("Completed Kernel\n");
printf("CUDA execution time = %f ms\n", cutGetTimerValue(timer));
cudaMemcpy(h_resultPixels, d_resultPixels, ImageSize,
cudaMemcpyDeviceToHost);
cutSavePGMub(d_ResultPath, h_resultPixels, width, height);
/*************************************** END Device processing *********************************/
printf("Press enter to exit ...\n");
getchar();
}
```
Chapter 4
Results
The results of both executions are shown in Tables 4.1 and 4.2. As the results show, the GPU achieves a significant speedup over the CPU for all of the images processed. Regardless of the type of algorithm run, the results are affirmative: processing on the GPU has a huge edge over processing on the CPU. The percent improvement grows as the image size increases. This aligns with the earlier claim that CUDA processing is most effective when many threads are being utilized simultaneously.
| Image Size | GPU Time (ms) | CPU Time (ms) | Percent Increase |
|---|---|---|---|
| 512 x 512 Lena | 0.67 | 16 | 2,288 |
| 1024 x 768 wallpaper2 | 0.84 | 62 | 7,280 |
| 3200 x 2400 cartoon | 2.92 | 688 | 23,461 |
Table 4.1: Results of the Gaussian Blur
| Image Size | GPU Time (ms) | CPU Time (ms) | Percent Increase |
|---|---|---|---|
| 512 x 512 Lena | 0.67 | 32 | 4,676 |
| 1024 x 768 wallpaper2 | 0.82 | 94 | 11,363 |
| 3200 x 2400 cartoon | 2.87 | 937 | 32,548 |
Table 4.2: Results of the Sobel Edge Detection
The results also show that the edge detection algorithm is in general slightly more computationally expensive than the Gaussian blur. While that difference shows up as more time needed by the sequential algorithm, the parallel algorithm is largely unaffected. This further confirms that the more computational power is required, the more CUDA is utilized to its full potential.
Chapter 5
Conclusion and Future Work
Graphics cards have widely been used to accelerate gaming and 3D graphical applications. High-level programmable interfaces now allow this technology to be used for general purpose computing. CUDA is the first of its kind from NVidia. It is fundamentally sound and easy to use. This thesis gives an introduction to the type of performance gains that can be achieved by switching over to the parallel programming model.
Image processing algorithms are a category of algorithms that work well in achieving the best benefits out of CUDA. In most such algorithms, a type of calculation is repeated over and over again in massive amounts. This is perfect for utilizing CUDA's massive number of threads. Most of these calculations can be processed independently of each other, making it ideal to spawn off threads to perform them simultaneously.
In Chapter 2, we give an overview of what GPGPU is and go into depth on the benefits of using CUDA. The chapter discusses CUDA's architecture, including its memory model, its thread hierarchy, and its programming model. We showed the type of algorithms that benefit the most from CUDA, and how to program in order to reap the maximum of CUDA's benefits.
In Chapter 3, we present examples to the reader of what a typical CUDA program looks like from beginning to end, with a complete breakdown of what each method call does. The experiment is done using two well known image processing algorithms: Gaussian blur and Sobel edge detection. The implementation contains both the sequential version and the parallel version. This allows the reader to compare and contrast the performance differences between the two executions.
Chapter 3 gives the reader an idea of the type of algorithms that are well fitted for CUDA. It is an example of how a sequential algorithm can be craftily broken down such that it can be run in parallel and achieve the same results, but faster. Creative techniques like these are required when programming in the parallel model.
Chapter 4 shows the results of the experiment. It provides several executions of the same algorithm against different images. It affirms the claim that the larger the data set, the greater the benefit of using CUDA. For one of the smaller test cases, the performance increase is roughly 23-fold; the gain grows to roughly 235-fold when we process an image about 29 times bigger.
This thesis gives an introduction to CUDA and its benefits, but the exploration does not stop here. A lot of future work can be done. Experiments can be done using different grid and block sizes. The results are likely to improve with smarter memory usage. A lot can still be explored beyond this thesis.
CUDA, though ready for commercial use, is still a very young product. *Fermi* is the next generation of the CUDA architecture. Earlier devices limit blocks to 512 threads, while Fermi raises the limit to 1024 threads per block (with up to 1536 threads resident per multiprocessor). Another advantage is that Fermi supports the execution of multiple kernels simultaneously, whereas earlier devices must execute kernels sequentially. As technology advances, there are sure to be products that are better and better.
Appendix A: Glossary
**Block** - A name for a container that represents a group of threads. Threads belong in a block, which then belongs in a grid. Blocks can be partitioned into several dimensions to make indexing the threads easier. Threads within the same block can communicate with each other.
**Central Processing Unit (CPU)** - A serial processor on a computer that is optimized for high performance on sequential operations.
**Compute Unified Device Architecture (CUDA)** - A parallel computing architecture developed by NVidia for massively parallel high-performance computing.
**Constant Memory** - Similar to global memory, except this is read-only for the device. It is optimized for faster parallel data access.
**CUDA C Compiler (CUDACC)** - This compiles the GPU file produced by the NVCC and creates CUDA object files.
**Device** - In the context of a CUDA program, the device is everything that is in the graphics card. This includes the GPU, the memory that is in the graphics card, etc.
**FERMI** - The next generation CUDA architecture that is faster and more powerful than its predecessor.
**General Purpose GPU (GPGPU)** - A type of computing that utilizes the computational power of the GPU for computations that are not necessarily graphics related. For example, using the GPU to solve a matrix.
**Global Memory** - Variables declared in the global memory space last for the entire duration of the application and can be accessed by any thread across any grid. Both the host and the device can read and write to this.
**Graphics Processing Unit (GPU)** - A stream processor on a graphics card specialized for compute-intensive, highly parallel computation.
**Grid** - A name for a container that represents all the threads of a single kernel execution. A grid contains a set of blocks, each of which contains a set of threads.
**Host** - In the context of a CUDA program, the host is everything that is not on the graphics card. This can be the CPU, the memory that is on the computer, etc.
**Kernel** - A function or method that is executed on the device.
**NVidia C Compiler (NVCC)** - A compiler that parses the source code (.cu) and creates two resulting files: one for processing on the GPU and one for processing on the CPU.
**Parallel Thread eXecution (PTX)** - A type of file that is produced by the CUDACC. These files are recognized by device drivers that are installed with NVidia graphics cards.
**Register Memory** - This type of memory is allocated at the thread level and is private to each individual thread.
**Shared Memory** - This type of memory is on the device, and the host has no access to it. It is allocated at the block level and can only be accessed by threads of that block.
**Single Instruction Multiple Data (SIMD)** - A programming paradigm in which a set of threads execute the same instructions against different data. The set of threads execute the same instructions in lock step.
**Single Instruction Multiple Thread (SIMT)** - A type of architecture that is used for the management of threads. When an instruction is issued, a SIMT unit selects a group of threads that can execute that instruction.
**Single Program Multiple Data (SPMD)** - The same as SIMD except the threads do not have to execute the same instructions in lock step. Threads are allowed to branch in the program and execute a different set of instructions.
**Special Function Units (SFU)** - The units in an SM that specialize in floating-point functions such as square root and transcendental functions.
**Streaming Multiprocessor (SM)** - This contains a group of SPs, two SFUs, shared memory, and cache.
**Streaming Processor (SP)** - This is where the actual computation happens. It contains its own MAD and MUL units.
**Streaming Processor Array (SPA)** - This refers to a group of streaming processors inside the GPU. This is where all the computation takes place.
**Texture/Processor Clusters (TPC)** - This is a member of the SPA. Each TPC contains a geometry controller, a SM Controller, a texture unit and 2 SMs.
**Warp** - A construct developed for thread scheduling within the SM. A warp contains a group of threads. Thread executions are usually done in a warp group.
Bibliography
Vita
Graduate College
University of Nevada, Las Vegas
Jia Tse
Degrees:
Master of Science in Computer Science 2012
University of Nevada Las Vegas
Thesis Title: Image Processing with CUDA
Thesis Examination Committee:
Chairperson, Dr. Ajoy K. Datta, Ph.D.
Committee Member, Dr. Lawrence L. Larmore, Ph.D.
Committee Member, Dr. Yoohwan Kim, Ph.D.
Graduate Faculty Representative, Dr. Venkatesan Muthukumar, Ph.D.
How a Geographically Distributed Software Team Managed to Negotiate Successfully using Chat Technology
ABSTRACT
Negotiation is best accomplished in collocated settings, and negotiation in geographically distributed settings is prone to failure, with a risk of conflicts. Investigating distributed software development, we were surprised to discover that a software development team located in different parts of Brazil was able to negotiate successfully and reach an agreement to change from ticket-oriented processes towards release-oriented processes for bug-fixing activities using only chat technology. In this paper, we explore how chat technology allowed the distributed software team (including both vendor and client team members) to successfully negotiate and reach agreement about adopting and implementing a new collaborative workflow in a governmental IT project. Our research method is based upon an ethnographically informed empirical study of software development in a Brazilian software company. The data collected show that the chat technology provided a platform for the team to engage informally in important discussions across locations. The chat technology allowed participants to navigate both within and across diverse subgroups (collocated client-developers; distributed client-developers; and distributed developers-developers), which supported successful subgroup dynamics and avoided the risk of conflicts emerging from faultlines.
INTRODUCTION
Software projects are often done in distributed settings, where clients and the software development team are geographically distributed. Despite the geographical distance, participants often work in closely-coupled work arrangements (ESBENSEN; BJØRN, 2014; CRAMTON, 2001; JENSEN, 2014), structured by different types of agile methodologies (ESBENSEN; BJØRN, 2014; ŠMITE; MOE; ÅGERFALK, 2010). Such projects depend upon participants' ability to navigate, coordinate, and communicate using the diverse collaborative technologies (BJØRN et al. 2014; BJØRN; HERTZUM, 2006; BODEN et al., 2014; MARK et al., 2002) in which the majority of the interaction is accomplished, i.e., chat groups, online forums, video conferences, document repositories, and emails (CHRISTENSEN; BJØRN, 2014; GUO et al., 2009; SEGENREICH, 2008; DABBISH et al., 2005; HERBSLEB et al., 2002). While the interactions in software projects are multiple and diverse, in this paper we are particularly interested in the negotiation activities within a distributed software team.
Negotiation is a critical activity for software developers, where participants discuss and reach agreement about how and why certain details and structures are to be organized and implemented in certain ways; it continues to be an activity throughout the whole project lifecycle (CHRISTENSEN; BJØRN, 2014). In geographically distributed settings, negotiation activities are facilitated and mediated by cooperative technologies (JOWETT, 2015; LI; ROSSON, 2014). However, technology-based negotiation activities have been identified as being prone to failure in geographically distributed settings. In this sense, researchers have pointed to working across time zones, cultures, and professional languages as some of the reasons for the challenges (MARK et al., 2002; OLSON; OLSON, 2000; VALLEY; MOAG; BAZERMAN, 1998). Given these insights from prior research, we were surprised to find in our empirical case, where we studied a Brazilian software development team consisting of team members from both the vendor and the client, that the team managed to negotiate successfully. Moreover, that team implemented a new collaborative work structure using primarily text-based group chat technology.
Chat technology is of core interest to the Computer-Supported Cooperative Work (CSCW) community, and the potential for using such technologies (e.g., Skype or Slack) in organizations of high complexity has been identified as an important research agenda (RIEMER; FRÖSSLER; KLEIN, 2007). Chat technology provides low-cost accessibility to team members across geography and time (MORAES; CABELLO, 2017; HSIUNG, 2000; ANDERSON; KANUKA, 1997). Moreover, chat technology can potentially facilitate closely-coupled interaction and communication within and across organizations (FAYARD; DESANCTIS, 2005; CLÉMENT; BAKER; MACINTYRE, 2003). By supporting 'lightweight' communication, chat technology provides alternative ways for participants to discover co-workers' availability, which can potentially trigger opportunistic communication, support some degree of team context, and facilitate cooperative inquiry across the entire team (HERBSLEB et al., 2002). Successful use of chat technology depends on participants' abilities to establish and develop norms, context, a common language, and problem definitions across all members (MALHOTRA et al., 2001). However, negotiation activities –
especially cross-organizational negotiation, where financial and political considerations exist – make the opportunity to develop a common language and shared context difficult to realize, and thus developing new technologies supporting negotiation across geography continues to be a challenge (BJØRN; HERTZUM, 2005; OLSON; OLSON, 2000). Therefore, this paper aims to explore how chat technology allowed the distributed software team, comprising both vendor and client team members, to successfully negotiate and reach agreement about adopting and implementing a new collaborative workflow in the governmental IT project. In this sense, the research question we explore in this paper is: How did the geographically distributed software development team successfully negotiate and establish a new workflow structure, changing their work arrangement, using primarily group chat technology? To answer that, we performed an ethnographically informed empirical study of software development in a Brazilian software company, and we collected data by observing chat groups containing fifty-five negotiation cases. Based upon our empirical findings, we find that the negotiation succeeded not just because the team developed norms and a common language, but because the group chat technology facilitated grounding activities (CLARK; BRENNAN, 1991) both within and across the diverse sets of subgroups involved in the negotiation, namely client-developer at the same location; client-developer across locations; and developer-developer across locations. When geographically distributed teams are composed of collocated subgroups, there is a tendency for such subgroups to coalesce into smaller units, especially if demographic attributes align with the collocated subgroups, and such setups risk producing faultlines (CRAMTON; HINDS, 2004). We found that the software developers overcame the risk of faultlines in their negotiations because the affordances of chat technology allowed them to navigate across and within the diverse subgroups, breaking down the barrier of demographic attributes and organizational belonging. Through cultural language exchange (ROBINSON, 1991), the participants managed to create and navigate permanent records of decisions manifested through shared digital objects in the group chat technology. Thus, the group chat technology supported synchronous interaction, facilitating a dynamic negotiation context comprising both informal and formal language exchange simultaneously.
The remainder of the paper is organized into six sections. Following this introduction, we present the theoretical background of this study, then our research method, followed by the results of our analysis. Finally, we discuss our findings and provide our conclusion.
CHALLENGES FOR COOPERATIVE NEGOTIATION ACROSS GEOGRAPHICAL DISTANCE
Collaboration within geographically distributed teams has been a core concern for CSCW research since its inception, and there is a long canon of research papers which have explored the challenge of distributed collaboration for the design of cooperative technologies from all kinds of perspectives and in different domains (HINDS; RETELNY; CRAMTON, 2015; BODEN et al., 2014; OLSON; OLSON, 2000). One core domain for the research on distributed teams is software development (BJØRN et al., 2014), since geographically distributed software development has become the norm rather than the exception for how the work is organized when
we design IT systems (HERBSLEB, 2007). Core challenges for distributed software development have been identified as relating to temporal constraints (HERBSLEB; PAULISH; BASS, 2005), to coordination (CHRISTENSEN; BJØRN, 2014), and to commitment and trust (SØDERBERG; KRISHNA; BJØRN, 2013). While technological developments have improved the conditions for distributed software development, one core challenge remains: creating common ground related to the project at hand as well as to how to collaborate (BJØRN et al., 2014).
Common ground is established through grounding in conversations, where participants provide evidence and references supporting their argumentation through aspects provided by the face-to-face shared context, characterized by co-presence, visibility, audibility, and simultaneity (CLARK; BRENNAN, 1991). This means that whether it is possible to create common ground in distributed settings depends tremendously upon the affordances of the technology (e.g., chat, video conference, document repositories) supporting the interaction and the coordination of work (BJØRN; NGWENYAMA, 2009; HINDS; WEISBAND, 2003; ARMSTRONG; COLE, 2002; CRAMTON, 2001). Thus, creating common ground concerning the project and the process requires participants to have a fundamental basis, in this case a shared context. With that shared context, participants can engage in the negotiations and discussions required to make important decisions, facilitated by informal language constructs (ROBINSON; KOVALAINEN; AURAMÄKI, 2000). Finding ways to establish a shared context by which negotiation can take place, supporting distributed software development projects using technology, is not trivial.
Shared context and risk of faultlines
When two or more people interact collocated, they automatically share a physical context providing rich cues such as facial expressions, which supports the conversation (MATTHIESEN; BJØRN, 2016; RANGANATHAN et al., 2002). A shared context can emerge when team members share a common professional language and vocabulary relevant to their work processes, work cultures, and use of digital tools, potentially reducing the risk of conflicts (HINDS; MORTENSEN, 2005). However, people involved in geographically distributed projects cannot automatically create a shared context (SCHILIT; HILBERT; TREVOR, 2002); they potentially miss important contextual information, which increases the difficulty of identifying and solving problems, and in turn increases the likelihood of emerging conflicts (HINDS; MORTENSEN, 2005). Frequent interaction has been pointed to as essential for negotiation and conflict resolution (CHRISTENSEN; BJØRN, 2014; HINDS; MORTENSEN, 2005; HINDS; BAILEY, 2003). However, a high volume of messages in communication tools risks eroding shared context by depersonalizing the interaction (SPROULL; KIESLER, 1992). While technology-mediated text-based interaction generates less social presence and lacks the social cues of face-to-face conversation (POSTMES; SPEARS; LEA, 1998), the more fundamental challenge is the lack of shared context, creating contextual differences. Such a shared context is hard to articulate and identify during text-based chat and consequently causes misunderstanding among the participants (HINDS; WEISBAND, 2003). This suggests that virtual teams are likely to experience more conflict in negotiating and coordinating tasks than a
collocated team (HINDS; BAILEY, 2000). Indeed, increasing social presence by establishing a shared context is relevant if we are to support technology-mediated interaction between subgroups of collocated and distributed teams. Such context-aware technology is a class of communication tools which addresses people's knowledge context to leverage communicative understanding (SCHILIT; HILBERT; TREVOR, 2002).
While the majority of the literature on shared context and negotiation focuses on teams where all participants are geographically distributed, the situation in distributed software development is often that not every individual is geographically distributed. Instead, distributed software development is often based on distributed subgroups, where several developers are collocated while the subgroups are geographically distributed from each other. When a project has geographically distributed subgroups, there is a risk of faultlines. Faultlines are conceptual dividing lines which split a group into at least two relatively homogeneous subgroups based on the alignment of group members' demographic and individual attributes, and which impact group processes and outcomes, both performance and emotional experience (THATCHER; PATEL, 2012; BEZRUKOVA et al., 2009; SHEN; GALLIVAN; TANG, 2008). Thus, subgroup formation influences the performance of the whole group above and beyond what can be predicted by diversity alone (THATCHER; PATEL, 2012). For instance, a faultline may occur based on education level or work experience, starting entirely different dynamics in a group; i.e., group members form relatively homogeneous subgroups based on informational characteristics of individuals that are directly job-related, in which case the faultline category is information-based (THATCHER; PATEL, 2012; BEZRUKOVA et al., 2009). When team members experience problematic subgroup dynamics, it is difficult to overcome the geographical distance (CRAMTON, 2001). To create and establish task cohesion, which can counter the risk of faultlines, geographically distributed teams must develop shared norms, roles, and procedures through which they can experience accurate mutual comprehension (i.e., a shared context). Moreover, such teams need shared expectations regarding the common goal, how to organize interdependency and mutual trust, and the frequency of communication among members (LOCKWOOD, 2017; ARMSTRONG; COLE, 2002). Therefore, successful subgroup dynamics must reduce the risk of faultlines generated by time, national/regional culture, and geographical distance, in order to integrate teams from different locations and provide the means for sound negotiations.
Chat technology supporting negotiations
In collocated and distributed projects, communication occurs through synchronous and asynchronous means. Asynchronous communication is considered appropriate for activities of low complexity, while synchronous communication is most applicable when complex activities are involved (RIOPPELLE et al., 2003). However, in distributed teams, synchronous interactions are frequently embedded in a broader context of asynchronous interactions and in how informal activities are carried out by the participants (OLSON; OLSON, 2000). Chat technology refers to the type of technology which allows participants to interact asynchronously through text-based interaction, such as Messenger, Skype, WhatsApp, and Slack. We are currently witnessing how chat technology is increasingly being introduced into the workplace. The usage of chat technology is thus entering workplaces and becoming part of shaping the forms of communication which take place in organizations. With the introduction of chat technology, we also see a decrease in the use of email, phone calls, and other means of communication (GREIF; MILLEN, 2003). In distributed software development, software developers have used chat technology for bug fixing, reducing the effort of articulation work (TENÓRIO; PINTO; BJØRN, 2018), and to coordinate their activities (BODEN et al., 2014). Chat technology offers software developers new advantages for their communication, since messages can be modified, reviewed, and the complete conversation shared over time (VAN DER ZWAARD; BANNINK, 2014). Although miscommunications cannot easily be solved when using textual interaction (FORD et al., 2017; TERUI; HISHIYAMA, 2014), the ability of chat to send short messages using informal language offers a means for agile communication, and messages can be saved and, occasionally, retrieved and forwarded to other groups or individuals (GREIF; MILLEN, 2003). Also, the permanent nature of chat messages can form a common discussion point for participants (ROBINSON, 1991). However, formal language is not always feasible, depending on context; i.e., a rapidly changing project environment requires informal communication supported by informal language use (DE VRIES; LAGO, 2010; ÅGERFALK; FITZGERALD; HOLMSTRÖM, 2005; CLERC; HERBSLEB et al., 2000). Nonetheless, both formal and informal dialogue can obstruct conversations if messages are shared outside the intended audience (ROBINSON; KOVALAINEN; AURAMÄKI, 2000). Professionals who share similar perspectives through professional language and knowledge find it easier to develop the common language and norms that can form a basis of communication within the distributed team (OAKLEY, 1999). This allows for healthy interactions between distributed team members, facilitated by informal language usage (HINDS; MORTENSEN, 2005). Therefore, previous research points out how chat technology can facilitate communication in the workplace. However, our interest here is focused on how chat technology supports negotiations in geographically distributed software development teams of vendors and clients.
**RESEARCH METHOD**
Our research is based upon an ethnographically informed (RANDALL; HARPER; ROUNCEFIELD, 2007) empirical study of software development in a Brazilian software company. We studied the work involved in organizing the collaboration, focusing on the use of technological artefacts (BLOMBERG; KARASTI, 2013). In particular, we were interested in how the software development team used chat technology to support collaboration across geographical sites of design (BJØRN; BOULUS-RØDJE, 2015). In this work, we followed a software team that worked on a governmental IT project: E-Account. E-Account is an information system designed to support a Brazilian municipality in organizing, monitoring, and controlling public accounts. Our interest is not the content of the E-Account project, but rather the way the software development team collaborated. In total, twenty-three developers were involved in the E-Account project; we focus on how these software developers, who represented both the vendor and the client, negotiated using chat technology. We refer to the vendor company as BrazilSoft.
Empirical settings
The E-Account project is a Brazilian governmental IT project in which the teams are geographically dispersed with a temporal distance of +3 hours from the vendor site to the client site. The project started in 2011 and is part of a larger information system, web-Gov, which went online by the end of 2012. The web-Gov information system is designed to support the public administration of a capital city in the north of Brazil, which has around 420,000 inhabitants. Currently, the web-Gov has been running live for six years and has approximately 1,200 users, all of whom are municipality employees. However, the system is continuously being expanded, reconfigured, and re-designed. Thus, the web-Gov IT project can be seen as an ongoing infrastructure activity, which shapes how the municipality functions based upon insights from the users. Furthermore, new functionalities will be made available so that the system is not only used by municipality employees but also serves 190,000 citizens in their interactions with the government.
E-account project
The E-Account is an example of a mixed operation, combining offshoring and outsourcing. While the company BrazilSoft is located in a city in the south of Brazil, the client is 3,573 kilometers away in the north of the country. BrazilSoft has an offshoring operation at the client site, with a team composed of five employees: one operations manager, one project manager, and three developers. Furthermore, there are two BrazilSoft partner firms in an outsourcing operation supporting the E-Account. They are responsible for maintaining the client infrastructure and developing the web-Gov web services. The BrazilSoft local team has twenty-five employees, among them an operations manager, project managers, developers, and testers. Thus, BrazilSoft is considered a medium-sized software development company, and the project connects more than fifty people. The communication among BrazilSoft's local team, the distributed team, and the client is primarily organized in distributed settings supported by chat technology: in particular, eleven Skype chat groups and five WhatsApp groups. Each chat group has a concrete purpose and is related to a specific topic, such as technical support, change requests, administrative issues, contract terms, and work coordination. The client participates in some chat groups, while others are exclusive to BrazilSoft employees.
Data collection and analysis
Data were collected through interviews and observations of the interaction in the chat groups. We conducted five face-to-face interviews with vendor stakeholders (e.g., directors, project managers, and developers) during May 2017. All interviews were in Portuguese and recorded with the consent of the interviewees. During the interviews, the use of chat group technology kept appearing as critical for the negotiation practices within the team, and we decided to explore this further. In total, eleven Skype chat groups had been created by BrazilSoft, each aimed at interaction with clients. We obtained permission to participate as an 'observer' in four Skype chat groups for four months. Thus, we were able to collect the complete interaction in the four chat groups. Our data analysis was done in two steps. First, we listened to, transcribed, and coded the interviews using 'Express Scribe'. Second, we collected the chat scripts of the observed chat groups, which were then imported into Express Scribe for analysis and coding. Both interviews and chat data were coded to identify themes in the conversation, aiming to identify interesting interaction aspects. Through this process, we began to notice how the users were applying the chat technology to support their negotiations. Thus, we decided to focus on the instances in the data where the client and the vendor were negotiating different aspects of their work, such as tickets, releases, bugs, validations, and workflows. In total, we had eighteen pages of chat group transcriptions over ninety days referring to the two most active chat groups (the Ticket Chat Group and the BrazilSoft Private Chat Group). Table 1 gives an overview of the interaction in the two chat groups.
We observed fifty-five cases in the chat groups where participants negotiated various aspects, and in forty-three of these instances they succeeded in reaching an agreement (see Table 1). All these negotiations were done using only chat technology; no other types of technology, such as email or phone, were used. Thus, the interesting aspect from our perspective is that the software developers were able to negotiate successfully using only text-based chat, i.e., no video, email, or other types of technology. Each of the negotiations demonstrates similar patterns; therefore, in the next section, we present our findings focusing on one example to demonstrate our empirical observation.
Table 1 - Negotiations observed in the chat groups
<table>
<thead>
<tr>
<th>Items</th>
<th>Ticket Chat Group</th>
<th>Private Chat Group</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Conversations</td>
<td>504</td>
<td>678</td>
<td>1182</td>
</tr>
<tr>
<td>Observed Negotiations</td>
<td>20</td>
<td>35</td>
<td>55</td>
</tr>
<tr>
<td>Successful Negotiations</td>
<td>14</td>
<td>29</td>
<td>43</td>
</tr>
<tr>
<td>Participants</td>
<td>11</td>
<td>9</td>
<td>20</td>
</tr>
</tbody>
</table>
Source: The authors
RESULTS
The web-Gov information system has been in use for six years, and the mixed vendor/client software team in the E-Account project was created to continually identify and collect new user requirements and bugs in the system, which were to be analyzed, potentially implemented, and finally become additional functionality in the production environment. The organization of the work in E-Account is 'ticket-oriented', which means that the coordination of activities is structured by tickets. This entails that all new tasks are organized into tickets, which are then prioritized according to the client's urgencies. The prioritized ticket list is thus the main coordination tool for the software developers. In order to organize the work, all new user requirements are entered into the software management repository called Redmine. Redmine is a web-based open-source software management application designed to coordinate requirements. Thus, each requirement is created as a 'ticket' in the Redmine repository. The client (the municipality) is responsible for accessing and creating tickets in Redmine, including describing each requirement and defining its priority. BrazilSoft's developers then access Redmine to identify requirements, assigning themselves as responsible for particular tickets. The BrazilSoft project manager also accesses Redmine on a regular basis to monitor the status of all tickets. When a ticket is done, the developer records this in Redmine, and the client validates the ticket and, if approved, informs BrazilSoft's developers to include the ticket in the web-Gov production environment. However, over the last years, the ticket quantity has increased considerably, and BrazilSoft has experienced several client claims regarding delays in including validated tickets in the production environment.
Despite the ticket control embedded in Redmine, the ticket-oriented process was continually failing, since the client frequently forgot to validate tickets or the vendor forgot to include them in the production environment. These events increased customer complaints regarding unavailable features in the web-Gov system and increased the tension in the vendor-client relationship. Attempting to avoid client claims, 18 months before our research, the vendor introduced chat group technology to monitor the ticket-oriented process. The vendor's intention was to streamline the coordination of the tickets. Concretely, the vendor notifies the client of the tickets which require validation before including them in the production environment. The 'ticket chat group' was successful in the first three months; however, issues began to arise. Communication breakdowns took the form of the client forgetting to report in the 'ticket chat group' which tickets were validated and thus ready for inclusion in the web-Gov production environment. Delays became a large problem, and due to the contractual structure between the vendor and client, delayed tickets meant that the vendor had to pay fines to the client. The increasing number of fines in the project became a stress point in the client-vendor relationship and generated conflicts within the cross-organizational team. The conflict was openly visible to everybody, since it took place in the 'ticket chat group', exposing the problems to all participants. We observed forty-seven messages exchanged in the chat group concerning the issues of delayed validations and fines. Below we zoom in on the core exchanges. The following quotation exemplifies the issues between the client and John, the project manager at the vendor site.
Client: “How come that ticket [ID-number] isn’t yet in the production environment.”
John (BrazilSoft site): “Because the ticket has not been validated by you yet.”
Client: “Did you ask me [to validate the ticket] through notification features in the ticket chat group?”
John (BrazilSoft site): “No, I forgot, sorry. But you could look at Redmine. See the ticket [a picture was posted in the chat group]. What you see here is that there is a red alert [see the screen shot]. This red alert means, we are waiting for you to validate the ticket before we can proceed.”
Client: “This practice is not what we agreed on. We decided that our work routine for validation of ticket by us but go through the ticket chat group. You and the others MUST notify me in the request
A few days later, after the discussion above, Peter, a project manager at the client site, suggested at an internal face-to-face meeting with the client replacing the current ticket-oriented process with a release-oriented process. Peter argued that adopting the release-oriented process would 'pack' a set of tickets into one release and facilitate their validation. Consequently, the messages exchanged in the 'ticket chat group' regarding ticket validation would also be reduced, since a release contains a set of tickets rather than individual tickets. Potentially, conflicts concerning tickets would be avoided. However, such a change would require substantial changes to the way the work was organized, both contractually and in terms of process. Thus, a longer negotiation concerning the possibility of changing the process was initiated. This negotiation took place in two chat groups, and it all began in the 'ticket chat group'.
Peter (client site): “Hi guys. Yesterday, I had a meeting with the infrastructure team and [client name], where we discussed our workflow. Thus, who will decide what to include in the production environment. The decision is ultimately the system manager [client name]. However, I suggested to update our current work routine replacing ticket-oriented process by a release-oriented process. They liked the idea however, we need to discuss this idea, and how to proceed.”. Ticket chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
What happens in the above quotation is that Peter, one of the core software developers, who is located at the client site, explains how he has been discussing a potential new way of organizing the workflow in the team. More importantly, he also suggests concrete changes and supports them by noting that the client has approved of the idea. Consequently, a team member from the client site also writes a message in the chat group supporting Peter's idea, further demonstrating that the client supports it.
Client: “Hello, everyone. As [name of the project manager at client site] wrote, we are excited to adopt the release-oriented process. As far as I know, this will make our validation process much easier. We are looking forward to adopting it.”. Ticket chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
So, we have a situation where people collocated in the project have had important discussions about the workflow. Moreover, it is important to notice that while both participants above are collocated at the client's geographical location, they represent two different parties, namely the client and the vendor. Meanwhile, at the vendor's geographical location, the idea of the change was not fully embraced. To have such a discussion internally within the vendor team before including the client, the project manager created a new discussion forum in the 'BrazilSoft private chat group', in which John, the software developer in BrazilSoft working remotely from the client, resisted the idea of changing the workflow towards a release-oriented process.
John (BrazilSoft site): “I’m tough about this situation [replacing ticket-oriented by release-oriented] because we risk increasing our delay
since the client then want to include additional new functionalities for each production release. Currently, the client already delayed their validation of new functionalities, so if we adopt this new process, the delay could increase because they will wait to include a new set of functionalities together in the production environment. Maybe it does not avoid the complaints about the existing delay.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
What is interesting here is that the discussion moved from the 'ticket chat group' to the 'BrazilSoft private chat group'. The private chat forum allowed the vendor to negotiate internally within BrazilSoft, excluding the client while still including all BrazilSoft employees, including those located at the client site. The negotiation continues, and Peter attempts to convince John that the move towards a release-oriented process is appropriate and supports the software developers in BrazilSoft.
Peter (client site): “I agree with you, but if we adopt release-oriented process, everything that is done within the release goes to the production together in a short time. Moreover, we always wanted to adopt release-oriented process. It is great chance for us.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The synchronous interaction continues, and John resists Peter's argument to adopt the release-oriented process. He refers to the timing of the change and how it might drastically change their current workflow, causing problems. John explains that such a change is not trivial, but instead involves complex changes to their existing workflow review processes.
John (BrazilSoft site): “I agree, but I think we shouldn’t do this now. I think that is a bad idea because it changes drastically our current work routine which demands a review of our work flow.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Following this interaction, multiple different opinions and concerns are presented in the chat group. Evidently, the interaction leads to a conflict within the vendor team between the project managers John and Peter (both working for BrazilSoft, but geographically located at different sites). The main issue is the impact the potential change will have on their workflow review process. Trying to resolve the issue, BrazilSoft's operation manager enters the negotiation.
Operation Manager (BrazilSoft site): “Hey guys. Currently, they do this! I think it doesn’t have a significant impact on our current work flow. Just few adjustments.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The operation manager tried to make the issue less controversial. Moreover, another vendor software developer also enters the discussion, supporting the operation manager and arguing for making the change to the release process. In this negotiation, it is important that the chat technology allows people to enter the negotiation over time; thus, the 'BrazilSoft private chat group' provides a shared context supporting discussion and negotiation across the geographically dispersed developers. We now have a case where people at both geographical sites agree with and support the change. However, it is important to notice that John (who still resists the change) is a core employee and his opinion matters, even though other developers approve of the change, as shown below.
Developer (BrazilSoft site): “They’ll continue doing what they always do. I don’t think that is a problem to adopt release-oriented now. It’ll facilitate our work reducing the current validation problems.”. BrazilSoft private group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Furthermore, the operation manager also decides to modify his first opinion to support John, saying that once they have made this change, it will be impossible to return to the previous workflow. Thus, they should be entirely sure that replacing the ticket-oriented process with the release-oriented workflow is a good idea.
Operation Manager (BrazilSoft site): “What they need to understand is that once adopted release-oriented there is no how to get back. I mean, everything in it must be validated as release. [...] It is because there is no way to separate the codes after being integrated. That would improve our work routine.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Peter, who started the whole discussion, then copies and pastes the message from the client which was originally posted in the other Skype group, namely the 'ticket chat group'. He follows the pasted message by arguing that the ticket-oriented workflow process is currently not working. The issue is that the client frequently forgets which tickets must be validated, thus loses control over the process, and everybody gets delayed.
Peter (client site): “I reinforce that a ticket-oriented is not good for us because them [client] is not validating each ticket due the high-ticket quantity. Thus, they are forgetting to validate each ticket due it is hard to control. This is the reason why the release-oriented process can figure it out. I’d also like to highlight that at the client meeting, yesterday, everyone [client names] agreed with this change commenting that it can be good for all of us. In addition, in our ticket chat group [client name] wrote: ‘[message pasted from ticket chat group]’. Then, we shouldn’t lose this opportunity to change and improve our process.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The local project manager then agrees with his colleagues to adopt release-oriented delivery. Nonetheless, he suggests that a workflow process be designed, presented, and approved by all, aiming to make the new process clear to the client.
John (BrazilSoft site): “Okay, I agree only if we design a workflow formalizing this process [release-oriented. And they [client] need to approve the workflow proposed by us. The workflow will be our guarantee of the agreement. I can design and send the workflow to them.”
Operation Manager (BrazilSoft site): “I agree.”
Peter (client site): “OK. [emoticon with smile]”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The project manager at the client site sent a 'like' sign in the group, by which he agreed with the idea. The next day, the negotiation of the release-oriented versus ticket-oriented work structure moved from the private chat group to the 'public' ticket chat group, in which the project manager at the vendor site sent a message accepting the change.
John (BrazilSoft site): “Ok [client] we agree, and we’ll design the release-oriented process in a workflow to be approved for you all. I’ll send the workflow soon.”
Client: “Good news! I’m looking forward to seeing the workflow.”
Peter (client site): “(Y) [Thumb up emoticon]”. Ticket chat group, via Skype, Jun 23rd, 2017 (translated from Portuguese)
On July 14th, 2017, John shared a document describing the first version of the workflow in BrazilSoft's private chat group and invited participants to validate it. The operation manager and the administrative manager (who also participates in the group) suggested a few adjustments. John sent a second version two hours later, which was approved by all participants in BrazilSoft's private chat group. Afterwards, the local project manager sent the final version of the workflow to the inclusive 'ticket chat group', where the client approved it a few hours later.
Analyzing the negotiation process in the chat groups, we observed that the discussion moved dynamically between the subgroups, from the inclusive 'ticket chat group' to the restricted 'BrazilSoft private chat group'. While the negotiation might on the surface seem to be a discussion about the work process, it was, in fact, also a demonstration of a power struggle between the two geographically distributed project managers. Since Peter works at the client site on a daily basis, he took the liberty of suggesting a workflow change without consulting John, who works at the vendor site. When John first learned that Peter had made a proposal to the client for a drastic change to the client-vendor relationship without consulting him, John became critical, and a potential conflict began to arise, which was initiated in the 'public' chat group but moved into the vendor's private chat group. Moreover, even though chat technology is fundamentally asynchronous, the messages in the above examples were exchanged almost as a synchronous interaction. This allowed Peter and John to negotiate the workflow changes promptly, with the participation of other colleagues who gave their opinions voluntarily. What made the negotiation successful was that the chat technology enabled the participants to exchange both formal and informal language, so that they could constantly move between levels of negotiation. In this way, the double-language level (i.e., informal and formal language) allowed the participants to develop a shared context which supported multiple people in navigating across subgroups and language levels, utilizing the permanent record created by the chat technology.
DISCUSSION
From the software vendor's perspective, the issue about the workflow change produced a delicate situation. Concretely, Peter, the project manager at the client site, had initiated an unauthorized negotiation with the client without first checking the vendor's opinion. By initiating the negotiation with the client, Peter also shaped the client's expectations about the vendor's interest in moving towards release-oriented processes. This situation meant that it was important for Peter to convince John that the release-oriented process was the way to go, since if he failed, he would have to face the client and explain why the process change was not viable, at the risk of losing face.
Luckily for Peter, the software development team managed to successfully negotiate and solve their challenges concerning how to organize the collaborative process, despite being geographically distributed and interacting through the online chat groups. Prior research has pointed out how negotiations and miscommunications cannot easily be resolved using primarily textual interaction (FORD et al., 2017; TERUI; HISHIYAMA, 2014; BJØRN; HERTZUM, 2006; VALLEY; MOAG; BAZERMAN, 1998), due to the lack of implicit cues and spatial references which support the creation of a shared context (HINDS; BAILEY, 2000; SPROULL; KIESLER, 1992). When people are collocated, they are able to use gestures and facial expressions to indicate, through feedback loops, how they are interpreting the situation, thereby supporting negotiations. In this sense, the question becomes: what made the negotiation a success despite the lack of feedback and contextual information? How did the chat group technology allow the distributed software developers to reach an agreement? Our data extend prior CSCW research on negotiation protocols for work (ESBENSEN; BJØRN, 2014) and on the use of chat technology in organizations (RIEMER; FRÖSSLER; KLEIN, 2007) in several ways.
Firstly, our data show that the textual and permanent nature of the chat group technology was crucial for supporting the negotiation between the vendor and the client. Prior research on chat technology (JOWETT, 2015; LI; ROSSON, 2014; HSIUNG, 2000; IM; CHEE, 2006) also supports this finding, pointing out that keeping the conversation history is an essential feature of chat technology. By saving the complete conversation history, it is possible for users to access and analyze prior conversations (VAN DER ZWAARD; BANNINK, 2014), supporting reflective behavior and the potential re-submission of past interactions in new conversations. While participants might choose to exclude or delete past messages in certain chat interactions, such actions will be registered in the conversation history and made visible to all participants in the chat group. Analyzing our data, we observed how the vendor's private chat group made use of the permanent records by copying and pasting previous client messages from the other chat forum to reinforce argumentation. The permanent record was not only used as a way to review past interaction, but also to document past behavior, facilitating a shared context supporting the negotiations, as in pointing out explicitly what the object of concern entails. By pasting in quotations from earlier, the participants were able to 'gesture' and 'point' towards the area of concern, thus supporting grounding activities (CLARK; BRENNAN, 1991; SEGENREICH, 2008) in the conversation.
Secondly, we found that the synchronous interaction embedded in the chat technology supported the negotiation. The participants pointed out that navigating substantial email conversations is often problematic, and that it is difficult to fully comprehend and follow the different lines of interaction. Furthermore, prior work has demonstrated how email technology lacks feedback from the receiver to the sender, increasing the risk of misunderstandings (BJØRN; NGWENYAMA, 2009). For instance, it is not possible to know whether one's emails have been seen by others, or whether they are actually doing something about them. In chat technology, you can see whether others have seen the messages and identify who is available; even more importantly, participants can monitor the interaction of others without interfering directly (TENÓRIO; PINTO; BJØRN, 2018). Thus, the chat technology's permanent record, informal language, and support for reviewing and monitoring the interaction of others in a 'synchronous' way facilitated the successful negotiation.
Thirdly, our data show that the chat technology made it possible for the participants to interact informally, compared to their otherwise formal textual interactions over email. While the permanent nature of email requires participants to interact using formal language to ensure accurate interpretation, the permanent features of the chat technology worked very differently. In the chat technology, participants were able to interact informally, developing a cultural language (i.e., a double-language level) of interaction and interpretation (ROBINSON, 1991), in which 'items' of concern were transformed from formal interpretation to a common understanding (OAKLEY, 1999; ROBINSON, 1991). This was evident in the situations where the participants did not spend any time or effort on using formal contextual language in their messages. Instead, participants jumped right into the issues of concern. While formal communication (e.g., email) is driven by a highly specific context (LOCKWOOD, 2017), chat interaction facilitates informal interaction. Thus, chat technology supported the participants in grounding activities in the negotiation. During our interviews, participants mentioned several times that they perceived the chat technology to be fast, which was related to the informal language supporting 'direct talk' (HINDS; MORTENSEN, 2005; ROBINSON, 1991). In addition, the participants considered it comfortable to use the chat technology, since it allowed them 'to query one's entire team at once' (HERBSLEB et al., 2002).
Finally, we found that chat technology helps reduce the risk of subgroup dynamics causing faultlines (CRAMTON; HINDS, 2004). When teams are composed of geographically distributed subgroups where demographic attributes align, there is a risk of creating faultlines that complicate collaboration. This risk is further strengthened in cases where other types of distinct features confirm the differences across sites, such as nationality or seniority (MATTHIESEN; BJØRN, 2016). Chat technology made it possible for participants to divide their interaction into parallel groups, each of which created and shaped subgroups in different ways, both across and within geographical locations. In our case, the participants divided their interaction into two main chat groups: the inclusive 'ticket chat group' and the exclusive 'BrazilSoft private chat group'. By having these pre-defined forums, with pre-defined participants and purposes, users did not have to consider whom to send information to each time they sent a message. They did not risk forgetting to add others or including the wrong audiences for their messages. Instead, the pre-determined nature of participation made it possible for participants to utilize the permanent record, the informal language, and the reviewability and navigation of the conversations in a fast and informal way, making the negotiation similar to what it would have been had the participants been collocated. In this way, the chat technology allowed the participants to navigate and organize subgroups while supporting collaboration across subgroups, thus reducing the risk of faultlines. Table 2 summarizes the findings which supported a successful negotiation within a Brazilian distributed software development team. These findings answer our research question: chat technology supports negotiation through the textual and permanent nature of the conversation, the embedded synchronous interaction, the informal interaction among the participants, and, finally, a reduced risk of faultlines. Therefore, our findings can inform the design of cooperative technologies supporting geographically distributed collaboration.
<table>
<thead>
<tr>
<th>Findings</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Textual and permanent nature</td>
<td>The textual and permanent nature of the chat technology was crucial for supporting the negotiation between the vendor and the client.</td>
</tr>
<tr>
<td>Synchronous interaction</td>
<td>The synchronous interaction embedded in chat technology supported the negotiation.</td>
</tr>
<tr>
<td>Informal language</td>
<td>Chat technology made it possible for the participants to interact informally compared to their otherwise formal textual interactions, in particular email.</td>
</tr>
<tr>
<td>Reduce faultlines</td>
<td>Chat technology helps to reduce the risk of subgroup dynamics causing faultlines.</td>
</tr>
</tbody>
</table>
Source: The authors
CONCLUSION
This study investigated a successful negotiation within a Brazilian distributed software development team using chat technology. We found that the chat technology facilitated negotiation by providing a shared context and synchronous interaction embedded in asynchronous functionality, combined with reviewability supporting navigation by the participants. Analyzing the two chat groups and interviewing their participants, we observed that the permanent nature, informal language, navigation, and pre-defined subgroup features were salient for the success of the negotiation and for resolving a potentially critical conflict between two core software developers who were geographically distributed. We argue that chat technology has clear strengths in supporting critical interaction within organizations, which should be taken into account when we, as CSCW researchers, explore and design cooperative technologies supporting geographically distributed collaboration. Therefore, we consider the features of chat technology and how such features can be embedded more generally into the multiple cooperative technologies supporting distributed collaboration both within and outside of the software development domain.
ACKNOWLEDGMENT
We would like to thank Cesumar Institute of Science, Technology, and Innovation (Instituto Cesumar de Ciência, Tecnologia e Inovação – ICETI), Maringá, Paraná, Brazil and MGA Public Management (MGA Gestão Pública Ltda.), Maringá, Paraná, Brazil.
REFERENCES
BJØRN, P.; ESBENSEN, M.; JENSEN, R. E.; MATTHIESEN, S. Does Distance Still Matter? Revisiting the CSCW Fundamentals on
GUO, ZI.; D’AMBRA, J.; TURNER, T.; ZHANG, H. Improving the Effectiveness of Virtual Teams: A Comparison of Video-Conferencing and Face-to-Face Communication in China. IEEE
Received: Aug. 6, 2018.
DOI: 10.3895/rts.v15n37.8655
Copyright: This article is licensed under the terms of the Creative Commons Attribution 4.0 International License.
Procedural City Generator
MSc Master’s Project
Praveen Kumar Ilangovan
i7834000
My Sincere Thanks to
Jon Macey
Peter Comninos
Phil Spicer
Peter Claes
Nicholas Hampshire
Michael Cahsmore
Udhay Shankar
Sundararajan Srinivasakannan
and all my fellow course mates
Graham and other sidefx forum members
Table of Contents:
Abstract
1.0 Introduction
2.0 Previous works in this field
3.0 Technical Background
3.1 L-Systems in City Generation
3.2 Alternative approach to City Generation
3.2.1 Sampling Technique
3.2.2 Voronoi Pattern
3.2.3 Subdivision Technique
4.0 Procedural City Generator
4.1 Assets and nodes
4.2 Terrain Generation
4.2.1 Grey Scale height map as input
4.2.2 Contour map as input
4.2.3 Reason to create “Dist” python sop
4.2.4 Creating Grey scale map
4.2.5 Feeding the details of the water bodies on terrain
4.3 Road Network Generation
4.3.1 Generation Process
4.3.2 IPK_Roadsampler
4.3.3 Selection of Road points
4.3.4 Computing the sample points
4.3.5 Checking with the water bodies
4.3.6 NURBS Curve generation
4.3.7 IPK_RoadSegment
4.4 Street and Plot Generation
4.5 Building Distribution
4.5.1 IPK_Instancer
4.5.2 IPK_BuildNetwork
4.5.3 IPK_BgeoDistributor
4.5.4 IPK_Scaler
5.0 Conclusion
6.0 Problem faced
7.0 Future Improvements
References
List of pictures:
L-System plant
City created using CityEngine
Grey Scale map
Contour map
Trace contour geometry without Dist node
Trace contour geometry with Dist node
Grey scale map created from a contour map using the asset
Terrain generated from the IPK_Terrain asset
Pond Map
Terrain with areas allocated for ponds (primitives in blue)
Diagrammatic representation of pushing the sampled point out of the illegal area
Diagrammatic representation of avoiding a road from intersecting the illegal area
Roads on the terrain with the proposed bridges
Street Patterns
Subdivided Street Pattern
Rescaling the building to fit it within the plot
City generated using the asset
City generated using the asset
Abstract:
The objective of this project is to generate a set of Houdini Digital Assets (HDAs) which can be used as an accessible and interactive tool to automatically generate a realistic and detailed city layout suitable for use in real-time rendering. To accomplish the task, HOM (the Houdini Object Model), an API (Application Programming Interface) which lets users get information from and control Houdini using the Python scripting language, has been used together with the existing powerful Houdini nodes.
1.0. Introduction:
Contemporary computer games are often situated in large urban environments. Animated movies and some feature films also require a digital city for their visual and special effects shots. This necessitates a time-consuming, complex, and expensive process of content creation, which involves modelling the terrain, road network, street patterns, vegetation, and other associated features. Meeting customers' needs in quality, realism, and scale makes the process more complicated. As a result, the time and money that could have been spent improving the gameplay or adding innovative features are lost on content creation. A potential solution to this problem is creating everything by procedural methods. Previous research has shown that procedural techniques like fractals, L-systems, and noises (Perlin noise, Voronoi noise, etc.) can be used to recreate natural phenomena like plants, textures, etc.
The key aspect of procedural techniques is that they characterise the entity, be it geometry, texture, or anything else for that matter, in terms of sequential instructions rather than a static block of data. These instructions can then be called whenever an instance of the asset is initialized, and the various characteristics can be parameterized so that each generated instance is unique from other instances. A typical example is creating 3D primitives, say cuboids with random heights, as sketched below.
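To make this concrete, here is a minimal HOM sketch of the "cuboids with random heights" example. This is our own illustration, not one of the project's actual assets: it assumes it is run inside Houdini (e.g., in the Python Source Editor), where the `hou` module is provided by the application, and it relies only on the standard Box and Merge SOPs with their documented `size` and `t` parameter tuples. All node and variable names are illustrative.

```python
# Illustrative HOM sketch (assumes a running Houdini session; `hou`
# is only available inside Houdini, not as a standalone module).
import random

def scatter_cuboids(count=25, seed=7):
    random.seed(seed)  # same seed -> same result: the recipe, not the data, is stored
    container = hou.node('/obj').createNode('geo', 'random_cuboids')
    merge = container.createNode('merge')
    for i in range(count):
        box = container.createNode('box', 'cuboid_%d' % i)
        height = random.uniform(1.0, 10.0)              # the parameterized "character"
        box.parmTuple('size').set((1.0, height, 1.0))   # Box SOP size (x, y, z)
        box.parmTuple('t').set((2.0 * (i % 5), height / 2.0, 2.0 * (i // 5)))
        merge.setNextInput(box)
    merge.setDisplayFlag(True)
    return container
```

Because only the instructions and the seed are stored, re-running the sketch with a different `count` or `seed` yields a new, unique block of cuboids at no extra authoring cost.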
It is now quite obvious why Houdini was chosen as the platform for this project: the node-based environment of Houdini makes it completely procedural.
2.0 Previous Works in this field:
Procedural techniques like fractals and L-systems were once largely applied to the generation of natural objects like vegetation and textures. Only recently have researchers turned their attention to their application in the context of man-made phenomena, especially to recreating a city. In recent years, many research papers and stand-alone applications have been developed to generate a 3D city.
Yoav I. H. Parish and Pascal Muller of Switzerland came out with a stand-alone application called "CityEngine", which they presented at ACM SIGGRAPH 2001 in a paper titled "Procedural Modelling of Cities". Their application is capable of creating an urban environment from scratch, based on a hierarchical set of comprehensible rules that can be extended according to user needs. [1]
Their system makes use of various image maps, such as an elevation map, land/water/vegetation map, population density map, zone map, street map, and height map, to create a city. The system also invokes two different types of L-systems: one to create streets and the other to create buildings. Users can generate different types of streets and buildings by manipulating the rules of the L-system which governs their production.
Sun and Baciu proposed an alternative method to create a virtual city in their 2002 paper titled "Template based generation of road network for city modelling". Their proposed system uses simple templates and a population-adaptive template to create a virtual city. Like the previous application, the system requires a lot of input in the form of 2D image maps. A color image map which contains geographical information (land/water/vegetation) of a place, an elevation map which contains the height (altitude) information of a place, and the population density map of a place are essential inputs for the system. This system concentrates only on the generation of the road network of a city. Although the output of this system reflects the patterns found in cities, it lacks the complexity and the scale of a real city's road network. [2]
Watson et al. applied an agent-based technique to generate a city in their application titled "CityBuilder". This system is built on the NetLogo™ platform, a multi-agent programmable modelling environment based on the Logo programming language, designed to provide users with a platform to explore emergent behaviour. The CityBuilder system not only models the road network and the street pattern but also simulates their growth and development over a time period. [3]
Kelly and McCabe presented a paper titled "CityGen: An interactive system for procedural city generation" at the Fifth International Conference on Game Design and Technology in 2006. CityGen is an interactive application that provides a complete integrated workspace for city generation and divides the whole city generation process into three stages: primary road generation, secondary road generation, and building generation. To create primary roads, they use a sampling algorithm for computing road trajectories that follow the underlying terrain in a natural and convincing way, and the secondary roads are generated by a subdivision algorithm. The buildings are generated using "string grammars". [4]
3.0 Technical Background:
As mentioned above, techniques like fractals and L-systems form the basis of almost all procedural city generators, although there are some exceptions.
3.1 L-Systems in City Generation:
The word "fractal" was coined by the mathematician Benoît Mandelbrot in 1975 and was derived from the Latin *fractus*, meaning "broken" or "fractured". A fractal is generally "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole," a property called self-similarity. [5]
Fractals are considered to be infinitely complex since they appear similar at all levels of magnification. They are used to generate mountains, clouds, blood vessels, snowflakes, etc. Being a procedural technique, a fractal shape is generated by a recursive algorithm, and the number of recursions defines the detail of the fractal shape. Fractals are, however, limited to self-similar structures and are often superseded by a more flexible method called L-systems.
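As a minimal illustration of this recursion-depth/detail relationship (our own sketch, not part of the project), the following Python function generates the points of a Koch curve; every additional level of recursion replaces each segment with four smaller self-similar copies:

```python
import math

def koch(p0, p1, depth):
    """Polyline points of a Koch curve from p0 to p1.
    Each recursion level replaces a segment with 4 self-similar
    sub-segments, so detail grows as 4**depth."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)              # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)      # two-thirds point
    ang = math.radians(60)              # apex of the equilateral bump
    peak = (a[0] + dx * math.cos(ang) - dy * math.sin(ang),
            a[1] + dx * math.sin(ang) + dy * math.cos(ang))
    pts = []
    for s, e in ((p0, a), (a, peak), (peak, b), (b, p1)):
        pts.extend(koch(s, e, depth - 1)[:-1])  # drop shared endpoints
    pts.append(p1)
    return pts

print(len(koch((0.0, 0.0), (1.0, 0.0), 3)))  # 4**3 segments -> 65 points
```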
An L-system or Lindenmayer system is a parallel string rewriting system, namely a variant of a formal grammar, most famously used to model the growth processes of plant development, but also able to model the morphology of a variety of organisms. L-systems can also be used to generate self-similar fractals such as iterated function systems. L-systems were introduced and developed in 1968 by the Hungarian theoretical biologist and botanist from the University of Utrecht, Aristid Lindenmayer (1925–1989). [6]
In general, rewriting is a technique for defining a complex object by successively replacing parts of a simple object using a set of rewriting rules or productions. An L-system is based on a set of production rules. Each string consists of a number of different modules which are interpreted as commands, and the parameters for these commands are stored within the modules. The components of an L-System are:
**Variables** – V – set of strings or symbols that can be replaced in each production based on the rule
**Constants** – S – set of strings or symbols that remain constant throughout the production.
**Axiom** - ω - set of variables and constants that represent the initial state of an L-System.
**Rules** – P – set of rules that explains and governs the way the variables can be replaced with the combination of constants and other variables. [7]
**Examples of a simple L-System:**
These examples were taken from the book “The Algorithmic Beauty of Plants” written by Przemysław Prusinkiewicz and Aristid Lindenmayer.
A system G is defined by four components as explained in the previous passage and therefore G can be written as a set of four components.
\[ G = \{V, S, ω, P\} \]
Where, \( V = \{a, b\} \)
\( S = \{\} \) (this example uses no constants)
\[ ω = a \]
\[ P_1 = a \rightarrow ab \]
\[ P_2 = b \rightarrow ba \]
**Initial Generation (n = 0):** a
**Next Generation (n = 1):** ab
**(n = 2):** abba
**(n = 3):** abbabaab
This example shows how the length of the string grows in each generation based on the production rules. The above example presented an L-System in its theoretical form; to see it visually, consider the following example.
\[ G = \{V, S, ω, P\} \]
Where, $V = \{X, F\}$
$S = \{+, -, [\, , ]\}$
$\omega = X$
$P : P1 = X \rightarrow F-[[X]+X]+F[FX]-X$
$P2 = F \rightarrow FF$
The remaining details are: the angle is 22.5 degrees and the number of generations is 5.
F means draw Forward
- means turn left by the given angle (here 22.5 degrees)
+ means turn right by the given angle (here 22.5 degrees)
[ means store the current position and angle
] means restore the stored position and angle
Fig 1: L-System plant
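To make the rewriting process concrete, here is a minimal sketch of parallel string rewriting in Python (an illustrative helper, not taken from the thesis implementation). It reproduces the plant grammar above; interpreting the resulting string with a turtle is left out.

```python
# A minimal sketch of parallel L-system string rewriting.
def expand(axiom, rules, generations):
    """Apply the production rules in parallel for a number of generations."""
    s = axiom
    for _ in range(generations):
        # Every symbol is rewritten simultaneously; symbols without a rule
        # (the constants +, -, [ and ]) are copied through unchanged.
        s = "".join(rules.get(c, c) for c in s)
    return s

# The plant example: axiom X, P1 = X -> F-[[X]+X]+F[FX]-X, P2 = F -> FF
rules = {"X": "F-[[X]+X]+F[FX]-X", "F": "FF"}
print(expand("X", rules, 2))
```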
For quite a long time, this string rewriting technique was employed only for the generation of natural phenomena like plants, micro-organisms and fractals. It was Parish and Muller of Switzerland who introduced the most significant and innovative method of using these formal grammars to generate man-made phenomena like roads and buildings. Their system invokes two different types of L-System, namely a self-sensitive L-System and a stochastic parametric L-System.
The self-sensitive L-System is an extended form of L-System which “CityEngine”, the stand-alone application developed by Parish and Muller, uses to generate the road and street network. This system takes the existing shape into account before every generation, so it can be grouped under context-sensitive L-Systems. The inputs for this L-System are a set of 2D image maps: geographical information on the elevation of the land and the water boundaries is obtained from the elevation and land/water/vegetation maps, while socio-statistical data such as the population density and street pattern type of a region are obtained from socio-statistical maps like the population density map and street network map.
Once the required data are fed to the system, the road generation application starts developing the network. Road generation is accomplished through the use of two rule sets: global goals and local constraints. Initial tentative road segments are plotted by the rule set defined in the global goals and are then refined by the local constraints, which reflect the practical constraints of the real world. The user can manage the development of road segments by manipulating the rule sets and by specifying extra parameters like the smoothing angle of road edges, road width, etc.
The land area is subdivided after the generation of the road network to form the allotments where the buildings will be generated by a type of L-System called a stochastic parametric L-System. For every allotment one building is generated, by manipulating an arbitrary ground plan. The modules of the L-system consist of transformation modules (scale and move), an extrusion module, branching and termination modules, and geometric templates for roofs, antennae, etc. The final shape of the building is determined by its ground plan, which is transformed by interpreting the output of the L-system. The output of the L-system is fed to another parser, which translates the resulting string into geometry readable by the visualization systems.
Using extended L-Systems, a complex and detailed city can be developed. The disadvantage of this approach is the iterative nature of L-Systems: as the iterations increase, the number of variables that have to be replaced and the complexity of the system increase exponentially, which makes the computation very expensive. Every time a new constraint is added, many rules have to be rewritten, which makes extensibility a difficult task. The user is also expected to have a high level of expertise in framing L-System grammars to build the city, which in turn reduces the accessibility of this system.
3.2 Alternative Approach to City Generation:
This project doesn’t use L-Systems to generate the city; instead, it uses a less computationally expensive sampling technique for road network generation and a Voronoi pattern generator for street network generation. The sampling technique employed in the generation of the road network is adopted from the research paper titled “Citygen: An interactive system for procedural city generation” by Kelly and McCabe, presented at the *Fifth International Conference on Game Design and Technology, 2007*. A Voronoi pattern is used to generate the street network for the city, and a simple subdivision algorithm is implemented to create the plots where the buildings will be placed.
3.2.1 Sampling Technique:
The sampling technique used in this project for the generation of the road network is not an exact implementation of the source research paper; it has been modified to suit the needs of this project. For instance, the sampling technique given in the research paper is bidirectional, that is, the sample points are plotted simultaneously from the source and the destination points and terminate in the middle. The algorithm implemented in this project, however, is unidirectional. A unidirectional approach is used to reduce the computation and keep the workflow simple.
A road is generated starting from a source point and sampling a set of points at regular intervals to define a path to the destination. The number of samples (n) to be plotted between the source and the destination and the maximum deviation angle ($\theta$) of a sample point that can be plotted from the source point are specified by the user.
Say, Number of Samples = n
Distance between the source and the destination point = D
Distance between the two sample points (d) = D / n
Each sample point travels a distance $d$ from the previous one. For each sample, a random angle is chosen within the range specified by the user, and a quaternion is formed from this angle and the axis perpendicular to the source point. This quaternion is multiplied with the current sample position, which lies a distance “$d$” away from the source (or from the previous sample point) and collinear with the source and the destination point, to obtain the deviated position. Once the user-specified number of samples has been plotted, the segment points are tested for any intersection with illegal areas like ponds and other water bodies, and then fed to a NURBS curve generating function which generates a curve between the source and the destination point taking these sample points as control points.
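A minimal 2D sketch of this unidirectional sampling step is shown below (hypothetical function names; the thesis performs the equivalent computation with quaternions inside a Houdini Python SOP, and the curve fitting smooths out any drift from the destination):

```python
import math
import random

def sample_road(src, dst, n, max_dev_deg):
    """Plot n sample points from src towards dst; each step deviates from
    the straight-line heading by a random angle within +/- max_dev_deg.
    The destination itself is appended later as the final control point."""
    sx, sy = src
    dx, dy = dst
    d = math.hypot(dx - sx, dy - sy) / n          # spacing between samples
    base = math.atan2(dy - sy, dx - sx)           # heading towards destination
    points, x, y = [], sx, sy
    for _ in range(n):
        theta = base + math.radians(random.uniform(-max_dev_deg, max_dev_deg))
        x, y = x + d * math.cos(theta), y + d * math.sin(theta)
        points.append((x, y))
    return points
```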
The checking algorithm to find whether a sample point is within an illegal area and moving it away from that area if it is within that area are explained in detail in the later part of this thesis.
3.2.2 Voronoi Pattern:
Voronoi diagrams were demonstrated as a method of procedural generation by S. Worley in his paper titled “A cellular texture basis function”, in which he detailed an algorithm that partitions space into a random array of cells, creating a cellular-looking pattern. This technique is widely used in the procedural generation of textures like tree bark, skin, cobblestone, sun-baked mud, etc.
A Voronoi diagram can be defined (as in mathworld.wolfram.com) as the partitioning of a plane with $n$ points into convex polygons such that each polygon contains exactly one generating point and every point in a given polygon is closer to its generating point than to any other. A Voronoi diagram is sometimes also known as a Dirichlet tessellation. The cells are called Dirichlet regions, Thiessen polytopes, or Voronoi polygons.
In the procedural city generating context, the Voronoi diagram is used to create cellular patterns which appear similar to the street patterns of a city. The idea of using a Voronoi pattern in a city generator was first proposed in the research paper titled “Template based generation of road network for city modelling” by Sun, Baciu et al in 2002. According to their method, the road network for a city can be developed from the Voronoi pattern: the user feeds in a population density map of a city, and their system extracts the density points from the map, which are then fed into a Voronoi pattern generator. The generator uses these points as attractor points, each attracting all the points closer to it. The edges or cell boundaries of the resulting Voronoi diagram are used to create the interconnected road network.
A similar technique is used in this project to generate the secondary roads and the street patterns. Based on the distribution of points fed to the Voronoi generator, a variety of street patterns can be generated: the radial pattern seen in cities like Paris and Rome, the organic pattern seen in cities like Delhi and Chennai, and the raster pattern as in New York and Manhattan.
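As an illustration of this idea, the sketch below builds a street-like cell pattern from seed points using SciPy's Voronoi implementation (an assumption for illustration; the thesis instead uses a downloaded Houdini Voronoi asset [9]):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.random((50, 2))        # uniform random seeds give an organic pattern

vor = Voronoi(seeds)
# Every finite ridge (cell boundary shared by two seeds) becomes a candidate
# street segment; ridges reaching infinity (index -1) are discarded.
streets = [vor.vertices[list(r)] for r in vor.ridge_vertices if -1 not in r]
print(len(streets), "street segments")
```

Clustering the seeds around a centre in rings would give the radial pattern, and placing them on a jittered grid the raster pattern.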
3.2.3 Subdivision Technique:
Once the road network and the street patterns are generated for the city, the next step is plot creation. Each cell formed by the Voronoi pattern generator is subdivided to create plots, the bases on which the buildings will later be placed. The level of subdivision of each cell can be specified by the user. By varying the level of subdivision, the size and density of the plots in a street change, which differentiates the various zones of a city: if the plots are small and closely packed, the street appears like a residential area; if the plots are big and closely packed, it appears like a commercial area; and an industrial area can be generated by having plots bigger than in the commercial area with a sparse density distribution.
4.0 Procedural City Generator:
The procedural city generator developed in this project is an accessible and interactive Houdini tool comprising a set of digital assets to automatically generate a realistic and detailed city layout. Creating a city using this tool in Houdini is a four step process. The city generation starts off with generating the terrain of the city, followed by the primary or highway road network generation. The third step is the street and plot generation, and the final step is the distribution of the buildings onto the plots generated.
The rest of the thesis is a detailed explanation of how the tool accomplishes each step to successfully generate a realistic and detailed city layout suitable for use in real time rendering. The tool makes use of the powerful node based Houdini environment along with a powerful API (HOM – Houdini Object Model) to generate the procedural city.
This procedural city generating tool is a set of 8 digital assets (HDA) and 6 Python SOP nodes.
4.1 Assets and nodes:
As mentioned earlier, the city generation is a four step process. The custom developed assets and python nodes used in each step of the city generating process are listed below.
Step 1: Terrain Generation
Asset → IPK_Terrain, IPK_Contour_Heightmap
Python nodes → IPK_Contour, Dist
Step 2: Road Network Generation
Asset → IPK_Road, IPK_RoadSegment
Python nodes → IPK_Roadsampler
Step 3: Street and Plot Generation:
Asset → IPK_Streetgen, IPK_Orgpattern, IPK_Radpattern
Step 4: Building Distribution:
Asset → IPK_Buildnetwork
Python node → IPK_Bgeodistributor, IPK_Scaler, newnode.
4.2 Terrain Generation:
Terrain generation is the first step in the process of generating a 3D city. The input for the terrain generator can either be a contour map or a greyscale height map which is widely known as elevation map.
4.2.1 Grey Scale height map as input:
If the input is a greyscale height map, create an instance of IPK_Terrain at the SOP level inside a geo node. The asset has an option to set the size of the city that is going to be generated. After setting the proper city size, the 2D greyscale height map is loaded in the parameters tab where it is asked for. On clicking the “Apply Height map” option, the grid is converted into a terrain.
The actual operation performed inside the asset, which converts the grid into a terrain based on a 2D image map, is transferring the grey value in the image map as a translation in the positive Y axis to each point on the grid. Houdini has a built-in function called “pic()” which does this. White pixels return a value of 1 while black pixels return 0; intermediate grey values return a float between 0 and 1. The values can be multiplied by a height factor to increase the height of the terrain.
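A standalone sketch of the same height-transfer idea is given below (an assumption for illustration using Pillow; the asset itself relies on Houdini's pic() expression):

```python
from PIL import Image

def apply_heightmap(points, image_path, height_factor=10.0):
    """points: list of (x, z) coordinates normalised to [0, 1]^2.
    Returns (x, y, z) triples whose y comes from the greyscale map."""
    img = Image.open(image_path).convert("L")      # greyscale, 0..255
    w, h = img.size
    out = []
    for x, z in points:
        grey = img.getpixel((int(x * (w - 1)), int(z * (h - 1)))) / 255.0
        out.append((x, grey * height_factor, z))   # white = high, black = low
    return out
```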
Fig 3: Grey Scale map
4.2.2 Contour map as input:
If the input is a contour map, it has to be converted into a greyscale height map before being fed to IPK_Terrain to generate the terrain. Converting a contour map into a greyscale height map is a straightforward process in this procedural city generator.
Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.
Feed the contour map as an input to the IPK_Contour_Heightmap asset. The technique behind converting a contour map to a height map is simply converting each contour line in the map into a primitive and filling it with a grey value based on the altitude at that point. A Trace SOP is used to read the image information at the COP level and convert it into a set of primitives at the SOP level.
Fig 4: Contour map
4.2.3 Reason to create “Dist” python sop:
As mentioned above, the map is inverted at the COP level, traced using a Trace SOP in Houdini and displayed in the viewport. The way the Trace SOP converts the image into primitives was not as expected: each contour line in the map was converted into two primitives while tracing. As a result, the traced geometry had far more primitives than the number of contour lines.

Fig 5: Trace contour geometry without Dist node
To overcome this, the traced geometry is fed into a python SOP called Dist. The python SOP cleans up the traced geometry and returns it with the same number of primitives as the number of contour lines. It achieves this by calculating the distance between a constant point outside the geometry and each primitive in the geometry. HOM has a built-in function in the hou module which does this: hou.Prim.nearestToPosition(pos3), which returns the distance between the primitive and a point. The distance is calculated for each primitive in the geometry; since the two primitives formed for each contour line are very close to each other, their calculated distances are almost the same, and one primitive out of each such pair is deleted.
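The clean-up logic can be sketched as follows (a standalone illustration with an assumed tolerance, using precomputed per-primitive distances in place of the hou calls):

```python
def dedupe(prims, tol=1e-2):
    """prims: list of (prim_id, distance_to_reference_point) pairs.
    Keeps one primitive out of every pair whose distances nearly coincide;
    the discarded ones are the duplicates the Trace SOP produced."""
    kept = []
    for pid, dist in sorted(prims, key=lambda p: p[1]):
        if not kept or abs(dist - kept[-1][1]) > tol:
            kept.append((pid, dist))
    return kept   # primitives to keep; the rest are deleted in the SOP
```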
Fig 6: Trace contour geometry with Dist node
4.2.4 Creating Grey scale map:
Once the traced geometry is cleaned up, the user has to enter the details of the contour map to convert it into a greyscale map. The details to be fed in are the contour interval, the altitude range (lowest and highest altitude) and the number of index contours in the map. Index contours are the contour lines in the map labelled with a numerical value (their altitude). For each index contour, the user then enters its primitive number, its altitude and the group of contours whose altitude is dependent on that index contour.
The process of converting the contour lines into greyscale starts once the required inputs are entered. The primitive with the lowest altitude is filled with black and the primitive with the highest altitude is filled with white; primitives with intermediate altitudes are filled with an interpolated grey value. The user can now render the scene and save the greyscale map as an image, which can then be fed to the IPK_Terrain asset to generate the terrain.
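The interpolation itself is presumably a simple linear mapping, along these lines (a minimal sketch, not the asset's actual code):

```python
def altitude_to_grey(alt, lo, hi):
    """Linear interpolation: the lowest altitude maps to 0.0 (black),
    the highest to 1.0 (white)."""
    return (alt - lo) / float(hi - lo)
```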
Fig 7: Grey scale Map created from a contour map using the asset
4.2.5 Feeding the details of the water bodies on terrain:
IPK_Terrain also takes a water map as input to store the details of the water bodies in a city, which will be used in the road network generation. This step is optional: if the user wants water bodies in the city, a black and white water map in which the white areas denote the water bodies can be fed in, and the information is stored in the asset.
4.3 Road Network Generation:
Terrain generation is followed by the process of generating the highways, or the primary road network, of a city. The input for generating the highways is a set of “junction points”, a group of points in the city which act as the sources and destinations of the highways. These junction points are fed to the road network generator in the form of a map. The map is not a complex road map of a city, just black dots on a white background, where the black dots represent the junction points of the city.
4.3.1 Generation Process:
Create an instance of the IPK_Road asset inside the same geo node where the terrain asset was instanced, connect the input of IPK_Road to the output of the terrain asset and feed the junction map to the road asset. The asset has a python node called “IPK_Roadsampler” in it which generates the highway network of the city. As in the terrain generation, the black dots in the image are traced as primitives using a Trace SOP; a point is scattered for each primitive, and the junction points are then available for highway generation.
4.3.2 IPK_Roadsampler:
The steps performed by this python sop for generating the highways or the primary road network are as follows:
- Selection of the points between which the roads are going to be generated.
- Computing samples using sampling techniques.
- Checking the samples for their presence in illegal areas like water bodies and changing their position if they fall within such areas.
- Feeding the computed and checked points to a NURBS curve generating function, which generates smooth NURBS curves; these curves are the highways.
All the calculations done in generating the road network are 2D calculations that do not take the height of the terrain into consideration. Once the NURBS curves are generated, Houdini's “lattice” node can be used to ray the curves onto the terrain, which avoids a great deal of computation.
4.3.3 Selection of Road points:
The user specifies the number of branches each junction point can have. Based on that, each junction point is assigned a random number of branches within the range specified by the user. The junction points are stored in a list. The first junction point is taken and looped through the rest of the points down the list to find possible destination points between which road segments will be generated. The loop is executed until the number of branches assigned to that point is reached.
The selection process involves checking the proposed segment between the source and destination point (the source point being the junction point currently looped through every other point down the list, and the destination point being the point it is currently checked against) for intersection with any other approved road segments. If there is an intersection, the proposed road segment is not approved and the source point is checked against the next point. If the proposed road segment doesn’t intersect any approved road segment, the angle it makes with the approved road segments is checked; this angle must be above a certain threshold for the proposed road to become an approved road. Approved road points are stored in a list.
This process is repeated for all the junction points in the list, producing a set of approved source and destination points stored in a list.
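A hedged sketch of this selection loop is given below; segments_intersect() and angle_ok() are hypothetical stand-ins for the intersection and angle-threshold checks described above:

```python
def select_road_points(junctions, branches, segments_intersect, angle_ok):
    """junctions: list of 2D points; branches[i]: branch budget of junction i."""
    approved = []
    for i, src in enumerate(junctions):
        count = 0
        for dst in junctions[i + 1:]:
            if count >= branches[i]:
                break
            proposed = (src, dst)
            if any(segments_intersect(proposed, seg) for seg in approved):
                continue               # crosses an approved road: reject
            if not angle_ok(proposed, approved):
                continue               # angle below threshold: reject
            approved.append(proposed)  # the segment becomes an approved road
            count += 1
    return approved
```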
**4.3.4 Computing the sample points:**
Each element in the approved road points list contains two junction points: the source point and the destination point. These points are fed to the sampling technique explained earlier in this thesis and the samples are calculated. The user can specify the number of samples to be calculated between the source and the destination point, as well as the angle of deviation of a sample point from the source point.
**4.3.5 Checking with the water bodies:**
This is an additional preference the user can turn on if the city has water bodies. When this option is on, the computed sample points undergo a series of checks before they are fed into the NURBS curve generator. The details about the water bodies are obtained from the terrain node.
The check works as follows: each computed sample in each road segment is tested for whether it lies inside an illegal area. Illegal areas are simply the primitive numbers on the terrain which contain water bodies.
If a computed sample is found to be within the bounds of an illegal area, the exact primitive in which it lies is found. The point is then pushed out of the primitive in the direction perpendicular to the edge of the primitive that intersects the line drawn between the computed sample and its source point. Whether it is pushed to the left or the right of the primitive depends on the distances between the point and the left and right bounding edges of the primitive: it is pushed to the side with the smaller distance. The moved point now lies outside the primitive, and it is checked again until it is found to be entirely out of the illegal area.
Once the point is pushed away from the illegal area, its new position is updated in the approved road segments list. “Approved road segments” is a list which holds every computed sample point’s position along with its source and destination points. The next check is to find out whether the road segment between the updated point and its previous point in the list intersects or crosses the illegal area. If it is found to intersect, the intersection points are found and moved out of the illegal area by a fixed distance along the direction vector they make with the centre point of the illegal area.
The intersection between the lines is found using the following algorithm adopted from the book “Real-Time Collision Detection” by Christer Ericson: if the line segments AB and CD intersect, then the signed areas of the triangles ABC and ABD will have opposite signs. The signed area is positive if the triangle winds counter-clockwise, negative if it winds clockwise and zero if the triangle is degenerate (collinear or coincident points). The formula to calculate the signed area is as follows.
\[ \text{SignedArea}(A, B, C) = (A_x - C_x)(B_y - C_y) - (A_y - C_y)(B_x - C_x) \] [8]
When the segments do intersect, the same signed areas can be used to calculate the point of intersection.
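The test translates directly into code; the following sketch is the standard 2D form of Ericson's method (helper names are mine):

```python
def signed_area(a, b, c):
    """Twice the signed area of triangle abc; positive if counter-clockwise,
    negative if clockwise, zero if degenerate."""
    return (a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) * (b[0] - c[0])

def segments_intersect(a, b, c, d):
    """True if segment AB properly crosses segment CD."""
    d1 = signed_area(c, d, a)   # which side of CD point A lies on
    d2 = signed_area(c, d, b)   # which side of CD point B lies on
    d3 = signed_area(a, b, c)   # which side of AB point C lies on
    d4 = signed_area(a, b, d)   # which side of AB point D lies on
    return d1 * d2 < 0 and d3 * d4 < 0
```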
If the moved-out points still intersect the illegal areas, a bridge is proposed at that site to avoid the expensive computation involved in recursively re-checking these points. The approved road segments list is then appended with the bridge points.
Fig 13: Roads on the terrain with the proposed bridges.
4.3.6 NURBS Curve generation:
After the check for intersection with the illegal areas is done, the approved road segments list is split into two lists: one storing the road points and the other storing the bridge points. They are differentiated based on a bool value stored with each point: one for a bridge point and zero for a road point.
The road segments are then fed into a NURBS Curve generation function which generates the road curves.
This is the operation performed in the IPK_Roadsampler python node: the junction points are fed in, and it returns the NURBS curves, which are then swept with a line to form a visually appealing road.
The road network generated by IPK_Roadsampler can be changed and customized by changing the seed value and other parameters like the number of samples, deviation angle, etc. New roads can be added by specifying the point numbers between which the roads are to be generated; similarly, existing road segments can be deleted by specifying the source and destination point numbers.
Finally, the road network is latticed with the terrain using a lattice node, which rays the network exactly on top of the terrain.
4.3.7 IPK_RoadSeg:
IPK_Roadseg is an asset which can be used to create a road segment between any two points, which need not be in the list of junction points. The user selects the source and destination positions with the mouse, and the road segment is laid and latticed on top of the terrain. It uses exactly the same algorithm as the IPK_Road asset.
4.4 Street and Plot Generation:
The third step in the process of generating a procedural 3D city after the terrain and the road network generation is the street and the plot generation. In this step the street network and the plots on which the buildings will be distributed in the later stage are generated.
As mentioned earlier in this thesis, the streets are generated by a Voronoi pattern and the plots by a subdivision algorithm. Both streets and plots are generated by a single asset, IPK_Streetgen. It makes use of two other assets: IPK_Orgpattern (which produces the organic street pattern found in cities like New York and New Delhi) and IPK_Radpattern (which produces radial streets as in Paris and Rome). Both assets have the same workflow except for the way the points fed into the Voronoi pattern generator are distributed; in the radial pattern generator, the points are distributed in a circular fashion. The Voronoi pattern generator used in the asset was downloaded from a website [9].
The user groups regions on the terrain based on the type of streets. Each group is then connected to the input of its own IPK_Streetgen asset; each group should have a separate IPK_Streetgen asset. Inside the asset, the group is converted into a 2D plane on which the street pattern generation and subdivision occur. The 2D plane is latticed with the terrain at the end.
Streets are generated, and each cell of the street formed by the Voronoi pattern is fed into the subdivision step, where the cell is cooked (a Boolean operation) with a box of type mesh. The density and size of the plots are directly related to the density and size of the mesh, so the user can control both by specifying the density and size of the box.
4.5 Building Distribution:
The final step in the generation of the city is the distribution of buildings and other geometries like vegetation, street lamps, etc onto the plots generated in the previous step.
To prevent Houdini from loading and saving the highly detailed, extremely large geometry files at every frame, which may be costly in both memory and time, Mantra can be instructed to load the geometry from disk instead of having it embedded in the IFD files. This is done using the Mantra delayed load shader: Mantra still has to read the geometry, but instead of processing the embedded geometry it can load the geometry directly from disk.
This procedural city generator uses the Mantra delayed load to distribute the geometries onto the plots at render time. In each plot a point is created, and for each point a geometry is instanced; the point stores the information about which geometry it should load and the scale of the geometry to fit within the plot.
This process is done with the help of an asset and three python nodes. They are IPK_BuildNetwork, IPK_Instancer, IPK_BgeoDistributor and IPK_scaler.
4.5.1 IPK_Instancer:
IPK_Instancer is a python node which sets up the entire geometry distribution network. This python node is an object level operator. Once the streets are created, an instancer is created for each street network, and the user uploads the geometries that should be instanced in that particular street. On clicking the “Create Node” button, geo nodes for each uploaded geometry are created along with their respective delayed loads at the SHOP level; all the references are made by the python node itself. Similarly, clicking “Create Asset” after giving a path creates the IPK_Buildnetwork node at that path, the path being the location of the street network. The user can feed the output of the street network to the asset, which performs the geometry distribution explained later in this part.
Once distribution is done, the user sees the street filled with bounding boxes instead of geometries; this reduces memory usage and speeds up the process. Pressing the “Create Instance” button then creates the instances, automatically creating an object merge inside each instance node and referencing the point group of the build network. Now, on clicking the render button, the user can see the streets with the actual buildings.
4.5.2 IPK_BuildNetwork:
This is a digital asset which takes in each plot of the incoming geometry, scatters a point at its centroid and assigns attributes to that point which are later used by the delayed load to instance a geometry onto it at render time. This asset contains two python nodes which actually do all the work: IPK_BgeoDistributor and IPK_Scaler.
4.5.3 IPK_BgeoDistributor:
This is a python node which selects a geometry from the list of geometries the user has given, based on probabilities set by the user. On selecting the geometry, it creates a detail attribute which stores the path of the selected geometry. The geometry is then fed to a file node and a bounding box for it is created. The bounding box is copied onto the point, and the index number of the geometry whose bounding box is on the point is assigned as a point attribute, which will be used by the delayed load at instancing time. The list of geometries loaded by the user is referenced by both the buildnetwork asset and the instancer; this avoids storing the entire path of the geometry as an attribute on the point.
The size of the incoming plot is noted and the bounding box is fit to the size of the plot. The scale value is assigned as a point attribute to the point which will be used by the instancer node at the time of instancing.
This process of scaling based on the plot’s size works as expected only when the plot is square. To make the bounding box always fit within the plot, a “rescaling algorithm” is applied by the IPK_Scaler node; it is invoked only when the plot is not square.
4.5.4 IPK_Scaler:
The base face of the bounding box is checked for intersection with any of the edges of the plot. If an intersection is found, the distance between the centroid of the plot and the intersection point, and the distance between the centroid and the vertex of the bounding box base which lies outside the plot, are calculated. The smaller of the two is divided by the larger one, and the result is the scaling factor. This scaling factor overwrites the scale value assigned to the point at the initial stage.
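In code, the rescaling step amounts to the following (a hedged sketch with hypothetical helper names):

```python
import math

def rescale_factor(centroid, intersection_pt, outside_vertex):
    """Scale factor that pulls the bounding box back inside the plot."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    d_edge = dist(centroid, intersection_pt)   # centroid -> plot edge
    d_corner = dist(centroid, outside_vertex)  # centroid -> offending corner
    return min(d_edge, d_corner) / max(d_edge, d_corner)
```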
This algorithm works well as long as the incoming plot is a convex polygon; to make it work on concave polygons, linear programming would be needed, so care has been taken in the plot generation step to avoid concave polygons. A disadvantage of this algorithm is that it sometimes scales the geometry down very small. In that case, the geometry is replaced by trees, and the index number and scale value assigned to the point are changed to the new values. The size of the geometry is checked by measuring the area of the base of its bounding box after scaling down.
At the end of this process, each plot will have a point with the attributes required by the instance node.
This is a short description of how the procedural city generator works. For more information about each asset and its parameters, kindly refer to the help pages of the assets.
Fig 17: City generated using the asset
5.0 Conclusion:
The Procedural City Generator created in this project is capable of generating a digital city from scratch. It can generate the terrain, the road network, proposed sites for bridges and the street network, and it can distribute the buildings and other geometries which fill the city.
6.0 Problem faced:
A model city was set up using the asset, and it failed to render every time it was rendered as a sequence. The model city had around 8000 geometries in it. When rendered as a single frame, it rendered fine, but command line rendering, rendering from IFDs, using batch_hrender scripts, etc. all failed to render the scene. Sometimes it would render a frame and then crash, but most often it crashed at the end of the first frame.
The problem was approached with a trial and error technique: first the shadows were turned down, then the motion blur, and finally the number of geometries was reduced by half. But nothing really helped.
I was able to render only after splitting my scene into different components: rendering the terrain and skyscrapers in one pass, the rural buildings in another pass, the apartment-type buildings in a third pass and finally the dome image, then creating IFD files and using the batch_hrender script.
7.0 Future Improvements:
- Shape grammars could be incorporated, generating buildings on their own based on rules given by the user, which would reduce the time spent modelling the buildings. However, the user would need a certain level of expertise to frame rules for shape grammars.
- In the viewport, low level buildings can be displayed instead of bounding boxes.
- The road segments could be made editable after their creation. To make this happen, a way must be found to make the edit node inside the digital asset the current node from outside the digital asset.
References:
Other References:
Designing a Benchmark for the Assessment of XML Schema Matching Tools
Fabien Duchateau, Zohra Bellahsene
To cite this version:
HAL Id: lirmm-00138527
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00138527
Submitted on 26 Jun 2007
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Designing a Benchmark for the Assessment of XML Schema Matching Tools
Fabien Duchateau
LIRMM - UMR 5506
Université Montpellier 2
34392 Montpellier Cedex 5 - France
duchateau@lirmm.fr
Zohra Bellahsène
LIRMM - UMR 5506
Université Montpellier 2
34392 Montpellier Cedex 5 - France
bella@lirmm.fr
ABSTRACT
Over the years, many XML schema matching systems have been developed. As matching systems grow in complexity, a benchmark that assesses the capabilities of schema matching systems and provides uniform conditions and the same testbed for all schema matching prototypes has become indispensable. However, developing a benchmark for the schema matching problem is very challenging, given the wide range of techniques that can be applied to assist in schema matching. In this paper, we present the foundations and desiderata of a benchmark for XML schema matching. Moreover, we have extended the notion of quality of an integrated schema by proposing new scoring functions. Finally, we have designed and implemented XBenchMatch, an application which takes as input an ideal schema and the result of a matching from a schema matching prototype (i.e. a set of mappings and/or an integrated schema), and generates as output statistics on the quality of this input. Our proposal is aimed at providing two kinds of evaluation: (i) matching quality evaluation, based on the use of the quality measures, and (ii) matching performance evaluation. The first criterion is very important in automatic schema matching, and the second is crucial at large scale, when the schemas to be matched are very large. In this paper, we present XBenchMatch, a benchmark for testing and assessing schema matching tools, and report the experimental results of some matching tools over a large corpus of schemas using our benchmark.
1. INTRODUCTION
Over the years, several approaches to schema matching [6, 9, 14, 18, 22, 25, 28] have been proposed, demonstrating their benefit in different scenarios, and many matching systems have been designed. Most of the papers describing a schema matching tool provide an experiments section. However, these experiments reflect a particular scenario using real-world schemas. For example, a matching tool can provide acceptable matching quality with good performance in a specific scenario but be unreliable and slow in another case. Thus, it is difficult to compare two schema matching tools and to determine which one performs best, and end-users might not know which one is the most appropriate for their task.
To the best of our knowledge, there is no complete benchmark for schema matching tools. In [8], the authors present an evaluation of schema matching tools which suffers from two drawbacks. First, by evaluating the matching tools with the scenarios provided in their respective papers, one cannot objectively judge the capabilities of each matching tool. Secondly, some matching tools generate an integrated schema instead of a set of mappings, and the measures provided to evaluate a set of mappings are not sufficient to evaluate the quality of an integrated schema. Another proposal for evaluating schema matching tools was made in [28]; it extends [8] by adding time measures and relies on real-world schemas to evaluate the matching tools, but the evaluation system has not been implemented. Our work extends the criteria provided in [8] by adding scoring functions to evaluate the quality of integrated schemas, and it goes further on the evaluation aspect: all the matching tools are evaluated against the same scenarios. In this paper, we present the foundation of a benchmark for XML schema matching tools. Our evaluation system involves a set of criteria for testing and evaluating schema matching tools and is aimed at providing uniform conditions and the same testbed for all schema matching prototypes. Our approach focuses on evaluating the matching tools in terms of matching quality and performance. Next, we also aim at giving an overview of a matching tool by analysing its features and deducing some tasks it might fulfil. This should help an end-user choose among the available matching tools depending on the criteria required to perform his task. Finally, we provide a testbed involving a large schema corpus, described in Section 7, that can be used by everyone to quickly benchmark matching algorithms.
Here we outline the main contributions of our work:
- We describe the notion of benchmark for the schema matching application. More precisely we list the different features involved in this process, and we give a methodology on how to evaluate them and to choose the most appropriate for a defined task.
- We have extended the notion of quality for a schema, by proposing new measures like structural overlap.
- We have designed XBenchMatch, an application which takes as input an ideal schema and the result of a matching from a schema matching system (i.e., a set of mappings and/or an integrated schema). It generates statistics on the quality of this input, based on the criteria defined above.
The rest of the paper is organised as follows: first we give some definitions and preliminaries in Section 2. In Section 3, the list of criteria is explained. In Section 4, we present the main features of schema matching tools. In Section 5 the scoring functions of quality are described. Section 6 briefly presents our XBenchMatch application and the results of our experiments. Section 9 contains the related work; and in Section 10, we conclude and outline some future work.
2. PRELIMINARIES
In this section, we define the main notions used in this paper.
Definition 1 (Schema): A schema is a labeled unordered tree \( S = (V_S, E_S, r_S, \text{label}) \) where \( V_S \) is a set of nodes; \( r_S \) is the root node; \( E_S \subseteq V_S \times V_S \) is a set of edges; and \( \text{label} \) assigns to each node a label drawn from a countable set of labels.
Definition 2 (Semantic Similarity Measure): Let \( E_1 \) be a set of elements of schema 1, and \( E_2 \) be a set of elements of schema 2. A semantic similarity measure between two elements \( e_1 \in E_1 \) and \( e_2 \in E_2 \), noted as \( S_m(e_1, e_2) \), is a metric value based on the likeness of their meaning/semantic content, given as:
\[
S_m : E_1 \times E_2 \to [0, 1], \qquad (e_1, e_2) \mapsto S_m(e_1, e_2)
\]
where a value of 0 means total dissimilarity and a value of 1 stands for total similarity.
Definition 3 (Automatic Schema Matching): Given two sets of schema elements \( E_1 \) and \( E_2 \) and a similarity measure threshold \( t \), we define automatic schema matching between two elements \( e_1 \) and \( e_2 \), noted \( \text{match}(e_1, e_2) \), as follows:
For all \( (e_1, e_2) \in E_1 \times E_2 \):
- If \( S_m(e_1, e_2) < t \) then \( \text{match}(e_1, e_2) = \text{false} \)
- Else if \( S_m(e_1, e_2) \geq t \) then \( \text{match}(e_1, e_2) = \text{true} \) and \( d = S_m(e_1, e_2) \), where \( d \) is the similarity degree
Threshold \( t \) may be adjusted by an expert, depending upon the strategy, domain or algorithms used by the match tools.
Example 2.1: If match(address, address) is calculated using the edit distance algorithm\(^1\), the value of \( d \) is 0.857, and if the 3-gram\(^2\) algorithm is used, the result for \( d \) is 0.333. For another example, match(dept, department): the edit distance value of \( d \) is 0 and the 3-gram result is 0.111. These examples show that the threshold has to be adjusted by an expert depending upon the properties of the strings being compared and the match algorithms being applied.
Definition 4 (Best Match selection): There can be more than one match for an element \( e_1 \in E_1 \) in \( E_2 \). In such a situation the match with the maximum similarity degree has to be selected. This case can be formally defined as follows:
Given \( E_2' \subseteq E_2 \) of size \( n \), such that for every \( e_j \in E_2' \), \( \text{match}(e_i, e_j) \) is true, where \( 1 \leq j \leq n \), the best match for element \( e_i \) of \( E_1 \), noted \( \text{match}_{ib} \), is given as:
\[
\text{match}_{ib} = \max_{j=1}^{n} S_m(e_i, e_j)
\]
Definition 5 (Schema Mapping): Given \( E_1 \) a set of elements of schema 1, \( E_2 \) a set of elements of schema 2 and \( I \) a set of mapping identifiers, we define a mapping between two elements \( e_1 \in E_1 \) and \( e_2 \in E_2 \) by the following function, noted \( \text{Map} \):
\[
\text{Map} : I \times E_1 \times E_2 \times F_s \to I \times E_1 \times E_2 \times [0, 1] \times K
\]
where \( F_s \) is a set of functions performing the similarity measure, \( d \) is the similarity degree returned by \( \text{match}(e_1, e_2) \) and \( K \) is the set of mapping expressions (e.g. equivalence, synonym, inclusion, etc.), depending upon the data model represented by schemas 1 and 2.
Schema mapping can be uni-directional i.e., from schema 1 toward schema 2, or bidirectional i.e., the correspondence holds in both directions e.g. if an element \( e_i \) from schema 1 is mapped to an element \( e_2 \) of schema 2 then there exists another correspondence for element \( e_2 \) of schema 2 toward element \( e_1 \) of schema 1.
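As an informal illustration of Definitions 3 and 4, the following sketch performs thresholded matching with best-match selection (the similarity function sim is a placeholder, e.g. a normalized edit distance):

```python
def best_matches(E1, E2, sim, t):
    """For each e1 in E1, keep the element of E2 with the highest
    similarity degree d = sim(e1, e2), provided d >= t (Definition 3).
    Ties are broken arbitrarily, as in Definition 4."""
    result = {}
    for e1 in E1:
        d, e2 = max(((sim(e1, e2), e2) for e2 in E2),
                    key=lambda pair: pair[0])
        if d >= t:
            result[e1] = (e2, d)   # the best match and its degree
    return result
```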
3. DESIDERATA
The schema matching benchmark needs to have the following properties in order to be complete and efficient. It needs to be:
- **Extensible**: the benchmark is able to evolve as the field progresses. Thus, future schema matching tools can be benchmarked, and new measures can be added to evaluate the matching quality. The benchmark deals with well-formed XML schemas, and a set of mappings can easily be converted into the default mappings format using a wrapper, so the outputs of future matching tools should be handled. As for new measures, we intend to release the benchmark as open source, allowing everyone to add new measures or functionality.
- **Portable**. The benchmark should be OS-independent, since the matching tools might run on different OS. This requirement is fulfilled by using Java.
- **Simple** since both end-users and schema matching experts are targeted by this benchmark.
- **Scalable** on two aspects: creating new benchmark scenarios is an easy task. And a benchmark composed of many scenarios should be easy to construct and evaluate.
- **Generic**: it should work with most of the available matchers. Thus, the criteria have been restricted to
\(^1\) \( \text{match}(s_1, s_2) = \max\left\{0,\; \dfrac{\min(|s_1|, |s_2|) - \text{EditDistance}(s_1, s_2)}{\min(|s_1|, |s_2|)}\right\} \)
\(^2\) \( \text{match}(s_1, s_2) = 1 - \dfrac{|\text{gram}(s_1)| + |\text{gram}(s_2)| - 2\,|\text{gram}(s_1) \cap \text{gram}(s_2)|}{|\text{gram}(s_1)| + |\text{gram}(s_2)|} \), where \( \text{gram}(s) \) is the set of 3-grams of \( s \)
the average capabilities of the matchers. For example, some schema matching tools are able to match a large number of schemas at a time, but some others do not. This involves the number of schemas to be limited to two. Another example is: some schema matching tools may provide as output both an integrated schema and a set of mappings while some others only provide a single output.
All these requirements should be met to provide an acceptable matching benchmark. Next we focus on the criteria dedicated to the schema matching process itself.
4. MATCHING TOOLS FEATURES
Some schema matching tools have enhanced the match task with pre-match and post-match phases. This section covers the general features which define the characteristics and capabilities of the matching tools. It is organized in four parts describing these features: (i) the pre-match phase, (ii) the matching method, (iii) the output of the schema matching tool and (iv) the post-match phase.
4.1 Pre-Match Phase
This phase normally includes configuration of various parameters e.g. setting weights, thresholds of the matching algorithms etc. It can have three possibilities:
- **External resources.** They make use of some external resources, like ontologies (domain specific), thesauri or dictionaries (for example Wordnet) [13].
- **Tuning.** A matching tool might be flexible by allowing some parameters or thresholds (example 2.1) to be tuned by the user [12]. This step may be optional or compulsory, but these parameters generally affect other criteria. For instance, they can be varied to enable better performance by degrading the quality.
- **Training.** Some approaches provide a set of machine-learning-based matchers for specific types of complex matchings. For example, LSD [10] uses machine learning algorithms for matching as well as for summing up the match results for each pair of attributes compared.
This pre-matching step involves more work at the beginning. However, this effort is often rewarded since it positively affects the matching quality. In our benchmark, the pre-match appears as a list of pre-processing tasks of the matching tool, performed at this phase. For example, use of dictionaries, use of ontologies, use of synonyms table, etc.
4.2 Matching Method
Schema matching is a complex problem, which starts by discovering similarities between schema elements’ names, mainly by using basic string matching approaches adapted from the information retrieval domain. These algorithms depend on basic techniques of element-level string matching, linguistic similarity, or constraint likeness at the element level or at the higher schema structure level. Similarly, graph algorithms utilized in schema matching are a special form of constraint matching [25]. The kernel of a schema matching tool is the matcher; it corresponds to the match operator defined in [5]. Some tools use a composite approach to combine different matchers, for example LSD [10] and COMA++ [1]. Our benchmark, by means of the scoring functions described in Section 5, allows testing the quality of a matching algorithm or of a combination of matching algorithms for a given scenario.
4.3 The Output
There are three main issues regarding with the output:
- **Type of output.** Most matching tools generate either an integrated schema or a set of mappings. The interesting aspect is to study how they produce the integrated schema. Our benchmark, by means of dedicated scoring functions (e.g. structural overlap), allows testing whether the method is appropriate. For example, is the method of building an integrated schema from scratch, or from a particular input schema, a good method with regard to the ideal schema?
- **Format of the output.** This is an important feature which determines the ways this output can be used. Since our benchmark deals with XML schemas, the output can be queried with XQuery.
- **Complexity of the mappings.** Several types of mappings need to be handled. All matching tools support the 1:1 mapping, i.e. one element from one schema is mapped to one element of another schema. Complex matchings, involving several elements and considered as 1:n, n:1 and n:m [22], are not supported by all matching tools. The relationship between the mapped elements can also be specified: for example, some matching tools specify that an element price is mapped to the element amount with the relationship price = amount × VAT.
Our benchmark is able to deal with all kinds of mappings.
4.4 Post-Match Phase
The post-match phase uses different measures to select, for an element, the best correspondence from a set of possible matches that express semantic equivalence for that element. These techniques are termed match quality measures in the literature [8]. In our benchmark, the post-match is handled by the overall and schema proximity measures.
5. QUALITY MEASURES
The aim of the automatic schema matching process is to avoid a manual, laborious and error-prone task in large-scale scenarios. For this purpose we have designed a set of score functions for evaluating the quality of the integrated schema. They are complemented by the performance aspect, although it simply consists of the matching execution time. Our benchmark also provides some statistics on resource consumption (maximum memory needed, disk space storage) and on the collection of schemata used (dimensions of the integrated schema such as min/max depth and width, number of nodes, etc.).
5.1 Mapping Quality Measures
**Precision** is an evaluation criterion very appropriate to the schema matching framework. Precision calculates the proportion of relevant mappings among the extracted mappings. A 100% precision means that all the mappings extracted by the system are relevant.
Another typical measurement coming from the machine learning approach is **recall** which computes the proportion of relevant mappings extracted among all the relevant mappings. A 100% recall means that all relevant mappings have been found.
The main objective of schema matching is to avoid a manual process, or at least to save time, since an expert is still required: the output of the matcher needs to be checked and possibly completed. Hence the **overall** measure [19] has been specifically designed to evaluate the post-match effort, that is, the amount of work needed to add the relevant mappings that have not been discovered and to remove those which were extracted by the matcher but are not relevant. The overall measure can take negative values. It is often important to find a compromise between recall and precision; the **F-measure** [27] is a measurement taking both evaluation criteria into account.
As explained in [8], the F-measure is more optimistic than overall.
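To make these measures concrete, the following Python sketch computes all four for mappings represented as sets of (source, target) pairs; the set representation is our illustrative choice, and the overall computation follows the formulation usually given for the measure of [19].

```python
def mapping_quality(extracted, relevant):
    """Precision, recall, F-measure and overall for two sets of
    mappings, each a set of (source_element, target_element) pairs."""
    true_positives = len(extracted & relevant)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    # F-measure: harmonic mean of precision and recall.
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    # Overall estimates the post-match effort and turns negative
    # as soon as precision drops below 0.5.
    overall = recall * (2 - 1 / precision) if precision else float("-inf")
    return precision, recall, f_measure, overall

# 3 of the 4 extracted mappings are relevant; 1 relevant mapping is missed.
extracted = {("name", "fullname"), ("price", "amount"),
             ("city", "town"), ("zip", "idx")}
relevant = {("name", "fullname"), ("price", "amount"),
            ("city", "town"), ("street", "road")}
print(mapping_quality(extracted, relevant))  # (0.75, 0.75, 0.75, 0.5)
```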
5.2 Integrated Schema Quality Measures
A matching tool may provide three types of output: a set of mappings, an integrated schema, or both. In the latter cases, our benchmark is able to evaluate the semantic integrity of the integrated schema. The previous score functions are not appropriate for this purpose, since they do not deal with the structure of the schema. We have designed the following measures to reach this goal.
The first measure takes into account the **backbone** of the tree. More formally, it shows whether both trees share a large common subtree, seen as a backbone. This measure returns a value between 0 (no common subtree) and 1 (both trees are the same) and is given by the following formula:
\[
\text{Backbone} = \frac{|LSub(S_i \cap S_j)|}{|S_i|}
\]
(1)
where \(LSub(S_i \cap S_j)\) represents the largest common subtree between trees \(S_i\) and \(S_j\), and \(|S_i|\) is the number of elements of the tree \(S_i\). This measure reflects the structural similarity of the largest shared component of two trees. Note that the backbone measure is mainly effective on similar trees.
In the following, a subtree is defined as ‘an extract’ of a tree which is composed of at least two nodes and has its own root. All the nodes in this subtree must be descendants of this and only this subtree root.
Considering an ideal (model or expert) schema tree \(S_i\) and another tree noted \(S_j\) which is evaluated against the ideal tree, we define \(Sub\) as the set of all disjoint subtrees which are common to \(S_i\) and \(S_j\). \(|S_i|\) stands for the number of elements in tree \(S_i\), and \(k\) for the total number of elements of all subtrees in \(Sub\).
Based on these assumptions, the **structural overlap** is a measure representing the number of elements which are shared by both trees and are included in a common subtree. A 0 value represents a lack of common subtrees while a value closer to 1 shows that most of the elements are included in a common subtree. The following formula processes this structural overlap measure.
\[
\text{StructuralOverlap} = \frac{k}{|S_i|}
\]
(2)
Another interesting measure we have designed is the **structural proximity**. This measure extends the structural overlap by adding several metrics seen as differences. Indeed, the structural overlap only measures the percentage of elements in the common subtrees, and this needs to be enhanced to evaluate a structural proximity between the two trees. Thus, we have added the number of common subtrees: if \(S_i\) and \(S_j\) are similar, they have only one common subtree, which is the whole tree, and the more common subtrees there are, the less similar the trees are. Another difference is the number of missing elements, i.e. the elements of \(S_i\) that are not in any of the common subtrees. As \(S_i\) is the ideal schema, each of its nodes missing from the common subtrees lowers the structural proximity between the two trees. We first define \(o\) as the number of elements in \(S_i\) that are not included in any common subtree, i.e. \(o = |S_i| - k\). The structural proximity is then obtained by the following formula:
\[
\text{StructuralProximity} = \frac{k}{|S_i| \times \sqrt{|Sub| + o}}
\]
(3)
This formula yields a value between 0 and 1, with 0 meaning the trees are totally different and 1 meaning they are identical.
Finally, the last measure, denoted **schema proximity**, computes the similarity between two trees. It takes into account both the structural aspect and the dissimilarity between the tree elements. This dissimilarity gathers the extra elements, namely those that appear in \(S_p\) but not in \(S_i\), and the missing elements, which are in \(S_i\) but not in \(S_p\). We define this dissimilarity as \(d = (|S_i| - |Com|) + (|S_p| - |Com|)\), where \(Com\) stands for the set of elements common to the \(S_i\) and \(S_p\) trees. The schema proximity formula is then:
\[
\text{SchemaProximity} = \frac{1}{|Sub|} \times \frac{k - d}{|S_i|}
\]
(4)
The value computed by the schema proximity measure stands between 1 for a complete similarity and \(-\infty\) for a total dissimilarity between the two trees.
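Assuming the tree comparison has already produced the quantities defined above (the size of the largest common subtree, the element count \(k\) of the subtrees in \(Sub\), the number \(o\) of uncovered elements of \(S_i\) and the dissimilarity \(d\)), the four measures reduce to simple arithmetic, as in this Python sketch; the tree-matching step itself is not shown.

```python
import math

def backbone(lsub_size, si_size):
    # Formula (1): size of the largest common subtree over |Si|.
    return lsub_size / si_size

def structural_overlap(k, si_size):
    # Formula (2): elements inside common subtrees over |Si|.
    return k / si_size

def structural_proximity(k, si_size, nb_subtrees, o):
    # Formula (3): overlap penalised by fragmentation (|Sub|)
    # and by the o elements of Si left outside any common subtree.
    return k / (si_size * math.sqrt(nb_subtrees + o))

def schema_proximity(k, d, si_size, nb_subtrees):
    # Formula (4): structure (k) minus element dissimilarity (d),
    # normalised by |Si| and the number of common subtrees.
    return (1 / nb_subtrees) * (k - d) / si_size

# Identical 10-node trees: one common subtree covering everything.
print(structural_proximity(k=10, si_size=10, nb_subtrees=1, o=0))  # 1.0
print(schema_proximity(k=10, d=0, si_size=10, nb_subtrees=1))      # 1.0
```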
6. XBENCHMARK: XML SCHEMA MATCHING BENCHMARK
To evaluate and compare XML schema matching tools, we have implemented XBenchMatch. The main goal of this application is to provide two kinds of evaluation: (i) matching quality evaluation, based on the measures described in section 5, and (ii) matching performance. The first criterion is very important in automatic schema matching, and the second is crucial at large scale and when the schemas to be matched are very large. Finally, our tool should also help an end-user choose the most appropriate schema matching tool according to his requirements. This section gives an overview of our benchmark.
Figure 1 describes the architecture of our prototype. The input files may be of two types, either a well-formed integrated schema or a set of mappings. Two modules, the XML parser and the wrapper respectively, are in charge of converting them into an internal structure. The file generated by the matching tool must be of the same type as the expert one. Creating new wrappers ensures extensibility by supporting new mapping formats. Next, the benchmark engines compute the different measures between the ideal file and the matcher's file. XBenchMatch finally outputs various statistics (performance, size and depth of input schemas, ...) and the quality measures explained in section 5. Schema matching systems can also be compared on one or more scenarios, especially through their F-measure and structural proximity. Note that the user may also choose the schema corpus that has been matched by the matcher.
Scenario 1. General schemas are small-sized schemas describing a person. The ideal set of mappings and the ideal integrated schema have been produced manually by an expert.
Scenario 2. Business schemas dealing with an order. The first schema is drawn from the XCBL collection and has about 160 elements. The second schema also describes an order but is smaller, with only 12 elements. This scenario reflects the possibility of matching a large schema with a smaller one. A human expert has manually generated the set of mappings between these schemas.
Scenario 3. University schemas, taken from the Thalia collection presented in [15]. Each schema has about 20 nodes and the set of mappings contains 15 mappings. An expert has manually mapped the two schemas and produced both output matching files: the set of mappings and the integrated schema.
Scenario 4. Biology schemas. The two schemas come from different protein-domain-oriented collections, namely UniProt and GeneCards. Both are quite large, with around 400 XML paths in GeneCards and 57 paths in UniProt. A domain expert has manually mapped both schemas and produced 57 mappings.
Table 1 summarizes the characteristics of the scenarios used in the benchmark. The user can run the default benchmark, which evaluates the matcher's integrated schema against the ideal one on the four scenarios described above (person, order, university and biology).
### Table 1: Details about the evaluation scenarii.
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Person</th>
<th>University</th>
<th>Order</th>
<th>Biology</th>
</tr>
</thead>
<tbody>
<tr>
<td>NB nodes (S1 / S2)</td>
<td>11 / 10</td>
<td>18 / 18</td>
<td>20 / 844</td>
<td>719 / 80</td>
</tr>
<tr>
<td>Avg NB of nodes</td>
<td>11</td>
<td>18</td>
<td>437</td>
<td>400</td>
</tr>
<tr>
<td>Max depth (S1 / S2)</td>
<td>4 / 4</td>
<td>5 / 3</td>
<td>3 / 3</td>
<td>7 / 3</td>
</tr>
<tr>
<td>NB of Mappings</td>
<td>5</td>
<td>15</td>
<td>10</td>
<td>57</td>
</tr>
</tbody>
</table>
XBenchMatch is able to calculate the matching quality of the matchers' integrated schemas against the ideal integrated schemas. It outputs the following measures: precision, recall, F-measure, overall, structural overlap and structural proximity. A plot is automatically drawn to show the quality according to the number of common elements between the two trees. Another plot focuses on the schema structure by comparing the structural overlap and proximity to the number of elements in the common subtrees.
As XBenchMatch is meant to be generic and extensible, it is also possible to run the benchmark on other scenarios, and the GUI provides this option. The process is identical to the default benchmark, except that the user needs to choose, for a specific scenario, both the ideal integrated schema and the matcher's generated integrated schema. The measures showing the quality of the matcher's integrated schema are then displayed in the main window.
Finally, XBenchMatch enables one to compare the quality of different matching tools on one or several scenarios. For example, figure 2 shows the comparison of three matchers: COMA++, PORSCHE [24] and Similarity Flooding [19].
8. EXPERIMENT RESULTS
In this section, we present the evaluation results for the following matching tools: COMA++, PORSCHE, Similarity Flooding and BTreeMatch. Our benchmark application is easily extended to other matchers; we note, however, that it is hard to find available matchers to test. COMA++ and Similarity Flooding are considered by the schema matching community to provide good matching quality. PORSCHE [23] is a recent, performance-oriented tool developed in our team. BTreeMatch [11] is another recent prototype from our team, which aims to provide both good performance and good quality.
8.1 Quality of COMA++
COMA++ generates an integrated schema in an ASCII tree format. We therefore developed a wrapper to convert it into an XML schema, the standard format of our benchmark. The quality of the integrated schema is given in figures 3 and 4. The first remark is that COMA++ is able to keep most of the relevant elements, since the recall equals 1 on each scenario. However, the precision shows that COMA++ becomes less accurate as the size of the schema increases: many of the discovered elements should not be in the integrated schema. Except on the first scenario, dealing with person descriptions, COMA++ needs much post-match effort to add the non-discovered elements and to remove the non-relevant ones. This is illustrated by a negative overall value in three scenarios. Note, however, that this matching tool can use a list of synonyms, and none was provided in these experiments. The domain-specific biology scenario is particularly difficult for such a matching tool, which mainly relies on a combination of terminological measures. As for the structural quality, the results follow the same trend: the two small scenarios provide an acceptable quality in terms of schema structure, but this quality decreases with bigger schemas.
To improve the readability of the graph, the overall value has been floored at -1 instead of -∞. As explained in [19], a negative overall value should be considered as not significant.
COMA++ also produces a set of mappings; their quality is shown in figure 5. The results are difficult to interpret. Indeed, COMA++ discovers most of the relevant mappings in two scenarios (F-measure above 0.6) but does not perform as well in the two others (F-measure below 0.1). Although the set of mappings does not convey as much information as the integrated schema, the quality is better on the set of mappings than on the integrated schema; the post-match effort is therefore reduced.
8.2 Quality of PORSCHE
PORSCHE produces an integrated schema. The set of mappings it produces relates the input schemas to this integrated schema, whereas in the other tested schema matching tools the mappings hold between the input schemas.
Therefore, we decided to measure only the quality of the integrated schema. The results of the experiments with PORSCHE on the four scenarios are depicted in figures 6 and 7. Both the structural and quality measures on the first small scenario are acceptable, with an F-score around 0.8 and a structural proximity above 0.4; the post-match effort is minimized in this case. However, when the number of elements increases, the quality tends to decrease: PORSCHE either discovers many elements of which only a few are relevant, or discovers few common elements, most of which are relevant. The structural quality values are quite low; thus, with large schemas, the integrated schemas are not similar to the ones provided by the experts. Like COMA++, PORSCHE normally uses a list of synonyms, which may explain the average results on the order scenario. Besides, one can notice the importance of precision for the overall measure: a good precision avoids a negative overall value, even with a low recall, as shown in figure 6.
8.3 Quality of Similarity Flooding
The next experiments were carried out on Similarity Flooding (SF), implemented in the Rondo matching tool. The quality of the integrated schema is given in the two graphs of figures 8 and 9. In contrast to the previous matching tools, SF achieves better quality on large schemas. Although the precision stands around 0.5, the structural proximity and the recall are equal to 1 when the number of elements exceeds 75. As this matching tool propagates the benefit of a discovered match to the neighbouring nodes, it seems natural that it provides better results on large schemas. The quality on smaller schemas is also acceptable, with values above 0.4, although their structural quality is low. We also notice that even in a specific scenario like biology, where other matchers may require auxiliary information (e.g., a list of synonyms), the quality of SF's integrated schema does not decrease.
8.4 Quality of BTreeMatch
Figure 10 depicts the quality of the mappings produced by BTreeMatch. We note that on small schemas the quality is very low, with an F-score below 0.2; however, this measure reaches 0.6 on larger schemas. This behaviour can be explained by the matching algorithms used by BTreeMatch: it is based on both terminological and structural techniques, like Similarity Flooding. It thus seems that the structural algorithms are able to match large schemas while ensuring an acceptable quality.
8.5 Performance evaluation
Table 2: Matching performance on the different scenarios.
<table>
<thead>
<tr>
<th></th>
<th>Person</th>
<th>University</th>
<th>Order</th>
<th>Biology</th>
</tr>
</thead>
<tbody>
<tr>
<td>NB nodes (S1 / S2)</td>
<td>11 / 10</td>
<td>18 / 18</td>
<td>20 / 844</td>
<td>719 / 80</td>
</tr>
</tbody>
</table>
Table 2 reports the matching performance of each matching tool on the evaluation scenarios. All matchers are able to match the small schemas in less than one second. However, when one schema of the scenario is large, COMA++ and Similarity Flooding are less efficient. Similarity Flooding propagates similarities until a fixpoint is reached, which makes the process take more time. On the other hand, PORSCHE, which has been designed to match many large schemas, shows no performance degradation with schemas of up to 800 nodes.
8.6 Discussion
These experiments show that some matchers are better suited to some scenarios. For example, COMA++ and PORSCHE generate integrated schemas of acceptable quality on small schemas. Similarity Flooding appears to be the more quality-oriented option when no external similarity oracle is available and the match decision is mainly structure-oriented. Given the diversity of state-of-the-art schema matching tools, more experiments with our benchmark are needed. This will enable us to classify the current tools by domain and by matching activity (matching, integration, ...), thus turning our benchmark into a handy tool for both naive and expert users.
9. RELATED WORK
9.1 Tentative for Benchmarking Schema Matching Tools
To the best of our knowledge, there is no complete benchmark for schema matching tools. In [8], the authors present an evaluation of schema matching tools and discuss the main criteria required to reach this goal; a summary of the capabilities of each matching tool is also provided. However, as the authors explain, it is quite difficult to evaluate matching tools for several reasons. They are not always available, even as a demo, so it is not possible to test them against specific sets of schemas. Some require specific resources to be efficient, like an ontology or a thesaurus, which are not always available. Finally, some matching tools, for example Rondo, take specific file formats as input. Beyond the fact that it was published five years ago, this evaluation suffers from two drawbacks. First, by evaluating the matching tools on the scenarios provided in their respective papers, one cannot judge the capabilities of each matching tool reliably. Second, some matching tools generate an integrated schema instead of a set of mappings, and the measures provided to evaluate a set of mappings are not sufficient to evaluate the quality of an integrated schema.
A proposal for evaluating schema matching was made in [28]. It extends [8] by adding time measures and relies on real-world schemas to evaluate the matching tools. However, the input is limited to a set of mappings, while some matchers provide a more interesting output by building an integrated schema. Moreover, the evaluation system has not been implemented: in contrast to our work, it is neither available nor extensible.
Our work extends the criteria list provided in [8] by adding measures to evaluate the quality of integrated schemas. It also goes further on the evaluation aspect: all the matching tools are evaluated on the same scenarios, enabling a better and more thorough comparison.
9.2 Schema Matching Tools
In this section we review works classified under schema matching. The surveys [22, 25, 28] cover solutions at the schema level (metadata) as well as at the instance level (data), from both the database and artificial intelligence domains. Most of the methods discussed in these surveys compare two schemas (with or without their data instances) and compute quality matchings from the elements of the first schema to those of the second. Some of the tools also support merging the schemas based on the matchings found in the first step. Here we present the main schema matching tools, in particular the ones we have tested with our benchmark.
The objective of TRANSCM [20] is to transform instances of a source schema into a target schema. Input schemas can be DTDs or OODB schemas. Internally, the schemas are converted into labeled trees, and the match process is performed node by node in a top-down manner. TRANSCM presumes a high degree of similarity between the two schemas. It supports a number of matchers (rules) to find correspondences between schema nodes. Each rule may in turn combine multiple match criteria, e.g. name similarity and the number of descendants. The rules are assigned distinct priorities and applied in a fixed order. If more than one target element is found as a possible match, user interaction is required to select the match; if no match is found, the user is allowed to apply a new rule.
The DIKE [21] prototype implements a hybrid approach to automatically find synonymy, hyponymy and homonymy correspondences between elements of Entity-Relationship (ER) schemas. User-specified sets of synonyms, hyponyms and homonyms are utilized, constructed by an expert or using a thesaurus. Besides the linguistic and syntactic comparison, the main algorithm is a structural matcher, which performs a pair-wise comparison of elements from the input schemas. The similarity weight of two elements is increased if the algorithm finds some similarity between their related elements.
CUPID [18] is a generic, hybrid schema matching prototype consisting of a name matcher and a structural one. It has been used for XML and relational schemas. Internally, schemas are converted into trees, in which additional nodes are added to resolve multiple or recursive relationships between a shared node and its parent nodes. First, the linguistic similarity of each pair of nodes is calculated using external oracles of synonyms and abbreviations. Then the structural matcher is applied to the tree structures in a post-order manner. This technique derives similarities for non-leaf nodes from the similarity of their leaves. For
each pair of nodes, their linguistic and structural similarity are aggregated into a weighted similarity using a weighted sum. If the weighted similarity exceeds a threshold, the structural similarity of the leaf pairs is increased; otherwise, it is decreased. For each source element, CUPID selects as match candidate the target element with the highest weighted similarity exceeding a given threshold.
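This aggregation and selection step can be sketched as follows in Python; the weight and threshold values, and the input format, are illustrative assumptions rather than CUPID's actual settings.

```python
def weighted_similarity(lsim, ssim, w_struct=0.5):
    """Weighted sum of linguistic and structural similarity,
    as in hybrid matchers such as CUPID (w_struct is assumed)."""
    return w_struct * ssim + (1 - w_struct) * lsim

def match_candidates(pairs, threshold=0.6):
    """pairs: iterable of (source, target, lsim, ssim) tuples.
    Keep, for each source element, the target with the highest
    weighted similarity exceeding the threshold."""
    best = {}
    for src, tgt, lsim, ssim in pairs:
        wsim = weighted_similarity(lsim, ssim)
        if wsim > threshold and wsim > best.get(src, (None, 0.0))[1]:
            best[src] = (tgt, wsim)
    return best

pairs = [("price", "amount", 0.4, 0.9), ("price", "cost", 0.9, 0.5)]
print(match_candidates(pairs))  # price -> cost, wsim = 0.7
```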
Similarity Flooding [19] has been used with relational, RDF and XML schemas. These schemas are first converted into labeled graphs, and the SF approach uses a fixpoint computation to determine correspondences of 1:1 local and m:n global cardinality between corresponding nodes of the graphs. The algorithm has been implemented as a hybrid matcher, in combination with a name matcher based on string comparisons: the prototype first computes an initial element-level name mapping, and then feeds these mappings to the structural SF matcher. The similarity weight of two elements is increased if the algorithm finds some similarity between their related elements. In the modular architecture of Rondo, the components, such as schema converters, the name and structural matchers, and filters, are available as high-level operators and can be flexibly combined within a script for a tailored match operation.
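The fixpoint idea can be conveyed by the following deliberately simplified Python sketch, in which each pair's score is reinforced by the scores of neighbouring pairs and renormalised every round; it is a flavour of the approach, not the exact propagation formula of [19].

```python
def similarity_flooding(init_sim, neighbours, iterations=10, alpha=0.5):
    """init_sim: {(a, b): initial name-based score};
    neighbours: {(a, b): list of adjacent node pairs}.
    Returns scores after a fixed number of propagation rounds."""
    sim = dict(init_sim)
    for _ in range(iterations):
        new = {}
        for pair in sim:
            incoming = sum(sim.get(n, 0.0) for n in neighbours.get(pair, []))
            new[pair] = init_sim.get(pair, 0.0) + alpha * incoming
        top = max(new.values()) or 1.0
        sim = {p: s / top for p, s in new.items()}  # normalise into [0, 1]
    return sim

# A strong ("a", "x") match boosts the weak neighbouring ("b", "y") pair.
init = {("a", "x"): 1.0, ("b", "y"): 0.1}
nbrs = {("a", "x"): [("b", "y")], ("b", "y"): [("a", "x")]}
print(similarity_flooding(init, nbrs))
```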
The goal of PROTOPLASM [6] is to provide a flexible and customizable framework for combining different match algorithms. At present, CUPID and Similarity Flooding are used as its base matchers. SQL and XML schemas, converted into graphs internally, have been successfully matched. PROTOPLASM supports various operators for computing, aggregating and filtering similarity matrices. Using a script language, it allows the workflow of the match operators to be flexibly defined and customized.
COMA/COMA++ [1, 9] is a generic, composite matcher with very effective match results. It uses an architecture similar to that of PROTOPLASM, but its range of match algorithms is more complete. It can process relational, XML and RDF schemas as well as ontologies. Internally, it converts the input schemas into trees for structural matching. For linguistic matching it utilizes user-defined synonym and abbreviation tables, like CUPID, along with n-gram name matchers. The similarities of pairs of elements are collected in a similarity matrix. At present it uses 17 element-level matchers. For each source element, the elements with similarity higher than a threshold are displayed to the user for final selection. COMA++ also supports a number of other features, such as merging, saving and aggregating the match results of two schemas.
S-MATCH/S-MATCH++ [2, 14] takes two directed acyclic graph-like structures, e.g. XML schemas or ontologies, and returns equivalence and subsumption correspondences between pairs of elements. It uses the external oracle WordNet for linguistic matching, along with a structural matcher, to return subsumption-type matches. It is also heavily dependent on SAT solvers, which decreases its time efficiency. At present it uses 13 element-level matchers and 3 structural-level matchers.
The work of Smiljanic et al. [26] shows how a personal schema, used for querying, can be efficiently matched and mapped to a large repository of related XML schemas. The method identifies the fragments within each schema of the repository which best match the input personal schema, thus minimizing the target search space. The prototype implementation, called Bellflower, uses the k-means data mining algorithm for clustering. The authors also demonstrate that this work can be implemented as an intermediate phase within the framework of existing matching systems. The technique yields an efficient system, but with some reduction in effectiveness.
PORSCHE [23] utilizes a tree mining technique to cluster and holistically match and merge a large number of schemas (represented as trees). It produces approximate matchings and generates an integrated schema with mappings from the source schemas to this integrated schema. It has been devised to cater for both quality and performance in large-scale scenarios, using domain-specific linguistic matching (domain-specific synonym and abbreviation oracles). It works in three steps. First, in the pre-mapping part, schema trees are input to the system as a stream of XML, and the scope and node number are calculated for each node of the input schema trees. Other statistics, such as each schema's size, maximum depth and node parents, are also calculated, and a listing of nodes and a list of distinct labels are constructed for each tree. Next, a linguistic matcher identifies semantically distinct node labels in the label list. The user can set the level of label similarity to A) label string equivalence, B) label token set equivalence (abbreviation table) or C) label synonym token set equivalence (synonym table). PORSCHE then derives the meaning of each individual token and combines these meanings to form a label concept. Finally, similar labels are clustered together. Since each input node remains attached to its label object, this intuitively forms clusters of similar label nodes within a given schema.
The BTreeMatch [11] approach uses the B-tree as the main structure to locate matches and create mappings between XML tree structures. The advantage of searching for mappings with a B-tree is that B-trees have indexes which significantly accelerate this process. For example, consider two schemas S1 and S2 with 8 and 9 elements respectively. Matching these schemas entails 72 matching possibilities with an algorithm that tries all combinations. By indexing in a B-tree, we are able to reduce this number of matching possibilities, thus improving performance. BTreeMatch does not use a matrix to compute the similarity of each pair of elements. Instead, a B-tree whose indexes represent tokens is built and enriched as new schemas are parsed, and the discovered mappings are also stored in this structure. Each token references all labels which contain it. For each input XML schema, the same algorithm is applied: the schema is parsed element by element in a preorder traversal, which enables the context vector of each element to be computed. Each label is split into tokens, and each of those tokens is then fetched in the B-tree, resulting in two possibilities:
- no token is found, so we just add it to the B-tree with a reference to the label;
- or the token already exists in the B-tree, in which case we try to find semantic similarities between the current label and the ones referenced by the existing token (see the sketch after this list). We assume that in most cases similar labels have a common token (and if not, they may be discovered with the context similarity).
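The token-indexed lookup can be mimicked with a plain dictionary standing in for the B-tree, as in the Python sketch below; a real B-tree additionally provides ordered, indexed access, and the tokenization rule and labels here are illustrative.

```python
from collections import defaultdict

def tokenize(label):
    """Split a schema label into lowercase tokens (illustrative rule)."""
    return label.lower().replace("_", " ").replace("-", " ").split()

index = defaultdict(set)  # token -> labels containing it

def add_label(label):
    """Index a new label and return the previously seen labels sharing
    a token with it -- the candidates to compare semantically."""
    candidates = set()
    for token in tokenize(label):
        candidates |= index[token]
        index[token].add(label)
    return candidates

add_label("DeliveryAddress_Street")   # hypothetical schema labels
print(add_label("street_name"))       # {'DeliveryAddress_Street'}
```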
9.3 Data Instance Based Schema Matching
In this section we consider some recent prototypes which use schema instance data and machine learning techniques to find possible matches between two schemas. These matchers compute all match and mismatch possibilities among the attributes of the two source schemas to come up with the best results.
AUTOMATCH [4] is the predecessor of AUTOPLEX [3]. It uses a single-strategy, machine-learning match technique: a Naive Bayes algorithm analyses the instances of the input relational schema fields against a previously built global schema. The match result consists of 1:1 correspondences with global cardinality.
CLIO [16] has been developed at IBM. It has a comprehensive GUI and provides matching for XML and SQL schemas. It uses a hybrid approach, combining an approximate string matcher for element names with a Naive Bayes learning algorithm for exploiting instance data. It also helps produce transformation queries (SQL, XQuery or XSLT) from the source to the target schema, based on the computed mappings.
LSD [10] is a composite matcher. It requires an already developed global schema, against which new schemas and their data instances are matched. LSD uses machine learning algorithms both for matching and for combining the match results of each pair of attribute comparisons. LSD has been further utilized in corpus-based matching [17], which builds a corpus of existing schemas and their matches; input schemas are first compared to the schemas in the corpus before being compared to each other. Another extension based on LSD is IMAP [7], where the authors use LSD to find 1:1 and n:m mappings among relational schemas. It provides a new set of machine-learning-based matchers for specific types of complex matchings, e.g. name being a concatenation of firstname and lastname, and it also provides information about the prediction criteria for a match or mismatch.
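As a toy illustration of instance-based matching, the Python sketch below scores a column of instance values against per-attribute token models learned from a global schema; it is a crude stand-in for the Naive Bayes learners used by AUTOMATCH or LSD, not their actual formulation.

```python
from collections import Counter

def train(examples):
    """examples: {attribute_name: [instance values]} from the global
    schema. Builds a token-frequency model per attribute."""
    return {attr: Counter(t for v in values for t in v.lower().split())
            for attr, values in examples.items()}

def classify(models, column_values):
    """Score each known attribute by how often its model has seen the
    column's tokens; return the best-scoring attribute."""
    tokens = [t for v in column_values for t in v.lower().split()]
    scores = {attr: sum(model[t] for t in tokens)
              for attr, model in models.items()}
    return max(scores, key=scores.get)

models = train({"city": ["new york", "paris"], "name": ["alice", "bob"]})
print(classify(models, ["london", "paris"]))  # 'city'
```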
10. CONCLUSION
In this paper, we present a benchmark for XML schema matching tools. Our approach focuses on evaluating matching tools in terms of matching quality and performance. Our work extends the criteria provided in [8] by adding new scoring functions which evaluate the quality of integrated schemas, and it extends the evaluation methodology: in XBenchMatch, all the matching tools are evaluated on the same scenarios, producing a more objective comparison. Next, we also aim at giving an overview of a matching tool by analysing its features and deducing some criteria it might fulfill; this should help an end-user choose among the available matching tools depending on his requirements. Furthermore, we provide a testbed involving a large schema corpus that anyone can use to quickly benchmark new matching algorithms.
We are planning to extend our experiments to the CUPID prototype and to other matching tools as they become available. We also plan to include an evaluation of scalability. This does not require any extension of our benchmark; we only need to manually generate an expert schema for a large number of input schemas to be matched.
11. ACKNOWLEDGMENTS
The authors would like to thank all the researchers who made available their schema matching tools.
12. REFERENCES
# Table of Contents
- Introduction
- Comparisons of Traditional and Cloud-native Applications
- Application Migration to the Cloud: Planning and Patterns
- Cloud Infrastructure Considerations for Application Developers
- OpenStack APIs and SDKs for Application Developers
- OpenStack Services to Support Application Development
- Containerizing Applications on OpenStack
- Moving Applications from Cloud to Cloud
- Checklist: Public to Private Cloud Migration
- Summary
CONTRIBUTORS
Ricardo Ameixa, *Software Developer*, Volkswagen AG
Carol Barrett, *Cloud Software Planner*, Intel Corporation
Marcela Bonell, *Cloud Engineer*, Intel Corporation
Tyler Britten, *Technical Marketing Manager*, Red Hat
Kathy Cacciatore, *Consulting Marketing Manager*, OpenStack Foundation
Joanna H. Huang, *General Manager*, Aptira
Frank Kloeker, *Technology Manager Cloud Applications*, Deutsche Telekom
Amrith Kumar, *Founder, CTO*, Tesora
Mark Lamourine, *Sr. Software Developer*, Red Hat
Gerd Prüßmann, *Director Cloud Solutions*, Mirantis
Megan Rossetti, *Cloud Infrastructure Operations*, Walmart
Mark Smith, *Senior Product Marketing Manager*, SUSE
Yih Leong Sun, PhD, *Senior Software Cloud Architect*, Intel Corporation
Shamail Tahir, *Director of Product Management*, Athenahealth
Susan Wu, *Director of Technical Marketing*, Midokura
Introduction
We are all aware that today's business climate is fast-paced and competitive, requiring mobility and massive reach. These characteristics drive organizations in all industries toward software. New agile, cloud-native development technologies are transforming infrastructures and creating new applications to explore new opportunities, making this an exciting time for application architects and developers.
New application frameworks such as containers require an open and programmable cloud infrastructure. OpenStack cloud software is the integration engine that enables your organization to take full advantage of the intersection of new application and infrastructure technologies. That’s why OpenStack users can deploy new technologies quickly, and use already-integrated enterprise systems and networks.
Donnie Berkholz, Research Director, Development, DevOps and IT Ops, 451 Research, stated, “We’re seeing container adoption within OpenStack users at ... two to five times larger than container adoption outside OpenStack users, whether that’s containers alone or container orchestration management tooling.”
Source: “The Future of OpenStack + Kubernetes” panel at CoreOS Tectonic Summit, December 2016; (https://www.youtube.com/watch?v=j4onxTl7m-k)
OpenStack is governed by the Four Opens (https://governance.openstack.org/tc/reference/opens.html): open source, design, development, and community. If you need a feature or service not already provided, you can collaborate and contribute it. OpenStack users welcome the opportunity to add capabilities that are rigorously tested with the entire platform—without the need to retrofit them internally with each new release.
Hundreds of the world’s largest brands rely on OpenStack to help them move faster while lowering costs. OpenStack enables software-defined business environments for application developers and
architects with a single platform for application development and deployment with a stable and well-documented API, powerful yet simple SDKs, a global network of public clouds, and a thriving community.
This guide for application developers, architects and deployers was written by your business peers who develop for OpenStack clouds today. The guide includes recommendations on approaches, software development and migration patterns, OpenStack tools and services, and a planning checklist for moving applications from a public cloud to a private cloud. Discover why the world runs on OpenStack (https://www.openstack.org/user-stories/).
Kickstart your journey—it begins with applications.
Comparisons of Traditional and Cloud-native Applications
**Architecture comparisons**
Historically, data center applications were designed for deployment on a specific physical server and operating system. This approach often led to server sprawl, underutilization of compute resources, operational complexity, and high capital costs. Deployment of traditional physical infrastructures was also time consuming and inflexible.
Virtualization of traditional applications initially addressed some of these issues. But now modern businesses require new levels of agility, flexibility and ease of management. The ability to pool and dynamically share cloud-based IT resources is necessary to allow developers to define and allocate infrastructure and resources based on workload needs. Using a cloud environment and designing cloud-centric workloads helps deliver new levels of agility, innovation and efficiency to maximize business value.
**Characteristics of traditional applications**
Traditional enterprise workloads or applications are often constructed in a three-tier architecture with a monolithic design. Most often, all of the application’s functional components are bundled together. This form of application is the simplest and easiest to develop, test and deploy. However, when the application grows larger, the code becomes difficult to refactor because modules can be extensively dependent on each other. This approach often requires long-term commitments to a specific technology stack.
Scalability and high availability are often built into the underlying physical infrastructure, rather than being designed into the application itself. Hardware load balancers typically manage network traffic between and across the layers for performance and scalability. High availability and redundancy is usually provided by proprietary and expensive physical data center hardware. High performance, fast response times, and low latency are often delivered by close physical proximity or via expensive hardware solutions.
**Characteristics of cloud applications**
Cloud-native applications are designed to run in a software-defined infrastructure, where scalability, load balancing, high availability, and resiliency are delivered as part of the applications’ architectural design. And advanced functionality is provided by software rather than expensive hardware alternatives.
These applications are resilient to failure, meaning the application can gracefully handle service interruptions caused by a physical infrastructure failure. The application can adapt to latency issues and is independent of the geographical locations.
Cloud-native applications are designed to scale on-demand. The fine-grained modularity and composable nature of cloud applications allows systems to scale specific features independently and isolate failures. These characteristics also enable developers to refine a part of the application without a complete re-write.
Figure 1 provides a high-level overview of the differences between applications deployed in virtualized and cloud-based environments.
**Figure 1: Virtualized vs. cloud application environments**
The following table summarizes the characteristics of traditional and cloud-ready applications.
<table>
<thead>
<tr>
<th>Traditional</th>
<th>Cloud-ready or cloud-native</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Monolithic and static<br>• Tightly coupled<br>• Specific and dedicated<br>• Stateful<br>• Powerful, complex, reliable hardware and software<br>• Expensive, proprietary<br>• Designed for never-break<br>• Scale-up</td>
<td>• Highly modular<br>• Loosely coupled<br>• Stateless<br>• Distributed architecture<br>• Inexpensive, commodity hardware<br>• APIs for services<br>• Designed for failure<br>• Designed for automatic scaling<br>• Scale-out</td>
</tr>
</tbody>
</table>
Cloud benefits for migrated applications
BUSINESS BENEFITS
Organizations of all sizes must respond quickly to changing market and competitive landscapes. They must also improve efficiency, deliver innovation, and rebalance or reduce costs. Designing new cloud applications and migrating existing workloads to the cloud can help address all of these challenges.
However, it's important to note that delivering the benefits of a cloud-based strategy is not simply a matter of technology. Most businesses will also have to make significant changes to processes and cultures within their organizations.
TECHNICAL BENEFITS
OpenStack cloud provides a software-defined infrastructure for faster response times and greater agility. Developers and operations teams have self-service, on-demand access to compute, network and storage resources. This improves efficiency, removes IT bottlenecks, and delivers time to market advantages. Advanced functionality is also available via software instead of expensive proprietary hardware alternatives, delivering substantial cost reductions. Once an application is migrated to cloud-based architecture, it is possible to refactor a part of the system and allow each sub-component to scale independently.
Application Migration to the Cloud: Planning and Patterns
Application migration plan to the cloud
We recommend that application architects and developers collaborate on a detailed, iterative migration plan for each application. The plan should be based on the best-fit migration patterns discussed in the table below, selected after finalizing the services to migrate. Communicate the plan and the decisions often, with both technical and business stakeholders.
An application migration plan describes the context, objectives and challenges of the migration in addition to scenarios of how the applications will be used in the cloud. The main part of the plan should consist of coarse- and fine-grained migration paths (step-by-step, component-by-component) to the cloud. Migration paths ease planning and communications with stakeholders, project managers and cloud service providers. The plan should describe cloud migration patterns and the sequence to transform the overall system architecture and the application. Each step is identified by decomposing and rearranging multi-tier application services and combining them into groups of service components on the cloud. The integration of cloud services and migration objectives should also be considered. To help with reviews and validation, before and after architecture definitions and descriptions should be included in the migration plan.
Options for migrating applications to the cloud
An OpenStack cloud is ideal for developing and running new agile and innovative cloud-native workloads. Traditional applications can be transformed as well, offering cloud benefits to new and existing applications. Here are three application migration and deployment patterns to consider when preparing your plan.
**CLOUD-HOSTED**
Cloud hosting usually refers to deploying a traditional or conventional workload on the cloud while leaving the actual application and its architecture largely untouched. Applications designed for deployment on bare metal, physical servers, or virtualized environments, can be hosted on an OpenStack cloud without modification.
**CLOUD-OPTIMIZED**
In a cloud-optimized pattern, application elements can be modified to take advantage of cloud computing by utilizing software-defined capabilities to begin to reduce costs or improve efficiency, scalability, performance and functionality.
For example, a cloud storage system can be used instead of a traditional network file system. Or a cloud-ready database might be used in place of a traditional relational database. This approach can deliver improved functionality without redesigning the application.
**CLOUD-NATIVE**
Cloud-native applications are designed from the ground up to be deployed on a cloud-based environment. They can be extended as desired and are robustly built to protect against system failures and overload.
Traditional workloads can be transformed into cloud-native applications, if radically redesigned. This redesign work would need to be justified by measurable benefits such as availability and uptime improvements, improved scalability and flexibility, lower cost and TCO, or improved ROI.
Migrating existing applications to the cloud can be challenging and might require significant efforts, depending on:
- The overall application, system architecture, and structure.
- The dependency of the application on technologies used in the underlying infrastructure.
- Additional services integration required by the application’s use case.
- Non-functional requirements or situational context of the application such as performance, availability, security, interoperability and legal regulations.
The best process for migration depends on the level of cloud-readiness or cloud-maturity of any given application. Migration pattern options include:
- A simple “lift & shift” approach.
- Partial deconstruction and restructuring of the application.
- Complete redevelopment of the application to a cloud-native approach.
The migration approach you choose will depend on the size and complexity of the workload or application. The process might include a sequence of steps, with gradually executed sections of modernization and deployment roll-outs. In some cases, the properties of the application need to be preserved, changed or enhanced.
This table compares potential application migration patterns to an OpenStack cloud in greater detail, including considerations dictated by the application’s structure and architecture.
<table>
<thead>
<tr>
<th>Patterns</th>
<th>Methodologies</th>
<th>Advantages</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cloud-hosted</td>
<td>Lift & shift, re-hosting. Move applications to cloud without changes in the compute model.</td>
<td>Make use of elastic cloud resources without changing application architecture.</td>
<td>Scalability only on application level. Benefitting from shorter re-deployment time. Application itself may remain a single point of failure.</td>
</tr>
<tr>
<td>Cloud-optimized</td>
<td>Cloudification, relocation, replacement. One or more components of the application are replaced with a cloud service rather than redeveloping the architecture of the application. For example: • Enrich the application with OpenStack cloud services such as the Trove DBaaS. • Replace specific storage services with elastic storage systems such as Swift object storage. In some cases, the application may stay on the former platform but uses services from the OpenStack cloud platform.</td>
<td>Optimizing applications for the cloud by using OpenStack cloud services: • Can increase availability, resilience, performance, speed of re-deployment. • Reduces capex. • Reduces time to market. Since a component is replaced by cloud services, no re-development efforts are needed.</td>
<td>New components and cloud services introduce APIs or protocols which were not used before. These might entail: • Modification costs. • Require training. • Added complexity. Most organizations feel the cloud and application modernization benefits more than outweigh the costs.</td>
</tr>
<tr>
<td>Cloud-native</td>
<td>Refactoring, modernization. Use the cloud to provide improved performance, scalability and elasticity to an application. A usage evaluation of the components of a static or monolithic application is recommended. Complete rewrite of the application into a cloud-native architecture.</td>
<td>• Optimal scalability, performance and availability. • Offers agile responsiveness to changing IT and business demands.</td>
<td>Resiliency and availability are built into the application. Provides horizontal scalability and elasticity.</td>
</tr>
</tbody>
</table>
**Application architecture considerations**
This section discusses the evolution of mainstream application architecture design, including architectures and software design patterns to consider when planning your application cloud migration.
**MONOLITHIC ARCHITECTURE**
The term monolithic architecture refers to applications that tightly couple the presentation code, business logic, and data access tiers. This architecture was the predominant application pattern over the last two decades. Most monolithic applications are isolated and deployed on their own host or virtual machine (VM). The benefit of this architecture is that most of the fault-tolerance is assumed to be in the infrastructure or systems layer: application developers don't have to handle partitions and other availability constructs because it is assumed that all necessary data and services are available. The disadvantage is that the code base for the application is usually large and difficult to understand in its entirety.
**SERVICE-ORIENTED ARCHITECTURE**
Service-oriented applications were the precursor to, and resemble, microservices application patterns. In SOA, application services or components perform a specific function, such as executing business logic or extract, transform and load (ETL) processes. They can communicate with one another over the network to invoke necessary application functionality. SOA services are accessible to other application components through a communication protocol such as remote procedure call (RPC) or extensible markup language (XML) while still providing a single larger function. Each service is autonomous and independent but can still call the others when necessary.
**MICROSERVICES ARCHITECTURE**
Several software design patterns have emerged to help solve common problems. For example, the approach to decoupling a monolithic application into functional areas is well described by Chris Richardson’s dissertation *Pattern: Microservices Architecture* ([http://microservices.io/patterns/microservices.html](http://microservices.io/patterns/microservices.html)).
Applications developed with microservices involve components that are uniquely and independently implemented and upgraded. They can be written in a mixture of languages, using different backend databases, and yet, the system can be retired gracefully. Microservices enable developers to make their own technology choices such as integrated development environments (IDE) and software testing solutions.
**STATEFUL VS. STATELESS**
It is important to understand the differences between stateful and stateless applications when moving applications to the cloud. In stateless applications, there is no state (session information, a listing of open files, etc.) for the server to manage. The client can retry requests because all the information needed is included in the request, or the function doesn't require inputs. For example, an HTTP request is a stateless exchange because all the information needed for the specific action is contained in the URL and the data sent with a POST.
Stateful applications generally rely on some information to be stored on the server side, which requires the client to communicate with the same server once a session has been established. An example of a stateful application is a Java® HttpSession, which provides a way to identify a user across multiple pages. The session is created between the client and server, requiring the client to communicate with the same server to take advantage of this capability.
In general, moving stateless services into the cloud is considerably easier than moving stateful services.
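To make the distinction concrete, here is a minimal, illustrative Python sketch (the function names and the in-memory session store are ours, not from any OpenStack service): the stateless handler carries everything it needs in the request, while the stateful one depends on data held by a particular server instance.

```python
# Stateless: every input arrives with the request, so any instance can
# serve any call and clients can safely retry.
def thumbnail_url(image_name: str, width: int, height: int) -> str:
    return f"https://cdn.example.com/{image_name}?w={width}&h={height}"

# Stateful: the handler depends on server-side session data, so the client
# must keep reaching the instance that holds this dictionary (or the state
# must be moved to shared storage before the service can scale out).
_sessions = {}  # session_id -> session data (illustrative in-memory store)

def add_to_cart(session_id: str, item: str) -> dict:
    cart = _sessions.setdefault(session_id, {"items": []})
    cart["items"].append(item)
    return cart
```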
**CAP THEOREM**
As presented by Professor Eric Brewer in 2000 ([http://dl.acm.org/citation.cfm?id=564601](http://dl.acm.org/citation.cfm?id=564601)), it is desirable to have Consistency, Availability and Partition-Tolerance when creating a distributed system. However, it is impossible to guarantee all three simultaneously. Based on the CAP theorem, and as evidenced by many large-scale internet systems, a system cannot remain both consistent and available when a network partition occurs in a distributed environment.
- **Consistency**: All nodes in a distributed system are guaranteed to return the most recent data for a given client request at all times.
- **Availability**: Every client request is guaranteed to receive a response in a distributed system.
- **Partition-Tolerance**: The system continues to operate when a network partition occurs (i.e. loss of network communication between the distributed system’s processing nodes).
In large-scale distributed computing, such as cloud computing, network failures are not uncommon. Networks break down frequently and unexpectedly. Given that network connectivity is not 100 percent reliable, a cloud application system must be designed to tolerate a network partition condition (Partition-Tolerance). A cloud application design has to sacrifice one of the two remaining properties: Consistency or Availability. This results in either a CP (Consistency + Partition-Tolerance) or AP (Availability + Partition-Tolerance) cloud application.
**CP (Consistency + Partition-Tolerance)**: A system is designed to choose consistency over availability. When the system enters partition mode, a portion of the affected nodes becomes unavailable and either returns errors or timeouts to client requests. The nodes resume activity only after the system has recovered from partitioning and the data is synchronized.
**AP (Availability + Partition-Tolerance)**: A system is designed to choose availability over consistency. When the system encounters a partition event, all nodes continue to operate and respond to client requests by returning the most recent version of the data, even if it is inconsistent. The data will eventually become consistent after the system is recovered from partitioning.
When designing a cloud-native application, you have to choose one property over the other in the event of a network partition (CP or AP). Building a cloud-native application involves multiple choices on architectural design. A CP design may be selected if an application mandates atomic operations. On the other hand, an AP design might be preferable if an application can be built around the concept of eventual consistency, and service availability is more critical to the business. It is important to note that a different design can be selected for different types of operations depending on your business needs and scaling requirements. Note that Eric Brewer revisited his original conjecture in 2012 in “CAP Twelve Years Later: How the ‘Rules’ Have Changed” (https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed).
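The trade-off can be sketched in a few lines of illustrative Python (the replica model here is ours, not from the CAP papers): a CP-style read refuses to answer without a quorum, while an AP-style read answers from whatever replica it can reach, possibly with stale data.

```python
class PartitionError(Exception):
    pass

def cp_read(replicas, quorum):
    """Consistency over availability: fail when a read quorum is unreachable."""
    reachable = [r for r in replicas if r["up"]]
    if len(reachable) < quorum:
        raise PartitionError("quorum lost; refusing a possibly stale read")
    return max(reachable, key=lambda r: r["version"])["value"]

def ap_read(replicas):
    """Availability over consistency: answer from any reachable replica."""
    for r in replicas:
        if r["up"]:
            return r["value"]  # may be stale; reconciled after the partition heals
    raise PartitionError("no replica reachable")

# One replica (holding the newest write) is cut off by a partition.
replicas = [
    {"up": False, "version": 7, "value": "new"},
    {"up": True,  "version": 6, "value": "old"},
    {"up": True,  "version": 6, "value": "old"},
]
print(ap_read(replicas))        # "old": available, but not the latest write
# cp_read(replicas, quorum=3)   # raises PartitionError: consistent, but unavailable
```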
In this chapter, we discussed strategies for migrating applications onto an OpenStack cloud. Three common migration paths were described: cloud-hosted, cloud-optimized and cloud-native. Each migration option provides different advantages depending on the application’s needs and architecture. Guidance for a step-by-step migration plan was offered. We highlighted a few important architectural styles and factors to consider while planning the migration. Other technologies, such as containers (discussed in a later chapter), the 12-factor app methodology (https://12factor.net/), CI/CD, and DevOps, are also worth considering when migrating applications into the cloud.
Cloud Infrastructure Considerations for Application Developers
Deploying to private, public or hybrid clouds
Several considerations must be addressed before determining the deployment model that best fits your application. Start by assessing your organization’s regulatory compliance needs, financial resources, preferred spending model, users’ geographic locations, and predictability of demand. Not all cloud models can equally address these requirements. These factors provide an initial guide to determining if a private, public or hybrid model best meets your application needs.
Private cloud may be the best option for applications with stringent regulatory requirements that dictate how data is handled and where it resides, or handling intellectual property or proprietary information. If the goal is cost-driven, maximum utilization of a private cloud may increase your return on investment. A private cloud also offers the ability to collaborate and discover new advancements that can be applied to your business needs and your teams’ professional development. Examples of applications that are a good fit for private clouds include financial applications, enterprise applications and big data.
A public cloud might be your best option during the initial application deployment when capex and physical infrastructure scalability are your main concerns. Many comprehensive resources are readily available from public cloud providers, including those running OpenStack, to help developers rapidly come up to speed. The total cost of consuming the cloud is more transparent because it is directly billed by the cloud provider. However, it is important to pay attention to costs beyond data and compute, including bandwidth, IP addresses, and disk I/O.
It is essential to be aware of the resource utilization (idling resources or right sizing) and cost forecasting for your applications. For example, you should know how much capacity your application uses as compared to how much is allocated to predict and control costs on a public cloud. Public clouds may also offer more unique or optimized capabilities such as GPU rendering or large storage. Enterprises often use public clouds to temporarily add resources to their private cloud during peak business times (also known as bursting applications), which is common for websites, eCommerce, and marketing campaign applications.
Many enterprises are realizing how a hybrid cloud can offer the best of both private and public cloud in a single strategy. Understanding how to use both private and public clouds, and the cost effectiveness of each, provides the most flexibility, scalability and agility. For example, hybrid cloud applications are ideal for stable applications that need to sporadically scale to handle extra workload
or reach users who are far away from your private cloud. As a developer, it’s important to remember that applications might need to be portable and independent of private or internal systems. For an organization, hybrid cloud computing also provides the option to switch providers if the cost of a private or public cloud becomes untenable for the business.
**Changes in team responsibilities for modern applications**
Historically, on-premises datacenter infrastructure and support teams delivered key application features such as security, availability (uptime), capacity, and user response speed. Application teams deployed to this infrastructure to take advantage of these business-critical features. For cloud applications, these capabilities become the responsibility of the application teams.
Application teams are responsible for managing availability needed for a positive user experience. This includes providing the means for:
- **Load balancing:** Distributing requests across a cluster of application instances, starting additional instances as needed. OpenStack LBaaS v2 (a Neutron service) provides an easy way for developers to accomplish this.
- **Scalability:** Increasing compute and storage resources dynamically. Together, OpenStack Orchestration (Heat) and the Telemetry service (Ceilometer) allow transparent auto-scaling based on CPU, memory, and disk usage.
- **Failure handling:** Detect application failure and restart the application without losing data or negatively affecting the user. For example, in case of an outage, the application should retry the user request on another running application instance instead of presenting an error message to the user (see the retry sketch after this list).
- **Disaster recovery:** The standard APIs supported on all OpenStack Powered platforms allow easy migration to a secondary provider if needed.
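As a rough illustration of the failure-handling point above, the sketch below retries a request against a second application instance before surfacing an error. The instance addresses are placeholders; a production version would typically sit behind the LBaaS load balancer mentioned above rather than hard-coding endpoints.

```python
import time
import urllib.error
import urllib.request

# Placeholder addresses of two running application instances.
INSTANCES = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]

def fetch_with_failover(path: str, retries_per_instance: int = 2) -> bytes:
    last_error = None
    for base in INSTANCES:
        for attempt in range(retries_per_instance):
            try:
                with urllib.request.urlopen(base + path, timeout=5) as resp:
                    return resp.read()
            except urllib.error.URLError as exc:
                last_error = exc
                time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"all instances failed: {last_error}")
```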
In a traditional computing environment, the underlying infrastructure ensures security and trust. For cloud-native applications, the developer might need to implement methods to control who can access the application and data, what can be done with them, and how information is captured. The topic of authentication, authorization and auditing requires the attention of the application developers. It is also important to ensure data integrity over the full application lifecycle. Refer to the OpenStack Security Guide [https://docs.openstack.org/security-guide/](https://docs.openstack.org/security-guide/) for additional information.
OpenStack APIs and SDKs for Application Developers
In a traditional environment, a system administrator provisions the infrastructure (e.g. bare metal servers, virtual machines) before it can run an application. If an application requires a feature that needs to store persistent files, a storage file system must be provisioned by the system administrator upfront. In an OpenStack cloud-based environment, the application programming interfaces (APIs) allow an application to perform advanced on-demand infrastructure provisioning as part of its logic. Depending on the use case, the application can dynamically configure the software to run computational tasks based on user input parameters. This API-driven cloud-based model provides a mechanism for an application to dynamically control infrastructure in a way that is not possible with the traditional infrastructure model. The OpenStack community provides a comprehensive set of developer resources and tools (https://developer.openstack.org/), including the details of the APIs and OpenStack SDKs, to help you write your first application, public and private development environments, reference architectures and more.
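For example, an application can request a new instance at runtime with a few lines of the Shade SDK. In this hedged sketch, the cloud name, image, and flavor are assumptions that must match an entry in your clouds.yaml and your cloud’s catalog.

```python
import shade

# Reads credentials for the named cloud from clouds.yaml (name is assumed).
cloud = shade.openstack_cloud(cloud='mycloud')

server = cloud.create_server(
    name='on-demand-worker',
    image='ubuntu-16.04',   # assumed image name
    flavor='m1.small',      # assumed flavor name
    wait=True,              # block until the instance is ACTIVE
    auto_ip=True,           # attach a floating IP if the cloud requires one
)
print(server.status)
```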
Application communication with OpenStack
INTERACTING WITH OPENSTACK SERVICES THROUGH OPENSTACK SDKS
Leveraging OpenStack services in your application is fundamental to controlling cloud resources throughout the complete lifecycle of provisioning, management, monitoring and reclamation.
OpenStack SDKs (https://developer.openstack.org/) are a set of software development kits for OpenStack that help cloud application developers write applications easily and quickly in their preferred programming language. By using OpenStack SDKs, developers can empower their applications to interact directly with cloud services through a language-level API, which follows the programming language standards selected for their applications.
AVAILABLE SDKS
SDKs are available in a variety of programming languages such as Python, Java, Ruby, Go, PHP and JavaScript. In terms of maturity, not all of them support the full set of OpenStack services, features, and versions. SDKs are not all maintained by the OpenStack community. They might be supported by an external community. Some SDKs, such as Shade (https://docs.openstack.org/infra/shade/), work across all OpenStack clouds, while others, such as fog (http://fog.io/), attempt to work across multiple cloud providers. For more information about available SDKs, visit the OpenStack SDK wiki. (https://wiki.openstack.org/wiki/SDKs)
The OpenStack User Committee is actively engaged in tracking and encouraging improvements to SDKs to enable application developers to write applications across OpenStack clouds. You can follow the tracking and maturity of SDKs by becoming involved with the User Committee’s Working Groups (https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee).
**How to choose an SDK**
To select the best SDK for your application, determine what it needs from a cloud. Answer the following questions to find the SDK that best fits your application requirements:
- What are the programming languages used in your application?
- What services (compute, storage, etc.) does or will your application consume from the cloud?
- Will your application run in a hybrid cloud model such as a combination of OpenStack private and public clouds, AWS, Google Cloud Platform or Azure?
- After reviewing the maturity of the SDK and documentation to understand the completeness of features, does the SDK meet the application requirements?
Here are some of the most widely used SDKs:
**Shade** (Python): Shade is a simple client library for interacting with OpenStack clouds developed by the OpenStack Infrastructure team. Shade works across all modern OpenStack clouds and provides the most regularly-used services. [http://docs.openstack.org/infra/shade/](http://docs.openstack.org/infra/shade/)
**Apache jclouds** (Java): The Apache jclouds SDK is a multi-cloud toolkit that allows you to use portable abstractions or cloud-specific features for multiple clouds, including OpenStack, AWS, and Azure. [https://jclouds.apache.org/](https://jclouds.apache.org/)
**PHP OpenCloud** (PHP): The PHP OpenCloud SDK enables PHP developers to easily connect to OpenStack APIs in a simple and idiomatic way. [https://github.com/php-opencloud/openstack](https://github.com/php-opencloud/openstack)
**Gophercloud** (Go): Gophercloud is an open source library for working with OpenStack clouds in golang. [http://gophercloud.io/](http://gophercloud.io/)
**Fog** (Ruby): Fog is a multi-cloud services library that provides a simplified interface, making clouds easier to work with and switch between. [http://fog.io/](http://fog.io/)
**Adapting APIs as part of your development/application logic**
Every OpenStack service presents an API, such as Nova (compute), Cinder (block storage), Keystone (identity service), and Swift (object storage). Generally, OpenStack SDKs support a variety of OpenStack services and interact with the services’ APIs. Using an SDK (or lower-level methodologies), application developers can integrate these APIs into their application design. This powerful approach is unavailable to applications built for the traditional infrastructure model.
CLOUD STORAGE FOR APPLICATIONS
Swift, the OpenStack object storage service, can be used to serve large amounts of static data, such as image files, documents, and videos, through the standard HTTP protocol without involving your application’s web servers. As a developer, your application interacts with the service by using the object and storage container APIs (https://docs.openstack.org/developer/swift/api/object_api_v1_overview.html). An object represents any static data. Storage containers are used to group those objects. Swift can also be used as a backup solution. In addition, other cloud technologies, such as the Docker registry (https://docs.docker.com/registry/storage-drivers/swift/), support storage plugins for Swift object storage.
You can easily use the APIs to configure your application architecture based on ever-changing user expectations. In the next section, you will find useful examples for binding storage into your application through basic API calls to the storage services.
Figure 2 shows how a sample photo album application uses the Swift API to upload and store images in an OpenStack cloud. The web photo album displays those images from the cloud. The code sample (https://github.com/MBonell/openstack-sdks-challenges/tree/master/shade/swift/photo-album) uses Shade (https://docs.openstack.org/infra/shade/) (Python) as the SDK; a minimal sketch of the uploader step appears after the figure description.
Figure 2: Example application using the Shade SDK to communicate with the Swift API
- UPLOADER SCRIPT
- Create a public storage container called "my-pets".
- Select the images to upload by specifying their location on your system.
- Upload the images to the "my-pets" container.
- PHOTO ALBUM BACKEND
- Get the container public URL to access the images available in "my-pets". The content is represented in XML format. Each image is accessible using the URL provided by the Swift API.
- From the photo album backend, using the XML as source, automate the generation of these URLs (container public URL + image name).
- PHOTO ALBUM FRONTEND
- In the frontend, use the generated URLs in the <img> HTML tag to display your images.
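A minimal version of the uploader step might look like the following hedged sketch. The container name mirrors the figure; the cloud name and local file paths are assumptions.

```python
import shade

cloud = shade.openstack_cloud(cloud='mycloud')  # assumed clouds.yaml entry

# 1. Create a public container so the frontend can link to objects directly.
cloud.create_container('my-pets', public=True)

# 2. Upload local image files (paths are illustrative) as objects.
for path in ['pets/dog.jpg', 'pets/cat.jpg']:
    cloud.create_object('my-pets', path.split('/')[-1], filename=path)
```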
**DYNAMIC COMPUTATION FOR APPLICATIONS**
Some applications require dynamic execution of computational tasks such as batch processing or media transcoding. By using an SDK and interacting with the Nova API, developers can build applications that spin up compute instances during runtime to perform computational or job processing tasks on the fly.
The following example shows how an encoder application uses the Nova API to launch transcoding workers to convert media into different formats. The code sample ([https://github.com/MBonell/openstack-sdks-challenges/tree/master/gophercloud/nova/encoder](https://github.com/MBonell/openstack-sdks-challenges/tree/master/gophercloud/nova/encoder)) uses Gophercloud ([https://github.com/gophercloud/gophercloud](https://github.com/gophercloud/gophercloud)) (Go) as the SDK and ffmpeg ([https://ffmpeg.org/](https://ffmpeg.org/)) to transcode the video files. A simplified Python sketch of the worker steps appears after the figure description.
**Figure 3: Example application using the Gophercloud SDK to communicate with the Nova API**
<table>
<thead>
<tr>
<th>Video Encoder</th>
<th>This diagram shows how the video encoder application uses the Nova API to launch transcoding workers that convert media files into different formats as needed.</th>
</tr>
</thead>
</table>
01 WORKER CREATION
Set the infrastructure variables for the worker (flavor, image, network and security group - Nova API), the original video file stored in the cloud (Swift API), and the format to encode it to (MP4, MPG, WEBM).
02 WORKER INITIALIZATION
Once the worker instance is ready (Nova API), the cloud-init script prepares the worker: it updates software dependencies and installs ffmpeg.
03 WORKER EXECUTION
The worker instance downloads the original video from the cloud using Swift API.
The transcoding task converts the video into the specified format using ffmpeg.
The encoded video is uploaded back to the cloud using Swift API.
*Note: The worker instance is terminated after the task is completed.*
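For comparison with the Go sample, a hedged Python/Shade version of the worker-creation and initialization steps might look as follows. The image, flavor, network, and cloud-init payload are all assumptions, not part of the published sample.

```python
import shade

cloud = shade.openstack_cloud(cloud='mycloud')   # assumed clouds.yaml entry

# Step 02 in the figure: cloud-init installs ffmpeg and starts the job.
CLOUD_INIT = """#cloud-config
packages:
  - ffmpeg
runcmd:
  - /opt/encoder/run-job.sh   # hypothetical worker entry point
"""

worker = cloud.create_server(
    name='transcode-worker-01',
    image='ubuntu-16.04',        # assumed image
    flavor='m1.medium',          # assumed flavor
    network='app-net',           # assumed tenant network
    userdata=CLOUD_INIT,
    wait=True,
)

# Final step in the figure: reclaim the worker once the job has completed.
cloud.delete_server(worker.id)
```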
OpenStack Services to Support Application Development
OpenStack offers managed services that greatly improve the application development experience. They abstract the provisioning of dependent services through offerings such as Database-as-a-Service, Messaging-as-a-Service and orchestration. These services enable enterprise application developers to provide self-service access to the services or to templatize a common development platform to streamline the development process.
**Database-as-a-Service (DBaaS)**
For application developers who want to leverage multiple database technologies, the complexities can be overwhelming. A properly tuned and updated database system is essential for applications that store crucial data. Provisioning and operating database systems with little tolerance for errors and misconfigurations is of paramount importance for application development.
OpenStack Database Service (Trove) provides a framework for successfully operating and provisioning a number of database technologies using best practices. Trove provides a consistent API for all commonly used operations throughout the database lifecycle from provisioning to configuration, tuning, operating, updating, backup, and more. Trove supports SQL databases such as PostgreSQL, MySQL, Percona, Percona XtraDB Cluster, MariaDB, DB2-Express, and Vertica, as well as NoSQL databases such as MongoDB, Cassandra, Couchbase, Redis and CouchDB.
Trove empowers application developers to provision these database technologies easily without knowing specific details of each. By providing close integration with OpenStack services such as Block Storage (Cinder) and Compute (Nova), operators can create resource pools and associate specific resources with development, QA and production systems. This advantage helps application developers to easily develop applications in a cost-effective environment, and be confident that when the database is moved to production, it will be operated in the same way. Trove also helps ensure a consistent database experience across a multi-region environment for application development.
**Orchestration Service**
OpenStack Orchestration (Heat) is a cloud orchestration engine that provides a template-based mechanism for launching a complete cloud application—from operating system through the user-facing presentation tier—on an OpenStack cloud. Heat offers features such as auto-scaling and nested application stacks.
Heat allows developers to describe the infrastructure and software components necessary for a cloud application in an easily readable text file that can be treated like code and version-controlled. Resources that can be described in Heat include OpenStack infrastructure components such as users, security key pairs, servers through Nova compute, disk attachment through Cinder volumes, firewall rules through security groups, or network configuration through Neutron networking. It also allows developers to define software components such as application configurations (scripts, Puppet manifests, Chef recipes) and software dependencies. It helps developers define composable and reusable software components. Software components can be defined once and deployed on multiple instances.
Heat supports implicit and explicit dependencies. An implicit dependency is automatically enforced when a property or input of a software deployment is obtained from the attribute or output of another deployment. Timing dependencies can also be specified. For example, a database component must be up and running before an application component can connect to it. Developers can simplify debugging and troubleshooting by creating and sharing common Heat templates with other developers to provision consistent applications.
Although it is possible to define all infrastructure and software definitions into a single template file, it is recommended to split the definitions for large and complex scenarios into multiple Heat files. Heat offers nested stacks to support this solution, making the templates reusable and easier to read and consume.
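As a hedged sketch (assuming a Shade release with orchestration support, and a HOT template file of our own naming on disk), launching such a stack from Python can be as simple as:

```python
import shade

cloud = shade.openstack_cloud(cloud='mycloud')   # assumed clouds.yaml entry

stack = cloud.create_stack(
    'photo-album',                      # stack name (illustrative)
    template_file='photo-album.yaml',   # assumed local HOT template
    wait=True,
    rollback=True,                      # undo partial creation on failure
    # Remaining keyword arguments become template parameters (assumed names):
    image='ubuntu-16.04',
    flavor='m1.small',
)
print(stack['stack_status'])            # e.g. CREATE_COMPLETE
```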
Heat provides an auto-scaling feature that integrates with the OpenStack Telemetry engine (Ceilometer) so the application can automatically scale up and down. When a resource is defined in a scaling group, Heat automatically creates and configures the required software components in the correct order based on the scaling definition specified by the developers. It automatically re-balances the resources when the load decreases.
Heat also supports application lifecycle management. You can update and change your infrastructure and software deployment by updating an existing stack with a modified template. Heat will automatically make the necessary changes to the system based on your new definitions.
Heat allows developers and operators to collaborate in an agile DevOps environment. It enables the team to define infrastructure as code and integrate it as part of the application development lifecycle. This approach improves infrastructure proximity across various software development lifecycle stages (Development, QA, Production), reduces application deployment issues and concerns, and increases software quality.
Application Catalog Service
Murano is the OpenStack application catalog service that enables users to browse and find applications available to deploy on the cloud. It is designed to provide an integrated and automated
turnkey deployment solution for cloud applications. Murano also allows application developers to compose reliable application environments and maintain the application service consistency across cloud projects.
Murano simplifies cloud application deployment, and manages application lifecycles to support different service workloads ranging from common use cases to complex, large-scale, and distributed applications. The catalog service is fully integrated with OpenStack services including Identity, Orchestration and Dashboard so users can generate complete applications with all the compute, storage, networking, and application resources in place.
One use case example involves launching a LAMP (Linux, Apache, MySQL, PHP) stack on the cloud. The required application packages for the LAMP service bundle are developed and stored in Murano. Cloud users can take advantage of those pre-defined packages to automatically deploy web services with a couple of clicks, using the OpenStack Dashboard or Murano command line interface (CLI).
Murano also assists in DevOps collaboration between developers and operators by generating visibility and transparency across various deployment cycles.
**Messaging-as-a-Service**
Zaqar is an OpenStack project that provides a multi-tenant cloud messaging and notification service. Developers use Zaqar within applications to communicate with end users or other software agents. Typical use cases include event broadcasting, task distribution, point-to-point messaging or email notification. Zaqar provides two communication protocols: an HTTP-based RESTful API and a WebSocket-based API. The HTTP-based API is firewall-friendly and provides simple request/response-style communication. The WebSocket API provides communication over a persistent connection and transfers multiple request/response exchanges over a single TCP connection, which potentially reduces network traffic and minimizes delays.
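As a rough sketch of the HTTP-based interface (paths follow the Zaqar v2 API as we understand it; the endpoint, token, and queue name are placeholders to verify against your deployment):

```python
import json
import uuid
import urllib.request

ZAQAR = "http://zaqar.example.com:8888/v2"   # placeholder endpoint
HEADERS = {
    "Client-ID": str(uuid.uuid4()),          # client identifier required by v2
    "X-Auth-Token": "<keystone-token>",      # placeholder credential
    "Content-Type": "application/json",
}

def call(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(ZAQAR + path, data=data,
                                 headers=HEADERS, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.status

call("PUT", "/queues/email-notifications")   # create the queue (idempotent)
call("POST", "/queues/email-notifications/messages",
     {"messages": [{"ttl": 300, "body": {"user": "42", "event": "signup"}}]})
```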
As OpenStack continues to evolve and expand, more innovative services are developed and added such as Magnum (Container orchestration-as-a-service), Manila (File system-as-a-service), Barbican (Key management-as-a-service), and more. The OpenStack Project Navigator (https://www.openstack.org/software/project-navigator/) provides useful insights such as maturity characteristics on OpenStack projects.
Containerizing Applications on OpenStack
Virtual machines and containers
In a traditional virtualized environment, a VM includes a full operating system and often runs on a specific hypervisor such as KVM, Xen, Hyper-V or ESXi. To write and deploy applications for a virtualized environment, the entire application is packaged, including the binaries, libraries, and operating system, during the deployment process. Virtualization makes efficient use of hardware resources and provides a high degree of isolation. However, a VM is heavyweight, which makes it challenging to maintain environment parity across development, testing and production.
A container runs as an isolated process on a host operating system. It does not include a full operating system; instead, it provides a lightweight environment by sharing kernel resources with other containers. Containers are portable and run exactly the same across development, testing and production environments. Figure 4 shows the fundamental difference between VMs and containers in any environment. An introduction to containers in OpenStack is provided in a white paper (https://www.openstack.org/assets/pdf-downloads/Containers-and-OpenStack.pdf).
Figure 4: Virtual machine and container architectures
Containerizing applications
It is not always necessary to completely redesign a whole system to fit into containers. An application can be partially containerized. The portion of the system that is long-lived or depends on large-scale private storage, such as a database, can remain on bare metal or in a VM. If developers adopt a microservices architecture design, containers are one of the best approaches to package and deploy the individual services.
A number of container technologies are available today, including Docker, LXC, LXD and rkt. Each technology provides a mechanism to package and install an application in its containers. For example, a Dockerfile specifies how a Docker container image is built, including the commands to install the application and its software dependencies, and any environment variables.
Generally, a complete system will consist of multiple containers for deployment on a cluster of machines. Container orchestration tools such as Docker Swarm, Kubernetes, and Apache Mesos are used for the deployment. While each of these technologies has its differences in architecture and terminology, they follow a common pattern: all container deployments are described in a manifest file. The orchestration tools manage the container placement, scaling, networking, discovery, updates, and other services.
Deploying and operating containers on OpenStack
There are a number of options for deploying a container cluster on an OpenStack cloud. One option is the Container Orchestration service (Magnum). It allows a user to request a Mesos, Docker Swarm, or Kubernetes cluster from the OpenStack Dashboard (Horizon user interface), CLI, or API. Magnum allows multiple container technologies to be used concurrently in OpenStack.
Figure 5: OpenStack Magnum container architecture overview
Containers started by Magnum are run on top of an OpenStack resource called a bay. Bays are collections of Nova instances created with Heat. Magnum uses Heat to orchestrate an OS image that contains Docker Swarm, Kubernetes or Mesos, and runs that image in either virtual machines or bare metal in a cluster configuration. Magnum simplifies the integration with OpenStack, and allows cloud users who can already launch cloud resources such as Nova instances, OpenStack Block Storage (Cinder) volumes or OpenStack Database Service (Trove) databases to create bays where they can start application containers.
Magnum is not the only option for deploying a container cluster on OpenStack. A number of third-party tools, such as Ansible and Terraform, can be used to deploy container clusters and automatically deploy resources using the OpenStack API. It’s up to individual organizations to choose the approach that makes the most sense for their environment.
Moving Applications from Cloud to Cloud
Cloud computing provides an array of hosting and service options to fit your overall company strategy. Sometimes a public cloud is your best option and other times your data requirements demand a private cloud. As needs converge, a hybrid solution continues to gain popularity. Developers must consider whether their applications might run on either or both. This chapter discusses considerations when moving applications from cloud to cloud and additional models such as cloud bursting. It assumes your organization has researched the financial implications and has decided to move applications between clouds. Good sources for financial analysis include, but are not limited to, OpenStack: A Business Perspective (https://www.openstack.org/assets/pdf-downloads/business-perspectives.pdf) and the 451 Research April 2016 presentation, “OpenStack Pulse: Unbiased Research on Enterprise Demand, TCO, and Market Size” (https://www.openstack.org/videos/video/openstack-pulse-unbiased-research-on-enterprise-demand-tco-and-market-size).
Decoupling applications from the public cloud
Decoupling basic web applications that are deployed on a public cloud can be straightforward. A Domain Name System (DNS) redirect and a simple file copy are generally sufficient to move a web application to a private cloud.
The situation becomes more complex if the application is leveraging the public cloud’s managed services such as DNS, elastic load balancing (ELB), databases, monitoring, security or if the application deployment is tightly coupled to the public cloud environment. Application developers should have a migration plan for transferring the data in a way that avoids application downtime. Another consideration is financial—the exit cost—where a public cloud provider charges for the data transfer out of the public cloud. For example, application owners should be aware of charges for data transfers between regions within the same public cloud.
Networking and latency concerns
When moving applications from a public cloud to a private cloud, consider the infrastructure requirements for networking, security and data. One approach is to mirror the networking architecture from the enterprise data center in the public cloud and the on-premises private cloud. A common use case is to implement custom networking configurations such as L2 adjacency. Also consider leveraging distributed computing concepts such as shared-nothing architecture to ensure
partition-tolerance and application availability. If you are accustomed to having dedicated links from the public cloud to the corporate environment, be sure to review bandwidth requirements, especially if there is a significant volume of data shared across the organization and remote offices.
**Mapping public cloud services to OpenStack services**
Public cloud providers offer managed services that application developers might leverage in their applications. When moving applications from a public to an OpenStack private cloud, a common scenario, be sure to understand the dependencies and map them to equivalent OpenStack services. The following table maps the most commonly used services for OpenStack and popular public clouds.
<table>
<thead>
<tr>
<th>Service</th>
<th>OpenStack Project</th>
<th>Amazon Web Services</th>
<th>Microsoft Azure</th>
<th>Google Cloud Platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>Virtual servers</td>
<td>Nova</td>
<td>EC2</td>
<td>Virtual Machines</td>
<td>Compute Engine</td>
</tr>
<tr>
<td>Block storage</td>
<td>Cinder</td>
<td>Elastic Block Store (EBS)</td>
<td>Disk Storage/Page Blobs</td>
<td>Persistent Disk</td>
</tr>
<tr>
<td>Object storage</td>
<td>Swift</td>
<td>S3</td>
<td>Blob Storage</td>
<td>Cloud Storage</td>
</tr>
<tr>
<td>Orchestration</td>
<td>Heat</td>
<td>CloudFormation</td>
<td>Resource Manager</td>
<td>Deployment Manager</td>
</tr>
<tr>
<td>Database</td>
<td>Trove</td>
<td>RDS</td>
<td>SQL Database</td>
<td>Cloud SQL</td>
</tr>
<tr>
<td>Messaging</td>
<td>Zaqar</td>
<td>Simple Queue Service (SQS)</td>
<td>Service Bus Queues</td>
<td>Cloud Pub/Sub</td>
</tr>
<tr>
<td>Containers</td>
<td>Magnum</td>
<td>EC2 Container Service</td>
<td>Container Service</td>
<td>Container Engine</td>
</tr>
</tbody>
</table>
For more information, visit
- **OpenStack Project Navigator** ([https://www.openstack.org/software/project-navigator/](https://www.openstack.org/software/project-navigator/))
- **Amazon Web Services** ([https://aws.amazon.com/servicecatalog/](https://aws.amazon.com/servicecatalog/)) (free account required)
- **Google Cloud Platform** ([https://cloud.google.com/free-trial/docs/map-aws-google-cloud-platform](https://cloud.google.com/free-trial/docs/map-aws-google-cloud-platform))
Data and data residence policy decisions
Whether your company decides to use a public, private or hybrid cloud model, consider the legal implications regarding data collection, storage or processing. There are likely state, national, and international laws and regulations that need to be considered to ensure legal compliance. Industry-specific regulations may also affect your plan; two examples are the Payment Card Industry Data Security Standard (https://www.pcisecuritystandards.org) (PCI DSS, global cardholder safety) and the Health Insurance Portability and Accountability Act (http://www.hhs.gov/hipaa/index.html) (HIPAA, a United States law). Consult your legal counsel to ensure your business is and remains in compliance.
Cloud bursting into public cloud from private cloud
Cloud bursting is an application deployment model in which an application workload running in a private cloud bursts into a different cloud environment to meet dynamic workload demands. Cloud bursting is typically used to handle a traffic spike tied to a seasonal demand, a news event, or special offer that drives traffic to an application for a specific time period.
Under this hybrid cloud deployment, an organization only pays for the extra compute resources when they are needed. For example, an organization might maintain the application state in the private cloud while bursting a traffic spike to on-demand public cloud resources. A hybrid model with bursting also provides an excellent environment for application performance and stress testing. Latency between the two environments is an important part of successful cloud bursting because it can affect performance of the entire application system.
Checklist: Public to Private Cloud Migration
Before making the decision to migrate your application environment to the cloud, consider reviewing this checklist.
1. Workload discovery in the public cloud:
a. Inventory all the applications in the public cloud.
b. Inventory the sizing of each application in the public cloud (cores, memory, storage on each instance) and map it to the right instance type in the private cloud.
c. Estimate the cost of running the applications in public cloud (include IaaS, data transfer cost, software licenses, and support cost).
d. List the regions where the applications are deployed (North America, Europe, Asia, etc.).
e. List the disaster recovery requirements (e.g., RTO – Recovery Time Objective and RPO – Recovery Point Objective).
f. Inventory all VPC network and security requirements.
g. Inventory all third-party software in use.
h. Inventory all the management software used to manage workloads in the public cloud (Active Directory, DNS, monitoring, deployment, backups, disaster recovery replications, ticketing and alerting systems, etc.).
i. Review the application and data dependency matrix—the application’s dependency on any other applications for connectivity, security, integration and data size.
j. Understand or benchmark the performance of the application/workload in the public cloud.
k. Understand the public cloud native service dependencies:
i. DNS service.
ii. Storage dependencies (object storage, archival, block storage, etc.).
iii. List any deployment and automation services.
iv. Map all the database services being used (include relational or NoSQL, data warehouse).
v. Understand if any notification, queuing, or email services are in use.
2. Planning the workloads for the private cloud:
a. Based on the sizing exercise in the public cloud discovery, understand the capacity needs for the workloads running in the public cloud (e.g., CPU cores, memory, storage and networking).
b. Based on the workload characteristics on the public cloud, consider cloud-native architectures such as microservices or containers.
c. Based on the regions in which the applications/workloads are currently deployed, evaluate the data center cost and colocation options.
d. Based on the native public services in use, map alternative software and technologies to remove the dependencies from the public cloud.
e. Plan proof-of-concept (POC) and performance benchmarking exercises for the selected applications/workloads in the private cloud infrastructure.
f. Complete a costing exercise including IaaS, software license and support cost, and keep in mind adjustments for operation and support costs for private cloud.
g. Understand any updates needed for IT security and audit controls (e.g., SSAE 16, ISO, FedRAMP, etc.).
h. Evaluate and plan for staff training for the private cloud deployment.
3. Testing the workloads in the private cloud:
a. Select deployment automation tools for infrastructure as code, and configuration management tools.
b. Test all workload deployments and measure deployment time.
c. Perform full performance benchmarking tests for selected workloads.
d. Optimize infrastructure resource types based on the performance test results and avoid over-provisioning.
e. Test your data migration strategy and procedures to understand the complexity and the time to complete the migration.
f. Complete a few dry runs of the migrated application to test performance.
g. Complete a security requirement evaluation for the migrated applications, then test performance again with security controls enabled.
4. Executing the migration:
a. Engage a cross-functional steering committee (e.g., Finance, Operations, Engineering) to review the strategy and plan.
b. Ensure proper communication and engagement are in place between the stakeholders and the operations team.
c. Complete all the updates to IT security and audit controls (SSAE 16, ISO, FedRAMP, etc.).
d. Develop a detailed migration plan with a schedule for each application/workload.
e. Ensure monitoring and ticketing integration systems are in place prior to going live.
f. Ensure end-to-end user acceptance testing (UAT) is done before the final cut-over from the public cloud. Maintain the public infrastructure for at least two to three weeks post-migration.
g. Ensure sufficient private cloud infrastructure capacity is in place to meet the business’s service level agreements (SLAs), and have it validated by all stakeholders.
h. Ensure continuous monitoring and optimization for the migrated application/workload.
Summary
This guide helps traditional and cloud-native application specialists prepare and execute migration plans using cloud-enabled development patterns on OpenStack clouds. OpenStack and other communities provide tooling to use the OpenStack service APIs, and to package and automate application deployment. Containers, container management systems, PaaS, and other new technologies are also integrated to help you use one programmable platform for all your infrastructure needs.
To avoid hidden costs and achieve proper application scaling, thoughtful consideration and detailed planning are required for public, private and hybrid cloud computing, containers, data migration, cloud bursting and cloud-to-cloud migrations. The checklist provides a clear path to public-to-private cloud application migration for organizations that find non-OpenStack public clouds to be expensive and inflexible.
The OpenStack community of more than 70,000 members, across over 180 countries and 650 companies, is available to answer questions and offer experience and expertise through mailing lists, Internet Relay Chat (IRC), and working groups. We always welcome new application engineers who bring new ideas to improve migration and cloud-based development.
For more information, please visit these resources.
<table>
<thead>
<tr>
<th>Resource</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>Application development on OpenStack</td>
<td><a href="https://www.openstack.org/appdev/">https://www.openstack.org/appdev/</a></td>
</tr>
<tr>
<td>Development resources for OpenStack Cloud</td>
<td><a href="https://developer.openstack.org/">https://developer.openstack.org/</a></td>
</tr>
<tr>
<td>OpenStack documentation, including the API Guide</td>
<td><a href="https://docs.openstack.org/">https://docs.openstack.org/</a></td>
</tr>
<tr>
<td>Join the OpenStack community</td>
<td><a href="https://www.openstack.org/community/">https://www.openstack.org/community/</a></td>
</tr>
<tr>
<td>User stories</td>
<td><a href="https://www.openstack.org/user-stories/">https://www.openstack.org/user-stories/</a></td>
</tr>
<tr>
<td>Join the conversations on IRC</td>
<td><a href="https://wiki.openstack.org/wiki/IRC">https://wiki.openstack.org/wiki/IRC</a></td>
</tr>
</tbody>
</table>
Join us today. Read how other organizations use OpenStack. And enjoy the open, programmable platform for your applications.
OpenStack is a registered trademark in the United States and in other countries.
Java is a registered trademark of Oracle and/or its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
All other company and product names may be trademarks of their respective owners.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/). To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/4.0/legalcode.
From BDD Scenarios to Test Case Generation
Tannaz Zameni*, Petra van den Bos*, Jan Tretmans†‡, Johan Foederer§, and Arend Rensink*
*Formal Methods and Tools, University of Twente, Enschede, The Netherlands
†Department of Software Science, Radboud University, Nijmegen, The Netherlands
‡ TNO-ESI, Eindhoven, The Netherlands
§ TOPIC Embedded Systems, Best, The Netherlands
Email: *{t.zameni, p.vandenbos, arend.rensink}@utwente.nl, †tretmans@cs.ru.nl, §johan.foederer@topic.nl
Abstract—Model-based testing (MBT) offers the possibility of automatic generation and execution of tests. However, it is not yet widely used in industry due to the difficulty of creating and maintaining models. On the other hand, Behavior Driven Development (BDD) is becoming more popular in the agile development process to achieve a common understanding of the system under development among stakeholders and to automate testing. However, BDD scenarios are written in human language and are usually not precise enough. Moreover, tests extracted from BDD scenarios are too short and incomplete; they only cover a very small part of the system. Our goal is to combine these two approaches to benefit from the usability of BDD and the test automation capabilities of MBT. In this paper, we first define a formal model of scenarios that we call BDD Transition Systems; second, we create more complete tests by composing scenarios (model composition); and finally, we generate and execute tests automatically. We demonstrate the applicability of this approach in a real-world example: an industrial printer.
Index Terms—Behavior-Driven Development, Model-Based testing, Compositional testing
I. INTRODUCTION
Modern software systems are ever-growing in size and complexity, offering an ever wider range of functionalities, and increasingly connecting to their environment. Systematic testing plays a major role in getting confidence in the quality of such systems. Software testing, however, is often an error-prone, expensive, and time-consuming process. Estimates are that testing consumes 30-50% of the total software development costs. The tendency is that the effort spent on testing is still increasing due to the continuous quest for better software quality, and the ever-growing size, complexity, and connectivity of systems. The situation is aggravated by the fact that the complexity of testing tends to grow faster than the complexity of the systems being tested, in the worst case even exponentially. This may seriously hamper the testing of future generations of software systems, implying that smarter, more effective, and more efficient testing methods are required.
a) MBT: Model-Based Testing (MBT) is one of the technologies promoted to meet these testing challenges. MBT is a form of black-box testing where the model serves as a specification for the system under test (SUT), prescribing the behaviour that the SUT shall, and shall not, exhibit. The main advantage of MBT is that it enables the automated, algorithmic generation of large amounts of valid test cases including corresponding expected results. MBT, in particular MBT using formal models, originates from research on formal methods and testing. Nowadays, a reasonable number of commercial and open-source tools for MBT are available, but, despite its solid foundations and promises of automated test generation, there is no widespread use of MBT in industry yet.
The main bottleneck that prohibits the broad application of MBT is the construction and availability of the appropriate behavioural models for MBT. Firstly, there is some reluctance against investment in creating models, as companies see this as having to develop and maintain yet another software artifact. Secondly, mastering the art of behavioural modeling requires abstract thinking, education, and experience that is not always available. Thirdly, the information necessary to construct a model, in particular for legacy, third-party, or outsourced systems or components is not always (easily) available. And last but not least, the specialized languages in which MBT models are expressed do not excel in readability and understandability for non-experts, such as product owners, customers, and other stakeholders. This complicates communication with these stakeholders, and it does not facilitate obtaining feedback and validating MBT models, i.e., getting confidence that the model really models what was intended.
b) BDD: Behaviour Driven Development (BDD) is an agile approach to software development. A key goal of BDD is to foster communication and shared understanding of what the software under development should do, among all stakeholders of the product such as developers, product owners, product analysts, testers, customers, and business developers [1, 2, 3].
In the BDD approach, three activities are distinguished: discovery, formulation, and automation. During discovery, the required behaviour of the software or feature under development is explored in structured conversations by all stakeholders involved, by constructing examples of the required behaviour. Such explorations are sometimes referred to as ‘three amigos’ sessions. In the formulation phase, the examples are documented in structured natural language, in such a way that they are understandable and shared by all stakeholders, which facilitates validation. This is also called specification by example. The most popular style to write these documented examples, called scenarios, is the Given/When/Then style used in the Gherkin language. There are other styles like Context/It in RSpec [4] and tables [5]. The documented examples are written in such a way that during the automation phase they can
be transformed into executable test cases. These can then be used to verify whether the developed software indeed satisfies the requirements documented in the scenarios. The collection of scenarios is also referred to as living documentation. Automation of Gherkin scenarios is supported by many tools, e.g., Cucumber [6] and SpecFlow [7].
Unlike MBT, BDD does not originate from research but from software engineering practice. Nowadays, many software companies use some form of BDD approach to explore, specify, and automate tests for software features.
c) BDD and MBT: Among the strong points of the BDD approach are the collaborative exploration of the requirements by making examples, the documentation of examples in scenarios expressed in structured, readable natural language, and the readability and understandability of scenario specifications by all stakeholders. The lack of such a shared understanding of readable specifications is a weak point of current MBT approaches.
On the other hand, MBT provides a solid foundation in the form of formal semantics and a well-defined testing theory, leading to algorithmic generation of many, long, diversified, and valid tests, together with test result analysis. In addition, the underlying formal theory enables reasoning about concurrency, non-determinism, model coverage, and compositionality. Most of these aspects are weak points of BDD. There is no underlying theory providing formal semantics to scenarios, and the size and number of scenario-based test cases are limited: they usually, and deliberately, test one particular aspect, not a combination of aspects or features. Concurrency, non-determinism, and model coverage are not considered. Composing scenarios is sometimes done, but in a very informal, ad-hoc, and sometimes ambiguous way; for example, two scenarios with overlapping Given-conditions sometimes mean that a choice can be made, and sometimes that both should be considered concurrently or conjunctively. Also, the infamous Gherkin And-keyword can have different meanings: sometimes it means sequence, sometimes concurrency, and sometimes logical and (conjunction). Additionally, reaching a state that satisfies a particular Given-condition can be difficult. This is currently left completely to the implementer of the test code in a so-called step definition. It may well be, however, that chaining some other scenarios head-to-tail easily reaches such a state. There is no way to reason about such compositions of scenarios in the BDD approach. In MBT theory, this corresponds to the standard problem of reachability analysis.
Given this analysis of the strong and weak points of BDD and MBT, the goal of our research is to combine them in such a way that we obtain their complementary strengths. Basically, this means that we combine the exploration and specification construction in the form of Gherkin scenarios from BDD, with the test generation, compositionality, and formal reasoning from MBT. We aim to accomplish this by transforming Gherkin scenarios into small models in a formal MBT modeling language. In this way, we can use the discovery and formulation phases of BDD to construct scenarios that are readable and understandable. Moreover, after transforming these scenarios into small models in the MBT formalism, we can compose these small models into larger models, we can use reachability analysis to reach particular Given-states, and we can generate many, long, diversified, and valid test cases that test different aspects and combined features. To the best of our knowledge, there is no research in the literature that automatically generates, composes, and executes tests from BDD scenarios based on a formal model with formal semantics.
In this paper, we define BDD Transition Systems (BDDTS) as Symbolic Transition Systems (STS) with preconditions (for Given steps) and postconditions (for Then steps). These BDDTS are our formal MBT modeling language. STS is a well-defined formalism for MBT [8] that supports formal reasoning, composition, and test generation [9], [10]. We show how scenarios in Given/When/Then-style are transformed into simple BDDTSs using a real-world example of an industrial printer. Then we elaborate on how these simple BDDTSs can be composed into larger BDDTSs, which are the basis for test generation following [10].
We concentrate on the sequential composition of BDD scenarios and show it by example: the post-condition of one scenario (Then-step) enables the pre-condition of the next scenario (Given-step), where ‘enables’ means logical implication. Many other forms of composing scenarios, or, actually, of composing the BDDTSs generated from scenarios, are possible, e.g., choice between scenarios (disjunction), concurrent scenarios, conjunctive scenarios, sequential composition when there is no implication but just overlap, and interrupting scenarios. These compositions, as well as the full formal definition of sequential composition, will be considered in future papers.
d) Overview: The next section introduces BDD scenarios in general, the running example of the industrial printer, and the printer’s scenarios. Section III defines the formalism of BDDTS, after which Section IV illustrates the translation from BDD scenario to BDDTS using the real-world industrial printer example. Section V discusses the composition of BDDTSs and test generation from the composed BDDTS. Sections VI and VII present related work, conclusions, and future research.
II. BDD Scenarios
This section defines the structure of the BDD scenarios and provides a real-world industrial example of a set of scenarios.
A. Structure of BDD scenarios
The structure of BDD scenarios given below is adapted from [3]. In Section III we give a definition of a formal model for BDD scenarios corresponding to this structure.
A Behavior-Driven Development scenario describes a small function of the system. It describes some user-visible behavior of the system. This user can be a human user, another component in the same system, or another system interacting with the system. We refer to all these users as the system
environment or simply environment. We will call the system being described with the scenarios the System Under Test (because we will generate test cases), or just the system. A BDD scenario consists of three parts: Given, When, and Then. Given specifies the required system state that is a precondition for the next step (When).
When describes an action or a sequence of actions. Either the environment or the system performs such an action. The action has an effect or consequence on the party not performing the action.
Then can be described in three different ways:
• it describes the action the system does after the When step, or
• it describes the state in which the system ends up finally, or
• it describes both, i.e., the action the system performs after When, and the final state.
We note that if Then only specifies the action, there is still an implicit final state, namely the state reached after performing the action.
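As a reading aid (not part of the paper’s formalism), the Given/When/Then structure above can be captured as a small data type; all names below are our own.

```python
# A sketch of the scenario structure described above; names are ours.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    actor: str          # "environment" or "system"
    name: str           # e.g. "submit", "print_start"

@dataclass
class Scenario:
    given: Callable[[dict], bool]        # precondition on the system state
    when: list[Action]                   # one action or a sequence of actions
    then_actions: list[Action]           # system reactions, possibly empty
    then_state: Optional[Callable[[dict], bool]]  # final-state check, if any
```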
B. Printer example
We now introduce the running example of this paper: a printer. The printer works as follows. The operator starts by submitting a job file using a submission method. Based on the submission method, the printer adds a controller job to the print queue called scheduled jobs. The moment the controller job is added to the scheduled jobs, the printer starts printing. If the operator does nothing, the printer continues and completes printing. While the controller job is being printed, however, the operator may pause printing. A paused job may be resumed, i.e., the printer continues printing the job from the scheduled jobs, or moved to another queue called the waiting jobs. Waiting jobs are not printed. The operator can move the controller job from the waiting jobs back to the scheduled jobs to start printing the job from scratch.
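The job life cycle just described can be summarised as a small transition table; the state and event names below are our own shorthand, chosen to match the scenarios that follow.

```python
# A sketch of the printer's job life cycle as a transition table (ours).
JOB_TRANSITIONS = {
    ("scheduled", "print_start"):       "printing",
    ("printing",  "print_complete"):    "printed",
    ("printing",  "pause"):             "paused",
    ("paused",    "resume"):            "printing",
    ("paused",    "move_to_waiting"):   "waiting",
    ("waiting",   "move_to_scheduled"): "scheduled",
}

def step(state: str, event: str) -> str:
    """Return the next job state, or raise if the event is not allowed."""
    try:
        return JOB_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
```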
To describe this flow with BDD scenarios, the flow is divided into several scenarios that describe a small functionality. Below, we write out all these BDD scenarios.
C. Scenarios
Scenario 1: A controller job is added to the scheduled jobs after a job is submitted
• Given a Job file
• When the operator submits the Job file with ⟨Submission method⟩
• Then the printer adds a new Controller job to the scheduled jobs
• And the Controller job is of type ⟨job type⟩
The table below shows the values allowed for Submission method in combination with job type:
| Submission method | Job Type       |
|-------------------|----------------|
| LPR               | Production job |
| IPP               | Production job |
| JMF               | Production job |
| Socket            | Streaming job  |
Scenario 2: A controller job is moved to the printed jobs the moment printing completes
• Given a controller job is in the scheduled jobs
• When the printer starts printing the controller job
• And the printer completes printing the controller job
• Then the controller job is in the printed jobs
Scenario 3: There is a hard copy of the controller job after completing the printing of the controller job.
• Given a controller job is in the scheduled jobs
• When the printer starts printing the controller job
• And the printer completes printing the controller job
• Then there is a printed output
• And the printed output is a hard copy of the controller job
Scenario 4: While a job is being printed, it can be paused
• Given a controller job is printing
• When the operator pauses the printing of the controller job
• Then the controller job is paused
Scenario 5: A job that is paused can be resumed to be printed
• Given a controller job is paused
• When the operator resumes printing the controller job
• Then the controller job is printing
Scenario 6: A controller job that is paused and moved to the waiting jobs before it completes, is not moved to the printed jobs
• Given a controller job is paused
• When the operator moves the controller job to the waiting jobs before the printer completes printing
• Then the controller job is in the waiting jobs
• And the controller job is not in the printed jobs
Scenario 7: A controller job that is in the waiting jobs can be moved to the scheduled jobs
• Given a controller job in the waiting jobs
• When the operator moves the controller job to the scheduled jobs
• Then the controller job is in the scheduled jobs
III. A FORMAL MODEL FOR BDD SCENARIOS
In this section, we define a formal model, namely a transition system, for BDD scenarios. This transition system will need to store the data elements of a scenario, e.g. the controller job of the printer example. These data elements are defined in the next section, and after that, we define the BDD transition system itself.
A. Data elements
In this section, we introduce standard programming concepts like variables, terms (i.e. expressions), types, and assignments. For a complete, formal definition of data elements, we refer to [10].
a) Syntax: Terms consist of ground terms, e.g., \textit{true}, variables, e.g., \( x \), and operations, e.g., \( \wedge \). Let \( X \) be a set of variables. The set of terms over variables \( X \) is denoted as \( T(X) \). Ground terms are the terms without variables, denoted as \( T(\emptyset) \).
Terms have a type, e.g. term \textit{true ∧ x} has type \textit{Bool}. With \textit{true ∧ x} \( : T_{\text{Bool}}(\{x\}) \) we denote that term \textit{true ∧ x} is of type \textit{Bool} and contains (at most) variable \( x \). We assume that there is a function \textit{type} mapping any term to its type, e.g. \textit{type}(true ∧ x) = Bool. For any set of variables \( X \), we define \( T(X) \) to only contain well-formed and well-typed terms, i.e., the non-well-formed term \textit{true∧} and the non-well-typed term \( 4 ∧ 3 \) are not in \( T(X) \).
An assignment assigns a term to a variable, e.g. in \( x := x + 1 \), term \( x + 1 \) is assigned to variable \( x \). Given sets \( X \) and \( Y \) of variables, \( T(Y)^X \) denotes the set of assignments that assign to each variable \( x \in X \) a term \( t \in T(Y) \).
b) Semantics: A valuation \( \vartheta(X) \) is a function assigning values to variables \( X \). Ground terms have a value corresponding to their syntax, e.g. the value of \textit{true} is denoted as \textit{true}. A \textit{term evaluation} \( \vartheta_{\text{eval}}(X) \) extends valuation \( \vartheta(X) \) to evaluate terms containing variables. For example, if \( \vartheta(x) = \textit{false} \), then \( \vartheta_{\text{eval}}(x)(\textit{true ∧ x}) = \textit{false} \).
Given a term evaluation \( \vartheta_{\text{eval}}(Y) \), a set of assignments \( A \in T(Y)^X \) is evaluated to a valuation \( \vartheta(X) \). Here, \( \vartheta(X) \) is defined by evaluating each assignment \( x := t \in A \), such that \( \vartheta(X)(x) = \vartheta_{\text{eval}}(Y)(t) \). For example, given \( \vartheta(y) = 3 \), the assignment \( \{ x := y + 1 \} \) evaluates to a valuation with \( \vartheta(X)(x) = 4 \).
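The following sketch (ours, with a deliberately simplified term representation) mirrors these definitions: evaluating a set of assignments under a valuation yields a new valuation, exactly as in the \( x := y + 1 \) example above.

```python
# Toy evaluator (ours): terms are ground values, variable names (strings),
# or (operation, args...) tuples; a valuation is a dict from names to values.
def eval_term(term, valuation):
    if isinstance(term, str):                      # a variable
        return valuation[term]
    if isinstance(term, tuple):                    # an operation
        op, *args = term
        vals = [eval_term(a, valuation) for a in args]
        return {"and": all, "plus": sum}[op](vals)
    return term                                    # a ground term

def eval_assignments(assignments, valuation):
    # all right-hand sides are evaluated under the *old* valuation
    return {x: eval_term(t, valuation) for x, t in assignments.items()}

# eval_assignments({"x": ("plus", "y", 1)}, {"y": 3}) == {"x": 4}
```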
B. BDD transition systems
We provide the definition of a Symbolic Transition System inspired by [10].
Definition 1. A Symbolic Transition System is a tuple \( S = (LOC, l_0, V, i, I, \Lambda, \rightarrow) \), where
- \( LOC \) is a set of locations.
- \( l_0 \in LOC \) is the initial location.
- \( V \) is a set of global variables. They are global and accessible in the entire transition system.
- \( i \in T(\emptyset)^V \) is the initial assignment of the global variables.
- \( I \) is a set of interaction variables. We assume \( V \cap I = \emptyset \) and set \( Var =_{\text{def}} V \cup I \). They are called interaction variables as they represent the data interaction associated with a switch (see below). Variables (\( Var \)) have types, i.e. either basic data types like \textit{Bool}, \textit{Int} and \textit{String}, or composite datatypes with different fields of different types.
- \( \Lambda \) is the set of gates. We define \( \Lambda = \Lambda_i \cup \Lambda_o \), where \( \Lambda_i, \Lambda_o \) are the sets of input and output gates, respectively.
- \( \rightarrow \subseteq LOC \times \Lambda \times I^* \times T_{\text{Bool}}(Var) \times T(Var)^V \times LOC \) is the switch relation.
In a switch \( (\text{loc}, \lambda, f_0...f_k, \varphi, \rho, \text{loc}') \in \rightarrow \) the elements are called (source) location, gate, interaction variables, guard, assignments, and (destination) location, respectively.
A BDD transition system (BDDTS) extends the definition of STS with a few additional elements: it is a tuple \( B = (S, L_g, IG, OG, guardOfLoc) \), where
- \( S \) is the Symbolic Transition System.
- \( L_g \subseteq LOC \) is the set of goal locations.
- \( IG \in T_{\text{Bool}}(V) \) is the input guard of the BDDTS, denoting the pre-condition of the initial location.
- \( OG \subseteq T_{\text{Bool}}(V) \) is the set of output guards, denoting the post-conditions of the BDDTS.
- \( guardOfLoc : L_g \rightarrow OG \) is a function mapping goal locations to their corresponding output guard.
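As a companion to these definitions, the sketch below (our own Python transcription, not part of the formalism) fixes one possible concrete representation; the set \( OG \) is left implicit as the range of guard_of_loc.

```python
# One possible representation (ours) of Definition 1 and the BDDTS extension.
from dataclasses import dataclass, field

@dataclass
class Switch:
    source: str
    gate: str                 # input gates start with '?', outputs with '!'
    interaction_vars: tuple   # the sequence f_0 ... f_k
    guard: object             # boolean term over V and I
    assignments: dict         # variable -> term over V and I
    target: str

@dataclass
class STS:
    locations: set
    initial_location: str
    global_vars: set
    init: dict                # the initial assignment i of the global variables
    interaction_vars: set
    gates: set
    switches: list

@dataclass
class BDDTS:
    sts: STS
    goal_locations: set
    input_guard: object                                # IG
    guard_of_loc: dict = field(default_factory=dict)   # goal loc -> output guard
```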
C. Semantics of BDDTS
The semantics of an STS is formally defined in [10]. With respect to semantics, BDDTS differs from STS by the added input guard and output guards on locations. In this subsection, we provide a short explanation of the intuition of the semantics of STS and BDDTS.
Initially, the global variables of an STS have the values determined by the initialization \( i \), and the current location is \( l_0 \). We note that the values of the global variables can be obtained in any location. Next, we can execute an \textit{enabled} switch. Let \( (\text{loc}, \lambda, f_0...f_k, \varphi, \rho, \text{loc}') \) be a switch of the STS. This switch is enabled if the current location is \( loc \), and the guard \( \varphi \) evaluates to \textit{true} for the current values of the global variables and the values of the interaction variables. The values of the interaction variables are determined by the environment (if \( \lambda \) is an input gate) or by the system (if \( \lambda \) is an output gate). Execution of an enabled switch results in evaluating the assignments \( \rho \) of the switch. This way, global variables may be assigned new values. Additionally, the current location becomes \( loc' \). As an example, consider the BDDTS of Scenario 1 in Figure 2. Initially, we are in location 0 and suppose that for the Job File (JF), \( JF.id = 0 \) and the input guard \( is(JF) \) evaluates to \textit{true}. Suppose that we wish to execute the switch with input gate \( ?submit \). By letting the environment choose \( jf \) with \( jf.id = 0 \), the guard \( jf.id == JF.id \) evaluates to \textit{true}, so we may indeed execute this switch. Then the assignment for the Submission Method (SM), \( SM := sm \), is executed accordingly. The next switch, from location 1 to 2, can be executed similarly, though now the system chooses the values of the interaction variables. For example, if the system uses controller job \( cj \) with \( cj.type == \text{"Streaming job"} \), the environment should have chosen \( sm := \text{"Socket"} \) in the previous step to enable the execution of the switch to location 2. If so, the assignments are executed, we reach location 2, and we can check whether the output guard really holds (as will be explained in Subsection V-B).
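Continuing the sketches above, executing an enabled switch can be written as follows (eval_term is the toy evaluator from the data-elements sketch; the function is ours, not part of [10]).

```python
# Sketch (ours) of the switch-execution step described above.
def execute(sts, location, valuation, switch, interaction_values):
    assert switch.source == location, "not in the source location"
    env = {**valuation, **interaction_values}
    assert eval_term(switch.guard, env), "guard not satisfied"
    # assignments are evaluated under the pre-state, then update the globals
    updates = {x: eval_term(t, env) for x, t in switch.assignments.items()}
    return switch.target, {**valuation, **updates}
```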
IV. BDD scenario translation
In this section, we define how to convert a BDD scenario written in Given-When-Then style into a BDD transition system. Currently, we perform this conversion manually. We restrict ourselves to scenarios as described in Section II. We identify the elements of the BDD transition system from
the scenarios. We use Scenario 2 of Subsection II-C to explain the translation.
From the Given step of Scenario 2 we extract the preconditions for the scenario. This precondition describes the required state of the system. We need two elements: the global variables (V) and the input guard (IG).
- **Given a controller job is in the scheduled jobs**
In this scenario, the controller job (CJ) and the scheduled jobs (SJ) are the global variables. We write global variables in capital letters. The fact that the controller job is in the scheduled jobs is the condition to be checked, so we define operation is_in_list and define $IG = is\_in\_list(CJ, SJ)$ as the boolean term defining the input guard of the initial location.
From the When step we extract the actions performed by the system or the environment. Each action is translated to a gate. If When describes multiple actions, conjoined by And, we build a sequence of switches for these actions. If the actor is the environment, we use an input gate, and if the actor is the system, we use an output gate. In addition, we look for interaction variables and global variables. The interaction variables update the values of global variables through assignments.
- **When the printer starts printing the controller job**
- **And the printer completes printing the controller job**
Here, print_start and print_complete are both output gates, and the controller job is the global variable CJ. The print_start gate is on the switch from the initial location 0 to location 1, and the print_complete gate is on the switch from location 1 to location 2. The guards are defined over interaction and global variables to ensure the conditions on data are satisfied, and the values of global variables are updated in the assignments. There are two interaction variables, id and state, for gate print_start: the id is used in the switch guard to make sure the printer is printing the requested controller job, and state is used to check that the state of the job has changed to "printing" after print_start. The interaction variable state is then assigned to the state field of the global variable CJ.
In the Then step, we look for the global variables and output guards:
- **Then the controller job is in the printed jobs**: controller job and printed jobs are the global variables CJ and PJ, and is_in_list(CJ, PJ) is the output guard.
We note that the information we extract from a single scenario might be insufficient for the model. We obtain complementary information from related scenarios in the set of existing scenarios. Scenarios 2 and 3 are an example of this case. There are two main outputs from the system when printing completes: 1) the controller job appears in the printed jobs, and 2) there is a hard copy of the controller job. These are defined in separate scenarios, but both are needed to obtain a sufficient set of variables for the action print_complete. To resolve ambiguities in scenarios, we add extra information to the model. For example, the relation between the Printed output (PO) and the corresponding Controller job (CJ) in scenarios 2 and 3 is made explicit by storing the id of CJ in PO: PO.id_cj == CJ.id.
In Figures 2-7 you find the BDDTSs for scenarios 1-7. The BDDTS of Scenario 1 is defined as $B_1 = ((LOC, l_0, V, i, \mathcal{I}, \Lambda, \rightarrow), L_g, IG, OG, guardOfLoc)$ where:
- $LOC = \{0, 1, 2\}$, with initial location $l_0 = 0$
- $V = \{JF, SM, CJ, SJ\}$
- $\mathcal{I} = \{jf, sm, cj, sj\}$
- $\Lambda_i = \{?submit\}$ and $\Lambda_o = \{!add\}$
- $L_g = \{2\}$
- $IG = is(JF)$
- $\rightarrow \, = \{ (0, ?submit, \langle jf, sm \rangle, jf.id == JF.id, \{SM := sm\}, 1),\ (1, !add, \langle cj, sj \rangle, \phi_1, \{CJ := cj, SJ := append(cj, SJ)\}, 2) \}$
$\phi_1$ is the guard on the switch from location 1 to 2, shown in Figure 2.
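For illustration, the sketch below writes down a possible BDDTS for Scenario 2 using the data classes introduced earlier; the exact gate and guard spellings are our own and may differ from Figure 3.

```python
# A possible transcription (ours) of Scenario 2's BDDTS; guards are encoded
# as the same (operation, args...) tuples used by the toy evaluator.
scenario2 = BDDTS(
    sts=STS(
        locations={"0", "1", "2"},
        initial_location="0",
        global_vars={"CJ", "SJ", "PJ"},
        init={},                       # initial values omitted in this sketch
        interaction_vars={"id", "state", "cj", "pj"},
        gates={"!print_start", "!print_complete"},
        switches=[
            Switch("0", "!print_start", ("id", "state"),
                   ("and", ("eq", "id", "CJ.id"), ("eq", "state", "printing")),
                   {"CJ.state": "state"}, "1"),
            Switch("1", "!print_complete", ("cj", "pj"),
                   ("eq", "cj.id", "CJ.id"),
                   {"CJ": "cj", "PJ": ("append", "cj", "PJ")}, "2"),
        ],
    ),
    goal_locations={"2"},
    input_guard=("is_in_list", "CJ", "SJ"),
    guard_of_loc={"2": ("is_in_list", "CJ", "PJ")},
)
```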
V. COMPOSITION AND TEST GENERATION

A. Composition

We use a pre-post-condition composition, such that the input guard of a scenario B is satisfied by an output guard of a scenario A. The composition composes BDDTS A and B sequentially, by merging the respective goal location of A with the initial location of B. Specifically, we define composability of two BDDTS as follows:
Let $A$ and $B$ be two BDDTS, where $l$ is a goal location of $A$, and $IG$ is the input guard of $B$. Then $B$ is composable with $A$ in $l$ if the output guard of $l$ in $A$ implies the input guard of $B$, i.e. for all valuations $\varrho(V_A \cup V_B)$, we have that $\varrho(\text{guardOfLoc}(l) \Rightarrow IG) = \textit{true}$.
Take scenarios 1 and 2 in Figures 2 and 3 as examples. The input guard of scenario 2 is $\text{is\_in\_list}(CJ, SJ)$, and the output guard of scenario 1 is $\text{is\_in\_list}(CJ, SJ) \land (CJ.type == "ProductionJob" \lor CJ.type == "StreamingJob")$. The implication holds since the input guard of scenario 2 is the left conjunct of the output guard of scenario 1.
Another example is that the input guard of scenario 4 is the same as the output guard of scenario 5 in Figures 5 and 6.
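Since composability is a logical implication between guards, it can be checked mechanically. The sketch below (ours) encodes the scenario 1/2 example with the z3 SMT solver's Python bindings, abstracting the guard atoms into propositional variables.

```python
# Checking guardOfLoc(l) => IG with z3 (sketch; guard encoding is ours).
from z3 import Bools, Implies, Not, And, Or, Solver, unsat

in_sj, prod, stream = Bools("in_sj production streaming")
out_guard_s1 = And(in_sj, Or(prod, stream))   # output guard of scenario 1
in_guard_s2 = in_sj                           # input guard of scenario 2

s = Solver()
s.add(Not(Implies(out_guard_s1, in_guard_s2)))
assert s.check() == unsat                     # the implication is valid
```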
We now explain the pre-post-composition by example and defer giving a general definition to future work. We compose the BDDTS of all scenarios of Subsection II-C. The end result is shown in Figure 1. We start with scenario 1, as we assume the input guard of this scenario to be true in the initial location of a printer. Scenario 1 is composable with both scenarios 2 and 3, because the output guard of scenario 1 is stronger than both input guards. We pick scenario 2, and merge the goal location of scenario 1 with the initial location of scenario 2, i.e. the print_start switch can now be taken from the goal location of scenario 1. We choose to set the output guard of the goal location to be the weaker input guard of scenario 2 (we explain why later in the example).
We note that the BDDTSs of scenarios 2 and 3 are the same, except for the output guard of the last location. We can therefore conjoin the output guard of scenario 3 to goal location 2 of scenario 2, which is now location 6 in the composition.
The output guard of location 3 of the composition is the same as the input guard of scenario 4, so trivially, scenario 4 can be composed next. We merge scenario 4 with location 3 of the composition. Since the output guard of location 3 is the same as the input guard of scenario 4, we simply omit the input guard.
Similarly, the output guard of location 4 is the same as the input guard of scenario 5, so we add it to the composition. We note that the output guard of scenario 5 is the same as the output guard of location 3 of the composition, so we merge the respective locations such that the switch with the print_resume gate loops back to location 3.
Similarly, scenarios 6 and 7 are added. The output guards of scenario 7 and location 2 are the same, so we make the switch of scenario 7 reach location 2. Note that this would not have been possible if we had not weakened the output guard of location 2 to only be is_in_list(CJ, SJ), since the output guard of scenario 7 would then be weaker than the output guard of location 2, such that merging would violate composability.
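A sketch (ours) of the merge step used in this walkthrough, ignoring location-name clashes between the two BDDTSs for brevity; implies would be a guard-implication check such as the z3 pattern above.

```python
def compose(A, B, goal_loc, implies):
    """Sequentially compose BDDTS B after goal location goal_loc of A (sketch)."""
    assert implies(A.guard_of_loc[goal_loc], B.input_guard)
    ren = lambda l: goal_loc if l == B.sts.initial_location else l
    for sw in B.sts.switches:
        A.sts.switches.append(Switch(ren(sw.source), sw.gate,
                                     sw.interaction_vars, sw.guard,
                                     sw.assignments, ren(sw.target)))
    A.sts.locations |= B.sts.locations - {B.sts.initial_location}
    A.goal_locations |= B.goal_locations
    A.guard_of_loc.update(B.guard_of_loc)
    A.guard_of_loc[goal_loc] = B.input_guard   # keep the weaker guard
    return A
```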
We have the following remarks on composing BDDTS:
- The precondition of a BDD scenario can be inconsistent with the rest of the scenario (e.g. by mistake). Translation to BDDTS then preserves this inconsistency. For example, if the input guard of scenario 2, that the controller job is in the scheduled jobs, were omitted, this could imply that the state of this job cannot be ‘printing’, such that the guard of the switch with the print_start gate is violated. Hence, this should be checked before composition, as composition relies on the validity of input guards.
- Similarly, a mistake may be present in the output guard, e.g. the guard could be unsatisfiable. For example, if location 1 of scenario 4 had output guard CJ.state == "printing", this would be inconsistent with the assignment CJ.state := "paused" of the previous switch. Composing a BDDTS in a goal location with an unsatisfiable output guard is pointless, and should therefore be avoided by checking the satisfiability of output guards beforehand.
- Output guards could be strengthened by including the restrictions that are imposed by previous guards and assignments of switches (i.e. path condition in [10]), allowing more BDDTS to be composed in the respective goal location.
- There are edge cases where composition with weakening the output guard and merging locations may lead to inconsistencies, comparable with the discussed inconsistencies for input guards. However, weakening is a preferred property for pre-post-composition, as it allows more scenarios to be composable.
B. Testing
To generate test cases from BDDTS, we use the test generation algorithm described in [10]. This algorithm generates test cases that reach all switches of an STS. Specifically, this means that all scenarios will be executed, and all output guards will be checked as part of a test case. To use the algorithm, we need to translate a BDDTS to an STS. Specifically, we need to encode the input guard and output guards in an STS. We note that, according to the above composition, we may assume that the initial location of a BDDTS corresponds to the initial location of the system being tested. Therefore we do not need to check the input guard, as the variables are initially assigned such that the input guard holds. Hence, we only need to check the output guards in test cases.
We call the STS extracted from the BDDTS the test model. In this test model, we substitute the output guards in every goal location of the BDDTS by two special switches with gates ?check and !retrieve. We assume that we can obtain the values of global variables from the system. With the ?check gate the tester requests, from the system, the values of the variables used in the output guard. This is done by providing a value, via an interaction variable, that identifies the variable we ask for, e.g. the id of a controller job. The system can then respond through the interaction variables of the !retrieve switch, by providing the actual values of the requested variables, e.g. a controller job with fields id=0, type="Production Job", and state="printing". The switches with the ?check and !retrieve gates are encoded as a loop from and to the goal location.
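The construction just described, adding a ?check/!retrieve loop at every goal location, can be sketched as follows (ours; the interaction-variable handling is heavily simplified).

```python
def to_test_model(bddts):
    """Add a ?check/!retrieve loop at every goal location (sketch, ours)."""
    sts = bddts.sts
    sts.gates |= {"?check", "!retrieve"}
    for loc, out_guard in bddts.guard_of_loc.items():
        mid = f"{loc}_checking"                 # fresh intermediate location
        sts.locations.add(mid)
        # the tester asks for the variables used in the output guard ...
        sts.switches.append(Switch(loc, "?check", ("ids",), True, {}, mid))
        # ... the SUT answers; the guard validates the returned values
        sts.switches.append(Switch(mid, "!retrieve", ("values",),
                                   out_guard, {}, loc))
    return sts
```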
Fig. 9. Test model of Scenario 1.
Figure 9 shows the test model of scenario 1. Compared to its BDDTS in Figure 2, the ?check and !retrieve switches and the intermediate loop location are added with dashed lines. In the ?check switch we use the interaction variables to pass identifiers for retrieving the controller job and the scheduled jobs list from the system. In the !retrieve switch we then check, with the guard, that the returned values have the same identifiers and adhere to the condition specified by the output guard.

In Subsection V-A we noted that the user may write a post-condition for a BDD scenario that cannot be satisfied for any value of the global variables. For example, if the output guard of location 4 (i.e. scenario 4) had been CJ.state == 'printing', this would be inconsistent with the assignment CJ.state := 'paused' of the previous switch print_pause. As a consequence, the test generation algorithm of [10] will not be able to generate a test that reaches the !retrieve switch of this output guard. Hence, this way we are able to notice the inconsistency and notify the user of their mistake.

With the algorithm of [10] we could obtain a test case for scenario 1 that reaches the !retrieve gate. The gates and values for a successful execution would look as follows:

```
?submit   { Job(id=0), JMF }
!add      { CJob(id=0, type='Production Job', state='ready'),
            Queue(name='Scheduled Jobs', elements=EmptyList) }
?check    { 0, 'Scheduled Jobs' }
!retrieve { CJob(id=0, type='Production Job', state='paused'),
            Queue(name='Scheduled Jobs', elements=List(id=0)) }
```
VI. RELATED WORK

We divide related work into four categories: model-based approaches for scenarios, model-based testing with STSs, testing based on BDD, and model-based testing with BDD.
a) Model-based approaches for scenarios: There is quite a history of (semi-)automatic model generation from scenarios [11], [12], [13]. In these papers, scenarios are expressed as sequence diagrams (SDs) and then converted to state charts. Due to the lack of precise formal semantics for UML diagrams, [11] uses extra tooling, like OCL for defining variables, and Finite State Machines (FSMs), such that this extra information enables converting SDs to state charts and merging scenarios. In [12] and [13], FSMs are used as well. [14] and [15] focus on testing, and transform UML models into Labelled Transition Systems to add precise formal semantics, similar to those above.
In comparison, we use BDD scenarios, written in a structured text format. This text format has made BDD popular among non-technical stakeholders in industry. BDD scenarios focus on the behavior of the SUT, with steps that specify preconditions, actions, and expected behavior. These scenarios describe how the system should behave in response to various inputs and conditions, while SDs only specify sequences of actions. However, just like UML, BDD scenarios may be ambiguous. We address this by formalizing them with BDDTS. STSs are better suited for modeling complex systems, as LTSs and FSMs lack the notion of data. Although in our approach scenario translation is currently manual, automation is possible, e.g. by using parsers as in [16].
b) Model-based testing with STSs: In [9], Frantzen et al. introduce Symbolic Transition Systems, which extend Labelled Transition Systems with data. They provide a test algorithm based on the $\text{ioco}_F$ relation. [10] is an extension of Frantzen’s work that provides robust test selection based on switch coverage. In this paper, we build on the STS definition and test generation algorithm of [10].
c) Testing based on BDD: "The difficulty of writing system-level test cases" is one of the challenges presented in [17]. Our approach helps in this regard by model composition and automatic generation of tests from BDD scenarios. In [18] they combine testing and formal verification by integrating test scenarios and formal properties in a single human-readable document. Then they use the Cucumber [6] tool for testing using the document. In contrast, we convert scenarios into STSs, which are per se formal models, and generate tests from the model. In [19] they provide a technique for regression testing in BDD. Their technique finds and selects the test code that is likely to be affected and needs to be modified for a change in the system. By composing scenarios leading to the code change, we could achieve the same goal and have high traceability between the tests and scenarios. In [16], the authors introduce a semi-automatic approach for extracting the code skeleton and step definition from a single scenario. They create class and sequence diagrams in a semi-automatic way and have implemented this in the Cucumber tool. While we currently do the translation from scenarios to BDDTS manually, we focus on model (scenario) composition for a more comprehensive set of test cases.
d) MBT with BDD: In [20], an MBT tool called Skyfire is presented. Skyfire automatically generates Cucumber test scenarios from UML state machine diagrams. The tests are then generated by the Cucumber tool. A similar approach is taken in [21]. They use UML diagrams to generate acceptance tests in the form of sequences of Gherkin scenarios. Executable test cases are then generated from these scenarios. This is different from what we do. We convert scenarios to formal models and generate tests from the model rather than the scenarios. In [22] they provide technical integration of BDD with the MBT tool Graphwalker and the Robot Framework, but they provide no formalism. In [23], a combination of acceptance test-driven development and model-based testing is presented in some real-world projects. They conclude that both approaches complement each other and increase test coverage. In our work, we provide an intertwined approach to benefit from both BDD scenarios and formal models. In [24] and [25], the authors use BDD to automate the assessment of artifacts throughout the development process. They use computational ontologies to formalize the concepts used in scenarios and generate test cases from ontology models, while we generate tests from the formal BDDTS.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we proposed an approach for automatic test generation and execution of BDD scenarios. We introduced a formal model for BDD scenarios: the BDD transition system (BDDTS). For a set of real-world BDD scenarios of an industrial printer, we showed how to translate BDD scenarios into BDDTSs, and how to compose these BDDTSs with respect to the pre- and postconditions of the scenarios. To automatically generate test cases, we convert the composed BDDTS to a Symbolic Transition System by adding special switches with ?check and !retrieve gates, for checking the postconditions of
BDD scenarios with the System Under Test. We use the test generation algorithm from [10] to obtain test cases, such that all scenarios are executed, and all postconditions are checked. There are several directions for future work:
- Find a general, formal definition of pre-post condition composition. This composition should allow the preconditions of a scenario to be weaker than the postcondition of the scenario to be composed while preventing the introduction of inconsistencies.
- Support checking, and possibly correcting, inconsistencies introduced by the writer of a BDD scenario. Also, BDD postconditions can be strengthened by taking into account the consequences of previous actions of the scenario itself and of other scenarios in the (partial) composition. This way, more scenarios can be composed.
- In this paper, the composition is performed before test generation and execution. However, the composition could also be dynamic: scenarios are then composed ‘on the fly’ during test generation and execution. The advantage is that test execution can be steered based on past execution results and the tester’s current wishes.
- We provided our definition of BDD scenarios in Subsection II-A because we found that most BDD scenarios are written vaguely and are not suitable for translation to BDDTS. The next step is automatic correction and modeling of BDD scenarios. This could be implemented in existing tools like Cucumber and SpecFlow.
- Finally, a combination of pre-post-composition with other forms of composition, like parallel composition, conjunction, and disjunction, should be investigated.
ACKNOWLEDGEMENT
This publication is part of the project TiCToC (Testing in Times of Continuous Change), project number 17936, of the research program MasCot (Mastering Complexity), which is (partly) financed by the Dutch Research Council (NWO).
REFERENCES
Implementing OCLP as a front-end for Answer Set Solvers: From Theory to Practice
Martin Brain and Marina De Vos*
Department of Computer Science
University of Bath
Bath, United Kingdom
{mjb,mdv}@cs.bath.ac.uk
Abstract. Ordered Choice Logic Programming (OCLP) allows for preference-based decision-making with multiple alternatives and without the burden of any form of negation. This complete absence of negation does not weaken the language as both forms (classical and as-failure) can be intuitively simulated in the language. The semantics of the language is based on the preference between alternatives, yielding both a skeptical and a credulous approach. In this paper we discuss the theoretical basis for the implementation of an OCLP front-end for answer set solvers that can compute both semantics in an efficient manner. Both the basic algorithm and the proposed optimizations can be used in general and are not tailored towards any particular answer set solver.
1 Introduction
Examining human reasoning, we find that people often use preference, order or defaults for making decisions: “I prefer this dish”, “This color goes better with the interior”, “This item costs more”, “In general, the human heart is positioned at the left”. When faced with conflicting information, one tends to make decisions that prefer an alternative corresponding to more reliable, more complete, more preferred or more specific information. When modeling knowledge or non-monotonic reasoning via computer programs, it is only natural to incorporate such mechanisms.
In recent years several proposals for the explicit representation of preference in logic programming formalisms have been put forward. [11, 10] are just two examples.
Systems that support preferences find applications in various domains such as law, object orientation, scheduling, model based diagnosis and configuration tasks. However, most approaches use preferences only when the models have already been computed, i.e. decisions have already been made; or only support preferences between rules with opposite (contradictory) consequences, thus statically limiting the number of alternatives of a decision.
In [8], we proposed a formalism, called Ordered Choice Logic Programming, that enables one to dynamically reason about situation-dependent decisions involving multiple alternatives. The dynamics of this system is demonstrated by the following example.
---
* This work was partially funded by the Information Society Technologies programme of the European Commission, Future and Emerging technologies under the IST-2001-37004 WASP project.
Example 1. Buying a laptop computer involves a compromise between what is desirable and what is affordable. Take, for example, the choice between a CD, CDRW or DVD drive. The CD is the cheaper option. On the other hand, for a laptop, a DVD drive may be more useful than a CD writer. If the budget is large enough, one could even buy two of the devices. The above information leads one to consider two possible situations.
- With a smaller budget, a DVD-player is indicated, while
- with a larger budget, one can order both a DVD-player and a CD-writer.
To allow this kind of reasoning, a program consists of a (strict) partially ordered set of components containing choice rules (rules with exclusive disjunction in the head). Information flows from less to more specific or preferred components until a conflict among alternatives arises, in which case the most specific one will be favored. The situation becomes less clear when two alternatives are equally valued or are unrelated.
The decision in this case is very situation dependent: a doctor having a choice between two equally effective cures has to make a decision, while it is better to remain indecisive when two of your friends have an argument! To allow both types of intuitive reasoning, a credulous and skeptical semantics are introduced.
OCLP provides an elegant and intuitive way of representing and dealing with decisions. People with little or no experience with non-monotonic reasoning can easily relate to it, due to the absence of negation. This absence of negation does not restrict the language in any way, as both types of negation (classic and as-failure) can easily be simulated.
In this paper, we propose a basic algorithm and optimizations for building an OCLP front-end for answer set solvers. Smodels ([12]), developed at Helsinki University of Technology, and DLV ([17]), created at the Technical University of Vienna and the University of Calabria, are currently the most popular ones. An implementation built on top of Smodels can be obtained from http://www.cs.bath.ac.uk/~mdv/oct/.
The remainder of this paper is organized as follows: we continue in Section 2 with a short overview of the basic information concerning choice logic programming, the language behind OCLP. Section 3 focuses on the introduction of OCLP with its skeptical and credulous answer set semantics. Section 4 presents a mapping of OCLP to semi-negative logic programs, allowing answer set solvers to work with OCLP. These mappings, one for each semantics, can then serve as the foundation on which we build the OCLP front-end. Apart from this theoretical/naive mapping, we propose various improvements/optimizations which allow answer set solvers to handle the transformed program more efficiently. We end this paper with a discussion of the relations to other approaches (Section 5) and directions for future research (Section 6).
2 Choice Logic Programming
Choice logic programs [7] represent decisions by interpreting the head of a rule as an exclusive choice between alternatives.
Formally, a Choice Logic Program [7], CLP for short, is a countable set of rules of the form $A \leftarrow B$ where $A$ and $B$ are finite sets of ground atoms. Intuitively, atoms in $A$ are assumed to be xor’ed together while $B$ is read as a conjunction (note that $A$
may be empty, i.e. constraints are allowed). The set \( A \) is called the head of the rule \( r \), denoted \( H_r \), while \( B \) is its body, denoted \( B_r \). In examples, we use “\( \oplus \)” to denote exclusive disjunction, while “\( \land \)” is used to denote conjunction.
The Herbrand base of a CLP \( P \), denoted \( B_P \), is the set of all atoms that appear in \( P \). An interpretation\(^1\) is a subset of \( B_P \).
A rule \( r \) in a CLP is said to be applicable w.r.t. an interpretation \( I \) if \( B_r \subseteq I \). Since we are modeling choice, we have that \( r \) is applied when \( r \) is applicable and\(^2\) \( |H_r \cap I| = 1 \). A rule is satisfied if it is applied or not applicable. An interpretation that satisfies every rule of the program is a model. A model \( M \) is said to be minimal if there does not exist a model \( N \) such that \( N^+ \subset M^+ \).
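These notions are easy to operationalise; in the sketch below (ours), a rule is a pair of frozen sets (head, body) and an interpretation is a set of atoms.

```python
# Sketch (ours) of applicability, application, and the model check for CLPs.
def applicable(rule, interp):            # rule = (head_set, body_set)
    head, body = rule
    return body <= interp

def applied(rule, interp):
    head, body = rule
    return applicable(rule, interp) and len(head & interp) == 1

def is_model(program, interp):
    # an interpretation is a model iff every rule is satisfied,
    # i.e. applied or not applicable
    return all(applied(r, interp) or not applicable(r, interp)
               for r in program)
```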
3 Ordered Choice Logic Programming
An ordered choice logic program (OCLP) is a collection of choice logic programs, called components, which are organized in a strict partial order\(^3\) that represents some preference criterion (e.g. specificity, reliability, ...).
**Definition 1.** An Ordered Choice Logic Program, or OCLP, is a pair \( \langle C, \prec \rangle \) where \( C \) is a finite set of choice logic programs, called components, and “\( \prec \)” is a strict pointed partial order on \( C \).
For two components \( C_1, C_2 \in C \), \( C_1 \prec C_2 \) implies that \( C_1 \) is preferred over \( C_2 \). Throughout the examples, we will often represent an OCLP \( P \) by means of a directed acyclic graph (dag) in which the nodes represent the components and the arcs the \( \prec \)-relation, where arcs point from smaller (more preferred) to larger (less preferred) components.
**Example 2.** The decision problem from the introduction (Example 1) can easily be written as an OCLP, as shown in Figure 1. The rules in components \( P_1, P_2 \) and \( P_3 \) express the preferences in case of a small budget. The rules in \( P_4 \) express the intention to buy/configure a laptop and, because of this, a decision about its various devices should be made. In component \( P_5 \), the first rule states the possibility of a larger budget. If so, the two remaining rules allow the purchase of both a DVD-player and a CD-writer.
**Definition 2.** Let \( P \) be an OCLP. We use \( P^* \) to denote the CLP that contains all the rules appearing in (a component of) \( P \). We assume that rules in \( P^* \) are labeled by the component from which they originate and we use \( c(r) \) to denote the component\(^4\) of \( r \). The Herbrand base \( B_P \) of \( P \) is defined by \( B_P = B_{P^*} \).
An interpretation for \( P \) is any interpretation of \( P^* \). We say that a rule \( r \) in \( P \) is applicable w.r.t. an interpretation \( I \) iff \( B_r \subseteq I \); \( r \) is applied w.r.t. \( I \) iff \( r \) is applicable and \( |H_r \cap I| = 1 \).
\(^1\) In this paper we only work with total interpretations: each atom from the Herbrand base is either true or false. Bearing this in mind, it suffices to mention only those atoms which can be considered true.
\(^2\) For a set \( X \), we use \( |X| \) to denote its cardinality.
\(^3\) A relation \( R \) on a set \( A \) is a strict partial order iff \( R \) is anti-reflexive, anti-symmetric and transitive. \( R \) is pointed if an element \( a \in A \) exists such that \( aRb \) for all \( b \in A \) with \( a \neq b \).
\(^4\) Without losing generality, we can assume that a rule appears in only one component.
**Example 3.** For the OCLP in Example 2, the sets $I = \{dvd\_player, small\}$, $J = \{laptop, cd\_writer, small\}$ and $L = \{dvd\_player, larger, cd\_writer, cd\_player, laptop\}$ are all interpretations. The interpretation $I$ makes the rule $small \oplus larger \leftarrow$ applied while the applicable rule $cd\_writer \leftarrow$ is not applied.
Facing a decision means making an exclusive choice between the various alternatives that are available. If we want OCLP to model/solve decision problems, we need a mechanism for representing them. In a CLP, decisions are generated by so-called *choice rules*, i.e. rules with multiple head atoms. For OCLP, we can do something similar, as long as we also take the preference order into account. We want to leave open the option to overrule the exclusiveness of a choice when more preferred components suggest multiple alternatives (e.g. Example 1). Hence we say that an atom $a$ is an *alternative* for an atom $b$ in a component $C$ if an applicable rule exists in a component at least as preferred as $C$ containing both $a$ and $b$ in its head.
**Definition 3.** Let $I$ be an interpretation of an OCLP $P = \langle C, \prec \rangle$ with $C \in C$. The set of *alternatives* in $C$ for an atom $a \in B_P$ w.r.t. $I$, denoted $\Omega_C^I(a)$, is defined as:
$$\Omega_C^I(a) = \{b \mid \exists r \in P^* \cdot c(r) \preceq C \land B_r \subseteq I \land a, b \in H_r \text{ with } a \neq b\}.$$
**Example 4.** Reconsider Example 3. The alternatives for $cd\_rom$ in $P_2$ w.r.t. $J$ are $\Omega_{P_2}^J(cd\_rom) = \{dvd\_player, cd\_writer\}$. W.r.t. $I$, we obtain $\Omega_{P_2}^I(cd\_rom) = \emptyset$, since the choice rule in $P_4$ is not applicable. When we take $P_3$ instead of $P_2$, we obtain w.r.t. $J$: $\Omega_{P_3}^J(cd\_rom) = \emptyset$.
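Definition 3 computes, per head atom, the other head atoms of sufficiently preferred applicable rules; a direct transcription (ours) reads:

```python
def alternatives(program, at_least_as_preferred, C, a, interp):
    """Omega_C^I(a) as a set comprehension (sketch, ours).

    program is a list of (component, head, body) triples;
    at_least_as_preferred(c1, c2) encodes c1 being at least as
    preferred as c2."""
    return {b
            for comp, head, body in program
            if at_least_as_preferred(comp, C) and body <= interp and a in head
            for b in head if b != a}
```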
Fig. 1. The Configuration OCLP of Example 2

Given the alternatives in a certain context (a component and an interpretation), one naturally selects the alternative that is motivated by a more preferred rule, thus defeating the rule(s) suggesting less preferred alternatives. However, if alternatives appear in the same or unrelated components, two approaches are possible: using a skeptical strategy, one would refrain from making a decision, i.e. not selecting any of the various alternatives, while a credulous setting suggests an arbitrary choice of one of the alternatives. For both types of reasoning one can think of situations where one approach works while the other gives an incorrect, unintuitive outcome. Skeptical reasoning is practiced in American law when a jury cannot come to a unanimous decision and thus no decision is made by that trial. An example of credulous reasoning is the decision a goal-keeper faces in football when trying to stop a penalty. To accommodate this problem, we introduce a semantics for both types of reasoning. From a skeptical viewpoint, we say that a rule is defeated if one can find a better, more preferred alternative for each of its head atoms.
**Definition 4.** Let $I$ be an interpretation for an OCLP $P$. A rule $r \in P^*$ is defeated w.r.t. $I$ iff $\forall a \in H_r \cdot \exists r' \in P^* \cdot c(r') \prec c(r) \land B_{r'} \subseteq I \land H_{r'} \subseteq \Omega_{c(r)}^I(a)$.
**Example 5.** Reconsider Example 3. The rule $cd\_rom ←$ is defeated w.r.t. $J$ by the rule $cd\_writer ←$. The rule $cd\_rom \oplus cd\_writer \oplus dvd\_player ←$ is defeated w.r.t. $L$ by the combination of the rules $dvd\_player ← larger$ and $cd\_writer ← larger$.
**Example 6.** Consider the OCLP $\langle \{ P_1 = \{ a \leftarrow,\ b \leftarrow \}, P_2 = \{ a \oplus b \leftarrow \} \}, P_2 \prec P_1 \rangle$. Given the interpretation $\{b\}$, the rule $a \leftarrow$ is not defeated, as the only alternative of $a$, i.e. $b$, is not brought forward in a more preferred component.
Just as for the skeptical semantics we need to define an appropriate defeating strategy. An obvious way of doing so consists of simply dropping the condition that an alternative should be found in a more preferred component. Unfortunately, this leads to unintuitive results. To avoid this, we need to make sure that credulous defeaters are not only applicable, but also applied.
**Definition 5.** Let $I$ be an interpretation for an OCLP $P$. A rule $r \in P^*$ is c-defeated w.r.t. $I$ iff $\forall a \in H_r \cdot \exists r' \in P^* \cdot c(r) \not\prec c(r') \land r' \text{ is applied w.r.t. } I \land H_{r'} \subseteq \Omega_{c(r)}^I(a)$.
**Example 7.** While the skeptical approach makes it impossible for the rule $a \leftarrow$ of Example 6 to be defeated w.r.t. $\{b\}$, the credulous approach can defeat it.

For our model semantics, both skeptical and credulous, rules that are not satisfied (as for choice logic programs) must be (c-)defeated.
**Definition 6.** Let $P$ be an OCLP. A total interpretation $I$ is a skeptical/credulous model iff every rule in $P^*$ is either not applicable, applied or (c-)defeated w.r.t. $I$. A skeptical/credulous model $M$ is minimal iff $M$ is minimal according to set inclusion, i.e. no skeptical/credulous model $N$ of $P$ exists such that $N^+ \subset M^+$.
**Example 8.** Reconsider the interpretations $I, J, K$ and $L$ from Example 3. Only $K$ and $L$ are skeptical/credulous models. Model $L$ is not minimal due to the skeptical/credulous model $Z = \{dvd\_player, cd\_writer, laptop, larger\}$. The minimal skeptical/credulous models $K$ and $Z$ correspond to the intuitive outcomes of the problem.
**Example 9.** The program of Example 6 has no skeptical models but two credulous ones: $\{a\}$ and $\{b\}$.
The next example illustrates that the skeptical/credulous model semantics does not always provide the appropriate solutions to the decision problem at hand.
**Example 10.** Consider the ordered choice logic program $P = \langle \{P_1 = \{a \leftarrow\}, P_2 = \{b \leftarrow\}, P_3 = \{a \oplus b \leftarrow c\}\}, P_3 \prec P_2 \prec P_1 \rangle$. $P$ has two minimal skeptical/credulous models: $M = \{b, c\}$ and $N = \{a, b\}$. Clearly, $c$ is an unsupported assumption in $M$, causing $P_3$ to trigger an unwarranted choice between $a$ and $b$.
We introduce adaptations of the Gelfond-Lifschitz transformation [14] and of the reduct [16] to filter out unintended (minimal) models containing unsupported atoms. This results in the skeptical/credulous answer set semantics.
**Definition 7.** Let $M$ be a total interpretation for an OCLP $P$. The Gelfond-Lifschitz transformation (resp. reduct) for $P$ w.r.t. $M$, denoted $P^M$ (resp. $P^M_c$), is the CLP obtained from $P$ by removing all (c-)defeated rules. $M$ is called a skeptical (resp. credulous) answer set for $P$ iff $M$ is a minimal model for $P^M$ (resp. $P^M_c$).
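Definition 7 lends itself to a naive generate-and-test procedure: strip the defeated rules and check that the candidate is a minimal model of what remains. The sketch below (again our own illustration, reusing the rule encoding of the earlier sketch and taking the defeat test as a parameter) checks satisfaction in the CLP sense: an applicable rule must have exactly one true head atom, so applicable constraints, having empty heads, always fail.

```python
from itertools import chain, combinations

def reduct(rules, interp, defeat_test):
    """P^M of Definition 7: drop every rule (c-)defeated w.r.t. interp."""
    return [r for r in rules if not defeat_test(rules, r, interp)]

def is_clp_model(rules, interp):
    """Every applicable rule must have exactly one true head atom."""
    return all(not body <= interp or len(heads & interp) == 1
               for (_, heads, body) in rules)

def is_answer_set(rules, interp, defeat_test):
    red = reduct(rules, interp, defeat_test)
    if not is_clp_model(red, interp):
        return False
    # minimality: no model of the reduct is strictly contained in interp
    proper_subsets = chain.from_iterable(
        combinations(sorted(interp), k) for k in range(len(interp)))
    return not any(is_clp_model(red, set(s)) for s in proper_subsets)
```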
Although both answer set semantics produce models (skeptical or credulous ones) of the program, they differ in whether the models they produce are minimal. Just as for answer sets of semi-negative logic programs, we find that skeptical answer sets are minimal skeptical models. For extended disjunctive logic programs, the answer set semantics is not minimal [16]. The same applies to credulous answer sets of ordered choice logic programs, as demonstrated by the following example.
**Example 11.** Consider the program $P = \langle \{P_1 = \{r_1 : g \leftarrow\}, P_2 = \{r_2 : p \oplus d \leftarrow;\ r_3 : g \oplus p \leftarrow;\ r_4 : g \oplus d \leftarrow\}\}, P_2 \prec P_1 \rangle$. Consider $M_1 = \{g\}$ and $M_2 = \{g, d\}$. Clearly, $M_1^+ \subset M_2^+$, while both interpretations are credulous answer sets for $P$. For $M_1$, we have that $P_c^{M_1} = \{g \leftarrow;\ g \oplus d \leftarrow;\ g \oplus p \leftarrow\}$, for which it can easily be verified that $M_1$ is a minimal model. The program $P_c^{M_2} = \{p \oplus d \leftarrow;\ g \oplus p \leftarrow\}$ has two minimal models: $\{p\}$ and $\{g, d\}$. Note that $M_2$ is a credulous model because the c-defeater w.r.t. $M_1$ has become c-defeated w.r.t. $M_2$, i.e. the justification in $M_1$ for c-defeating $p \oplus d \leftarrow$ has disappeared in $M_2$.
Non-minimal credulous answer sets appear when the program contains inconsistencies on a decision level: in the above example the following choices have to be made: $\{ p, d \}$, $\{ g, p \}$ and $\{ g, d \}$. Because of the program’s construction, one can choose either one or two alternatives and c-defeating will make the choice justifiable.
### 4 Implementation
For the last five years, answer set programming has gained popularity. One of the main forces behind this is the growing efficiency of answer set solvers like Smodels ([12]) and DLV ([17]).
In this section, we propose a mapping, for both semantics, to semi-negative logic programs. Since both answer set solvers support this type of program, the transformation can be used for constructing an OCLP front-end. After introducing a naive mapping, we propose a number of general optimizations, not tied to any particular answer set solver, to improve the efficiency of this algorithm.
---
6 The definition in [8] states a stable model, but since both are identical for CLP, we have opted in this paper to use the notion of minimal model instead.
#### 4.1 Skeptical Mapping
The skeptical answer set semantics is based on the notion of defeat. If we want to map our formalism to a language which does not support this, we need a way to encode it. This implies anticipating which combinations of rules could be capable of defeating a rule and which ones are not.
The definition of defeat relies strongly on the notion of alternatives: rules can only be defeated by rules containing alternatives of their head atoms. Therefore, anticipating defeaters also implies predicting alternatives. According to Definition 3, $b$ is an alternative of $a$ in a component $C$ if one can find an applicable choice rule at least as preferred as $C$ containing both $a$ and $b$ in the head. This implies that even without an interpretation we can find out which atoms might be or could become alternatives; it only remains to be checked whether the rule is applicable. These condition-based alternatives are referred to as possible future alternatives and are defined more formally below.
**Definition 8.** Let $P$ be an OCLP, let $C \in \mathcal{C}$ be a component of $P$ and let $a \in B_P$. The set of possible future alternatives of $a$ in $C$, denoted $A_C^P(a)$, is defined as:
$$A_C^P(a) = \{(b, B_r) \mid \exists r \in P^* \cdot c(r) \preceq C \land a, b \in H_r \land a \neq b\}.$$
**Example 12.** Consider the OCLP $P = \langle \{P_1 = \{r_1 : a \leftarrow;\ r_2 : f \leftarrow\}, P_2 = \{r_3 : a \oplus b \oplus c \leftarrow d;\ r_4 : a \oplus d \leftarrow f;\ r_5 : d \oplus c \leftarrow\}\}, P_2 \prec P_1 \rangle$. The possible future alternatives of $a$ in $P_1$ are $A_{P_1}^P(a) = \{(b, \{d\}), (c, \{d\}), (d, \{f\})\}$.
The next theorem demonstrates that alternatives can be expressed in terms of possible future alternatives.
**Theorem 1.** Let $P$ be an OCLP, let $C \in \mathcal{C}$ be a component of $P$, let $a \in B_P$ and let $I$ be an interpretation for $P$. Then $\Omega_C^I(a) = \{b \mid (b, S) \in A_C^P(a) \land S \subseteq I\}$.
Having these possible future alternatives allows us to detect possible future defeaters in much the same way as we detect standard defeaters (Definition 4). The only extra bit we need is to collect all the conditions on the alternatives. This collection then acts as the condition for the defeating rule.
**Definition 9.** Let $P$ be an OCLP, let $C \in \mathcal{C}$ be a component of $P$ and let $a \in B_P$. The set of possible future defeaters of $a$ in $C$, denoted $D_C^P(a)$, is defined as:
$$D_C^P(a) = \{(r, S) \mid \exists r \in P^* \cdot c(r) \prec C \land \forall b \in H_r \cdot (b, B_b) \in A_C^P(a) \land S = B_r \cup \bigcup_{b \in H_r} B_b\}.$$
The set of possible future defeaters of a rule $r \in P^*$, denoted $D^P(r)$, is defined as:
$$D^P(r) = \{(R, S) \mid \forall a \in H_r \cdot (r_a, S_a) \in D_{c(r)}^P(a),\ R = \{r_a \mid a \in H_r\},\ S = \bigcup_{a \in H_r} S_a\}.$$
Having the possible future defeaters of an atom in a certain component, we can easily find the combinations that can act as possible future defeaters of a rule in a certain component. We simply compute the set of possible future defeaters of each of the head atoms of this rule in the rule's component. The set of all possible ways of choosing one element from each of these sets gives us the possible future defeaters of the rule. In other words, we obtain a number of possible future defeaters of a rule equal to the product of the sizes of the sets of possible future defeaters of its head atoms.
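This "one choice per head atom" construction is a Cartesian product, which makes Definition 9 easy to prototype. The sketch below is our own illustration, with the same hypothetical rule encoding as before and the program of Example 12; it folds the defeater's own body into $S$ alongside the alternatives' conditions, and reproduces $D^P(r_1)$ from Example 13 below.

```python
from itertools import product

# Rules as (component, heads, body); smaller component = more preferred.
P = [(2, frozenset("a"), frozenset()),        # r1: a <-
     (2, frozenset("f"), frozenset()),        # r2: f <-
     (1, frozenset("abc"), frozenset("d")),   # r3: a (+) b (+) c <- d
     (1, frozenset("ad"), frozenset("f")),    # r4: a (+) d <- f
     (1, frozenset("dc"), frozenset())]       # r5: d (+) c <-

def future_alternatives(rules, comp, a):      # A_C^P(a), Definition 8
    return {(b, body) for (c, heads, body) in rules
            if c <= comp and a in heads for b in heads if b != a}

def atom_defeaters(rules, comp, a):           # D_C^P(a), Definition 9
    alts = future_alternatives(rules, comp, a)
    out = set()
    for rid, (c, heads, body) in enumerate(rules):
        if c < comp and heads:                # strictly more preferred component
            picks = [{s for (b, s) in alts if b == h} for h in heads]
            if all(picks):                    # every head atom is an alternative
                out |= {(rid, body | frozenset().union(*combo))
                        for combo in product(*picks)}
    return out

def rule_defeaters(rules, rule_id):           # D^P(r), Definition 9
    comp, heads, _ = rules[rule_id]
    per_atom = [atom_defeaters(rules, comp, a) for a in heads]
    return {(frozenset(r for r, _ in combo),
             frozenset().union(*(s for _, s in combo)))
            for combo in product(*per_atom)}

print(rule_defeaters(P, 0))  # {(frozenset({4}), frozenset({'d','f'}))}: ({r5}, {d, f})
```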
**Example 13.** Looking back at the program $P$ of Example 12, the atom $a$ has one possible future defeater in $P_1$: $D_{P_1}^P(a) = \{(r_5, \{d, f\})\}$. In the same component, $c$ has a possible future defeater $D_{P_1}^P(c) = \{(r_4, \{d, f\})\}$. None of the other atoms in the program has possible future defeaters in any of the relevant components. The rule $r_1$ is the only rule with possible future defeaters, namely $D^P(r_1) = \{(\{r_5\}, \{d, f\})\}$.
Clearly, possible future defeaters can be used for expressing interpretation-dependent defeaters.
**Theorem 2.** Let $P$ be an OCLP and let $I$ be an interpretation for it. A rule $r \in P^*$ is defeated w.r.t. $I$ iff $\exists (R, S) \in D^P(r) \cdot S \subseteq I \land \forall r' \in R \cdot B_{r'} \subseteq I$.
These possible future defeaters are the key to mapping OCLPs to semi-negative logic programs. We only need to turn the information that makes possible future defeaters into actual defeaters, i.e. that they have to be applicable, into a condition. To make this possible, we introduce for each non-constraint rule $r$ in the program two new atoms: $d_r$ and $a_r$. The former indicates whether the rule $r$ is defeated, while the truth value of the latter indicates the applicability of the rule.
**Definition 10.** Let $P$ be an OCLP. Then, the logic program $P_-$ is defined as follows:

1. $|H_r| = 0$: $r \in P_-$
2. $|H_r| \geq 1$:
   (a) $h \leftarrow B_r, \neg d_r, \neg(H_r \setminus \{h\}) \in P_-$ : $\forall h \in H_r$
   (b) $a_r \leftarrow B_r \in P_-$
   (c) $d_r \leftarrow C \in P_-$ with $C = S \cup \bigcup_{r' \in R} a_{r'}$ such that $(R, S) \in D^P(r)$
   (d) $\leftarrow h, g, B_r, \neg d_r \in P_-$ : $\forall h, g \in H_r$ with $h \neq g$
Since constraints are not involved in the defeating process, we can simply copy them to the corresponding logic program. For the answer set semantics of ordered choice logic programs, we need, among other things, that each applicable, undefeated rule admits exactly one head atom. Rules of type a) and d) make sure that the corresponding rules in the logic program do not violate this property. The rules of type b) indicate which original rules are applicable. The c)-rules are probably the most difficult ones: they express when a rule should or could be considered defeated. Theorem 2 gives us a mechanism for relating possible future defeaters to actual defeaters. Given a possible future defeater $(R, S)$ for a rule $r$, we simply have to make sure that all rules in $R$ are applicable and that all atoms in $S$ are true with respect to the current interpretation. With rules of type b), we can express the former using $a_{r'}$. Combining all of this, we can signal in the transformed program that a rule is defeated using a rule $d_r \leftarrow a_{r_1}, \ldots, a_{r_n}, S$ with $r_i \in R$ and $n = |R|$. Whenever an answer set of the transformed program makes $d_r$ true, we know that the original rule $r$ is defeated. The construction with rules of type b) makes sure that the reverse also holds.
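To make Definition 10 concrete, here is a sketch that emits the four rule types for a single non-constraint rule as Smodels-style text. It is our own illustration: the rule naming, the `d_r`/`a_r` spelling and the defeater set (precomputed here for $r_1$ of Example 12) are assumptions, not the OCT output format.

```python
def translate_rule(rid, heads, body, defeaters):
    """Emit the a)-d) rules of Definition 10 for one rule with |H_r| >= 1."""
    out = []
    for h in sorted(heads):                             # a) rules
        lits = sorted(body) + [f"not d_r{rid}"] + \
               [f"not {g}" for g in sorted(heads - {h})]
        out.append(f"{h} :- " + ", ".join(lits))
    out.append(f"a_r{rid}" +                            # b) rule
               (f" :- {', '.join(sorted(body))}" if body else ""))
    for R, S in defeaters:                              # c) rules
        lits = sorted(S) + [f"a_r{r}" for r in sorted(R)]
        out.append(f"d_r{rid} :- " + ", ".join(lits))
    for h in sorted(heads):                             # d) constraints
        for g in sorted(heads):
            if h < g:
                out.append(":- " + ", ".join(
                    [h, g] + sorted(body) + [f"not d_r{rid}"]))
    return out

# r1 of Example 12: "a <-" with the possible future defeater ({r5}, {d, f}).
print("\n".join(translate_rule(1, {"a"}, set(), [({5}, {"d", "f"})])))
# a :- not d_r1
# a_r1
# d_r1 :- d, f, a_r5
```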
**Example 14.** The corresponding logic program $P_-$ of the OCLP of Example 12 looks like:
$$
\begin{align*}
a &\leftarrow \neg d_{r_1} & a &\leftarrow f, \neg d, \neg d_{r_4} & &\leftarrow a, b, d, \neg d_{r_3} \\
f &\leftarrow \neg d_{r_2} & d &\leftarrow f, \neg a, \neg d_{r_4} & &\leftarrow a, c, d, \neg d_{r_3} \\
a &\leftarrow d, \neg b, \neg c, \neg d_{r_3} & d &\leftarrow \neg c, \neg d_{r_5} & &\leftarrow b, c, d, \neg d_{r_3} \\
b &\leftarrow d, \neg a, \neg c, \neg d_{r_3} & c &\leftarrow \neg d, \neg d_{r_5} & &\leftarrow a, d, f, \neg d_{r_4} \\
c &\leftarrow d, \neg a, \neg b, \neg d_{r_3} & a_{r_1} &\leftarrow & &\leftarrow d, c, \neg d_{r_5} \\
a_{r_2} &\leftarrow & a_{r_3} &\leftarrow d & d_{r_1} &\leftarrow a_{r_5}, d, f \\
a_{r_4} &\leftarrow f & a_{r_5} &\leftarrow &&
\end{align*}
$$
The original OCLP of Example 12 has two skeptical answer sets, $\{f, d, b\}$ and $\{f, c, a\}$, which correspond exactly with the two answer sets, $\{a_{r_1}, a_{r_2}, a_{r_3}, a_{r_4}, a_{r_5}, d_{r_1}, f, d, b\}$ and $\{a_{r_1}, a_{r_2}, a_{r_4}, a_{r_5}, f, c, a\}$, of $P_-$.
**Theorem 3.** Let $P$ be an OCLP and $P_-$ be its corresponding logic program. Then, a one-to-one mapping exists between the skeptical answer sets $M$ of $P$ and the answer sets $N$ of $P_-$, in such a way that $N = M \cup \{a_r \mid \exists r \in P \cdot |H_r| \geq 1 \land B_r \subseteq M\} \cup \{d_r \mid \exists r \in P \cdot r \text{ is defeated w.r.t. } M\}$.
#### 4.2 Credulous Mapping
To obtain the credulous answer set semantics for OCLPs, we propose a similar mapping to semi-negative logic programs. The only difference between the skeptical and the credulous semantics is the way they handle defeat. For the credulous version, we need to make sure that we look for c-defeaters in all components which are not less preferred than the rule we wish to defeat. Furthermore, we have to make sure that c-defeaters are applied and not just applicable, as is the case for defeaters. The former will be encoded by means of possible future c-defeaters, while the latter will be translated into a different style of $a_r$ rules in the mapping.
The definition of a possible future c-defeater is identical to that of its skeptical counterpart, except that it looks for rules in all components which are not less preferred.
**Definition 11.** Let $P$ be an OCLP, let $C \in \mathcal{C}$ be a component of $P$ and let $a \in B_P$. The set of possible future c-defeaters of $a$ in $C$, denoted $\mathcal{F}_C^P(a)$, is defined as:
$$\mathcal{F}_C^P(a) = \{(r, S) \mid \exists r \in P^* \cdot C \not\prec c(r) \land \forall b \in H_r \cdot (b, B_b) \in A_C^P(a) \land S = B_r \cup \bigcup_{b \in H_r} B_b\}.$$
The set of possible future c-defeaters of a rule $r \in P^*$, denoted $\mathcal{F}^P(r)$, is defined as:
$$\mathcal{F}^P(r) = \{(R, S) \mid \forall a \in H_r \cdot (r_a, S_a) \in \mathcal{F}_{c(r)}^P(a),\ R = \{r_a \mid a \in H_r\},\ S = \bigcup_{a \in H_r} S_a\}.$$
Just as before, c-defeaters can be expressed in terms of possible future c-defeaters.
**Theorem 4.** Let $P$ be an OCLP and let $I$ be an interpretation for it. A rule $r \in P^*$ is c-defeated w.r.t. $I$ iff $\exists (R, S) \in \mathcal{F}^P(r) \cdot S \subseteq I \land \forall r' \in R \cdot r' \text{ is applied w.r.t. } I$.
**Definition 12.** Let $P$ be an OCLP. Then, the logic program $P_{\leq}$ is defined as follows:

1. $|H_r| = 0$: $r \in P_{\leq}$
2. $|H_r| \geq 1$:
   (a) $h \leftarrow B_r, \neg d_r, \neg(H_r \setminus \{h\}) \in P_{\leq}$ : $\forall h \in H_r$
   (b) $a_r \leftarrow B_r, h, \neg(H_r \setminus \{h\}) \in P_{\leq}$ : $\forall h \in H_r$
   (c) $d_r \leftarrow C \in P_{\leq}$ with $C = S \cup \bigcup_{r' \in R} a_{r'}$ such that $(R, S) \in \mathcal{F}^P(r)$
The credulous mapping is very similar to the skeptical one, but there are a couple of subtle differences. An obvious difference is the use of possible future c-defeaters instead of their skeptical counterparts (the c)-rules). The second change is to the rules implying $a_r$ (the b)-rules). Previously they were used to indicate applicability, the necessary condition for defeat. Since c-defeat works with applied defeaters, we need to make sure that $a_r$ is considered true only when $r$ is applied. The less obvious change is the absence of the rules of type d). Since a rule can only be applied when exactly one head atom is considered true, and because $a_r$ should only be considered true in this particular case, they are no longer necessary.
**Example 15.** Reconsider the OCLP from Example 11. If we use the mapping from Definition 12, we obtain the following program:
$$
\begin{align*}
g &\leftarrow \neg d_1 & a_1 &\leftarrow g & d_1 &\leftarrow a_2 \\
p &\leftarrow \neg d, \neg d_2 & a_2 &\leftarrow p, \neg d & d_2 &\leftarrow a_3, a_4 \\
d &\leftarrow \neg p, \neg d_2 & a_2 &\leftarrow d, \neg p & d_3 &\leftarrow a_2, a_4 \\
g &\leftarrow \neg p, \neg d_3 & a_3 &\leftarrow g, \neg p & d_4 &\leftarrow a_2, a_3 \\
p &\leftarrow \neg g, \neg d_3 & a_3 &\leftarrow p, \neg g \\
g &\leftarrow \neg d, \neg d_4 & a_4 &\leftarrow g, \neg d \\
d &\leftarrow \neg g, \neg d_4 & a_4 &\leftarrow d, \neg g
\end{align*}
$$
The answer sets of this program correspond exactly to the credulous answer sets of the original program. The newly introduced atoms make sure that the answer set semantics of the translation remains minimal, while the credulous answer set semantics of OCLPs is clearly not.
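As a sanity check of the mapping, the program of Example 15 is small enough to enumerate stable models by brute force. The sketch below is our own illustration: it encodes the rules as (head, positive body, negative body) triples, applies the standard Gelfond-Lifschitz reduct, and prints each stable model with its projection onto the original atoms; among the projections one finds the credulous answer sets $\{g\}$ and $\{g, d\}$ discussed in Example 11.

```python
from itertools import combinations

# Example 15's translation: (head, positive body, negative body).
R = [("g", (), ("d1",)),     ("a1", ("g",), ()),      ("d1", ("a2",), ()),
     ("p", (), ("d", "d2")), ("a2", ("p",), ("d",)),  ("d2", ("a3", "a4"), ()),
     ("d", (), ("p", "d2")), ("a2", ("d",), ("p",)),  ("d3", ("a2", "a4"), ()),
     ("g", (), ("p", "d3")), ("a3", ("g",), ("p",)),  ("d4", ("a2", "a3"), ()),
     ("p", (), ("g", "d3")), ("a3", ("p",), ("g",)),
     ("g", (), ("d", "d4")), ("a4", ("g",), ("d",)),
     ("d", (), ("g", "d4")), ("a4", ("d",), ("g",))]

atoms = sorted({h for h, _, _ in R} | {x for _, p, n in R for x in p + n})

def least_model(definite):
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def is_stable(M):
    reduct = [(h, pos) for h, pos, neg in R if not set(neg) & M]
    return least_model(reduct) == M

for k in range(len(atoms) + 1):
    for S in combinations(atoms, k):
        if is_stable(set(S)):
            print(set(S) & {"g", "p", "d"}, "<-", set(S))
```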
**Theorem 5.** Let $P$ be an OCLP and $P_{\leq}$ be its corresponding logic program. Then, a one-to-one mapping exists between the credulous answer sets $M$ of $P$ and the answer sets $N$ of $P_{\leq}$, in such a way that $N = M \cup \{a_r \mid \exists r \in P \cdot |H_r| \geq 1 \land B_r \subseteq M \land |H_r \cap M| = 1\} \cup \{d_r \mid \exists r \in P \cdot r \text{ is c-defeated w.r.t. } M\}$.
#### 4.3 Implementing an OCLP Front-End to Smodels
To demonstrate the theoretical mapping, and to serve as a basis for future experimentation and research, a simple language was developed to allow OCLPs to be processed by computer. A compiler\(^7\) was created to parse the input language and interface with the Smodels ([12]) API, which is then used to compute the answer sets. The compiler OCT is available under the GPL ("open source") from http://www.cs.bath.ac.uk/~mdv/oct/.
#### 4.4 Optimizations
Definitions 10 and 12 give us a theoretical basis for a program that converts OCLPs into semi-negative logic programs, but a few changes and optimizations are necessary before we have an effective algorithm for converting and solving OCLPs.
\(^7\) Here compiler is used in the broader sense of an automated computer-language translation system rather than a traditional procedural-to-machine-code system.
Optimization is used here in the sense of compiler optimization: the output will not be 'optimal', it will be improved. Given that all of the information required to create answer sets exists at the OCLP level, it would be possible to produce a truly 'optimal' output: the answer sets of the OCLP themselves. However, an answer set solver is being used deliberately, to reduce the amount of logic needed when processing OCLPs and to take advantage of the optimizations and heuristics already incorporated in answer set solvers. Therefore, we shall only look for simple optimizations, based on information obtained when creating the semi-negative logic program, to reduce the number of rules and atoms in the output, allowing answer set solvers to produce solutions more effectively. To this extent, the word optimization refers to the whole process and not just the compiler.
There are two key categories of optimization. The first consists of changes in how an individual OCLP rule is translated. These *intra-transform* optimizations do not affect the translations of any other rules and can thus be applied as the rules are being translated. The other category are *inter-transform* optimizations, and these are slightly more complicated. They can remove some of the simple interactions between rules and simplify the problem. However, they affect the translation of other rules and thus cannot be applied immediately (removing an atom from the system completely is not much use if you have already used it) but require a separate pass.
The first intra-transform improvement that can be made is to reduce the number of times that the body of any rule is included in the output. This is done by adding an extra atom $b_r$ with a rule $b_r \leftarrow B_r$ for each rule and then using $b_r$ instead of $B_r$ in the other rules, essentially 'factoring out' the condition that the body must apply. This is of course only a significant saving if the rule has more than one element in the body. In the case of the skeptical mapping this can be combined with $a_r$.
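As a toy illustration (the rule below is ours, not taken from the paper), consider a rule $r : a \oplus b \leftarrow c, d$. Without factoring, the a)-rules and the d)-constraint each repeat the two-atom body; with the extra atom, the relevant part of the translation becomes:

$$
\begin{align*}
b_r &\leftarrow c, d \\
a &\leftarrow b_r, \neg d_r, \neg b \\
b &\leftarrow b_r, \neg d_r, \neg a \\
&\leftarrow a, b, b_r, \neg d_r
\end{align*}
$$

so the body $c, d$ now occurs only once; in the skeptical mapping the already-generated $a_r \leftarrow B_r$ can play the role of $b_r$ directly.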
The next improvements can be made while creating the c)-rules for every rule in the OCLP. If there are no possible future (c-)defeaters for at least one of the elements in the head of a rule, then no rules of the form $d_r \leftarrow C$ will be generated, and $\neg d_r$ can be dropped from all other rules created. Conversely, if any rule of the form $d_r \leftarrow$ would be generated, then any rule that would contain $\neg d_r$ can be ignored, as the rule is considered to be automatically defeated. At the intra-transform level, rules of this sort can only be located while performing a skeptical mapping, as the applicability of an arbitrary rule can be determined easily, but whether it is applied or not is non-trivial. This improvement implies that before any other rule for $r$ is created, the c)-rules should be constructed.
There are several inter-transform optimizations. To add to the problems of using these operations, applying them can result in more rules to which they can be applied, essentially requiring looping until no more improvements can be made. To optimize the entire system, it suffices that the compiler only uses the information it directly obtains while completing the transformation; the rest can be left to the other components.
In order of application and increasing complexity:
- Propagation - Semi-negative logic programming rules of the form $a \leftarrow$ state a fact about the system, so this can be used to simplify the system (a small sketch follows this list). All references to $a$ can be removed from the bodies of all of the other rules in the system, as it will appear in any answer set\(^8\). Any rule with a reference to $\neg a$ in it can be removed for similar reasons. If the atom $a$ is a constructed atom used in the transformation (i.e. $a_r$ or $d_r$ for some $r \in P$), then the definition $a \leftarrow$ can be removed as well, as it does not add anything to the final answer set (in OCLP terms).
- Removal - Atoms of the form $a_r$, $b_r$ and $d_r$ can be removed if they are only found in the heads of rules. This is because all atoms of this form are removed from the stable model solution when it is mapped back to an OCLP solution; thus, if they do not form part of a condition on another rule, they do not need to be calculated. The list of which of these atoms are used can be generated while the translation of rules is being made, thus saving another pass. However, this is not as much of an improvement as it may first seem. In the skeptical case, rules using the generated atom $a_r$ are used as an alias for $b_r$, or will be removed by propagation, in all cases except a body of size 1 (which could also be recognized and removed via aliasing optimizations). The credulous case may, however, benefit from the removal of some $a_r$ atoms. The nature of $b_r$, and the optimizations applied to see whether a rule can be defeated before generating atoms of the type $d_r$, mean these are only likely to be removed like this after other optimizations have been applied (which in turn will require an extra pass to work out which generated atoms are used).
- Aliasing - Each rule of the form $a \leftarrow b$ essentially makes $a$ an alias for $b$. Thus any rule whose body contains both $a$ and $b$ can safely drop $a$. The other forms of aliasing and their consequences depend on what type of atom $a$ is.
- $a \leftarrow b$ ($a$ is an atom from the original OCLP): if $a$ appears in the head of only one rule, then all occurrences of $a$ in the bodies of other rules can be replaced with $b$.
- $a_r \leftarrow b$: if there is only one rule with $a_r$ in the head (as will happen with a skeptical and some credulous mappings), all occurrences of $a_r$ can be replaced and the rule then removed completely, as it adds nothing to the final answer set.
- $d_r \leftarrow b$: there may well be more than one rule giving conditions for $d_r$ to be true, but if this is not the case then all occurrences of $d_r$ can be replaced with $b$ and this rule removed.
- $b_r \leftarrow b$: although these should not be generated directly, it is possible they will arise through propagation. Again, there will be only one rule with $b_r$ in the head, so it can be replaced in all bodies and then removed.
When replacing an atom, a union with the body of the rule is needed, as it is possible that the rule already contained $b$ (of course this can then create rules of the form $a \leftarrow b$, which can be optimized again; however, it will not create rules that can be reduced via propagation). This stage can also be used to eliminate any duplicate rules which might arise as the result of the mapping. This is also the ideal place to remove useless rules like $b \leftarrow b, B$ and $a \leftarrow b, \neg b, B$, which add nothing to the semantics of our program.
\(^8\) Care must be taken with the case of removing the last atom from the body of a constraint in this fashion, as rather than reducing the complexity of the problem it signifies that the program has no answers. E.g. a semi-negative logic program containing the rules $b \leftarrow$ and $\leftarrow b$ will have no solutions.
- Factoring - For every pair of rules $a \leftarrow B$ and $c \leftarrow D$, if $|B \cap D| \geq 2$ the common elements can be 'factored out'. For example:
\[
\begin{align*}
a & \leftarrow b_1, b_2, b_3, b \\
c & \leftarrow b_1, b_2, b_3, b
\end{align*}
\]
Gives
\[
\begin{align*}
a & \leftarrow e, b \\
c & \leftarrow e, b \\
e & \leftarrow b_1, b_2, b_3
\end{align*}
\]
This transformation should produce more compact rules (in terms of the total number of atoms) when dealing with OCLPs that have more complex possible future (c-)defeaters. However, the order in which pairs of rules should be considered, and the implications of common expressions shared between more than two blocks, make the application order very difficult to calculate quickly. It is possible to apply this to a smaller degree to c)-rules during the transformation process if the possible future (c-)defeaters of each atom are handled individually. Constructing the possible future (c-)defeaters on a rule-by-rule basis does not give fine enough 'granularity' to apply this kind of optimization.
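As a flavour of the inter-transform pass, here is a sketch of the Propagation step alone (our own illustration; rules are (head, positive body, negative body) triples with head `None` for constraints, and `generated` names the atoms introduced by the mapping). Note how repeated passes are needed, exactly as described above, and how footnote 8's caveat shows up as a constraint whose body empties.

```python
def propagate(rules, generated):
    """One pass of the Propagation optimization over a semi-negative program."""
    facts = {h for h, pos, neg in rules if h and not pos and not neg}
    out = []
    for h, pos, neg in rules:
        if h in facts and not pos and not neg:
            if h in generated:        # a_r/b_r/d_r facts add nothing, drop them
                continue
            out.append((h, pos, neg))
        elif facts & set(neg):        # body contains "not a" for a fact a: dead rule
            continue
        else:                         # drop satisfied positive literals
            out.append((h, tuple(x for x in pos if x not in facts), neg))
    return out

rules = [("a", (), ()),               # a <-
         ("b", ("a",), ()),           # b <- a
         ("c", (), ("a",)),           # c <- not a   (removed: a is a fact)
         (None, ("b",), ())]          # <- b
step1 = propagate(rules, generated=set())
print(step1)                          # b <- has become a fact ...
print(propagate(step1, generated=set()))
# ... and the second pass empties the constraint: the program has no answers.
```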
### 5 Relationship to Other Approaches
Our formalism shows similarities with ordered logic programming [13, 15, 5], where the latter supports disjunction (in the head) and also provides a skeptical and a credulous approach. However, defeat there is restricted to rules with contradictory heads, making it difficult to represent more complex decisions. In [4], preference in extended disjunctive logic programming is considered. As far as overriding is concerned, the technique corresponds rather well with our skeptical defeating, but, again, alternatives are limited to an atom and its (classical) negation.
To reason about updates of generalized logic programs, extended logic programs without classical negation, [1] introduces dynamic logic programs. A stable model of such a dynamic logic program is a stable model of the generalized program obtained by removing the rejected rules. The definition of a rejected rule corresponds to our definition of a defeated rule when \( a \) and \( \neg a \) are considered alternatives. A similar system is proposed in [11], where sequences are based on extended logic programs, and defeat is restricted to rules with opposing heads. The semantics is obtained by mapping to a single extended logic program containing expanded rules such that defeated rules become blocked in the interpretation of the “flattened” program.
In [8], a mapping from extended logic programs to OCLP was presented. A very similar mapping allows us to map both dynamic logic programs and sequences of extended logic programs to OCLP.
[2] added a system of preferences to the dynamic logic programs of [1]. This preference is used to select the most preferred stable models. A similar mechanism is also used by [3] to obtain preferred answer sets: preferences are used to filter out unwanted candidate models; unlike in OCLP, they are not used during model creation.
[18] also proposes a formalism that uses the order among rules to induce an order on the answer sets of inconsistent programs, making it unclear how to represent decisions. Along the same lines, [10] proposes logic programs with compiled preferences, where preferences may appear in any part of a rule. For the semantics, [10] maps the program to an extended logic program.
### 6 Conclusions and Directions for Future Research
In this paper we proposed a mechanism for transforming ordered choice logic programs into semi-negative logic programs while preserving, depending on the transformation, the skeptical or the credulous answer set semantics. Having such a transformation allows an implementation of OCLP on top of answer set solvers like Smodels ([12]) and DLV ([17]). The mapping and the optimizations we proposed are very general and not directed towards any particular answer set solver. In the future we plan to experiment with the special constructs provided by the different implementations. It would be interesting to find out whether incorporating them into the output of our compiler would improve the efficiency of the entire system. We think, for example, of the disjunctive rules provided by DLV, which would reduce the number of rules of type a), while the special choice construct of Smodels would reduce the number of rules of both type a) and type d). Although such constructs would reduce the number of rules in the output of our compiler, this does not automatically make the code more efficient; that depends on what these systems do with them. If they have a special mechanism for handling them, this would indeed mean a gain in effectiveness. If, however, they translate everything back to standard rules, using them would only have a negative effect. It also introduces additional complications into the mapping and the inter-transform optimizations and may limit their effectiveness.
Previously, OCLP was used to describe and to reason about game theory ([8, 9]). To this end, we used a special class of OCLPs: each atom appears exactly once in a choice rule and none of the choice rules can be defeated. Combining this knowledge with the mapping of OCLP to logic programs, we can create a game theory tailored front-end to answer set solvers.
In [9], we proposed a multi-agent system where the knowledge and beliefs of the agents are modeled by OCLPs. The agents communicate by sending answer sets, skeptical or credulous, to each other. The notion of an evolutionary fixpoint shows how the various agents reasoned in order to come to their final conclusions. Having an implementation of OCLP would allow us to implement such multi-agent systems and run experiments in various domains. One possibility would be to incorporate this knowledge into Carrel ([19]), a multi-agent system for organ and tissue exchange.
References
---
The Suspension Notation for Lambda Terms and its Use in Metalanguage Implementations
Gopalan Nadathur \(^{1,2}\)
Department of Computer Science and Engineering
University of Minnesota
Minneapolis, MN 55455, U.S.A
Abstract
Many metalanguages and logical frameworks have emerged in recent years that use the terms of the lambda calculus as data structures. A common set of questions governs the suitability of a representation for lambda terms in the implementation of such systems: \(\alpha\)-convertibility must be easily recognizable, sharing in reduction steps, term traversal and term structure must be possible, comparison and unification operations should be efficiently supported, and it should be possible to examine terms embedded inside abstractions. Explicit substitution notations for lambda calculi provide a basis for realizing such requirements. We discuss here the issues related to using one such notation—the suspension notation of Nadathur and Wilson—in this capacity. This notation has been used in two significant practical systems: the Standard ML of New Jersey compiler and the Teyjus implementation of \(\lambda\)Prolog. We expose the theoretical properties of this notation, highlight pragmatic considerations in its use in implementing operations such as reduction and unification and discuss its relationship to other explicit substitution notations.
### 1 Introduction
Metalanguages and logical frameworks manipulate a variety of symbolic objects such as formulas, programs, proofs and types whose structures naturally involve the notion of binding. Lambda terms have been found to be useful in capturing the abstract syntax of such objects. Suppose, for example, that we wish to represent the formula \(\forall x((p \ x) \vee (q \ c))\) in which \(p\) and \(q\) are predicate names and \(c\) is a constant. Noting that a quantifier plays the dual role of determining a scope and of making a predication, the essential structure of this formula can be captured by the lambda term \((all \ (\lambda x \ or((p \ x), (q \ c))))\); in this
---
1 This work has been partially supported by the NSF under the grant CCR-0096322.
2 Email: gopalan@cs.umn.edu
©2003 Published by Elsevier Science B. V.
term, all is a constructor that represents universal quantification and or is a constructor that represents disjunction. The explicit treatment of binding in this representation makes for a simple and transparently correct implementation of several logical operations on formulas. Thus, the task of instantiating with $t$ the quantifier in a formula represented by the term $(all\ P)$ is realized immediately by writing the term $(P\ t)$. Actual substitution is carried out, with all the necessary renamings, by the $\beta$-reduction operation on lambda terms. Similarly, structure analysis of formulas that is sensitive to binding can be performed through an enhanced unification operation. For example, suppose that we wish to recognize that the given formula is one that has a universal quantification over a disjunction where the quantified variable does not appear in the second disjunct. This property can be ascertained by attempting to unify the term representing it with the 'template' $(all\ (\lambda x\ or((P\ x), Q)))$ in which $P$ and $Q$ are instantiatable variables. The variable $Q$ here cannot be substituted for in such a way that the second disjunct comes to depend on the quantifier and will therefore only match with the 'right' kind of term.

The programming convenience of such higher-order abstract syntax must, of course, be complemented by an efficient representation for lambda terms within the implementation of the relevant metalanguage or logical framework. While lambda term realizations have long been of interest in the functional programming context, the present intensional use of these terms places new constraints on adequate representations. Thus, the comparison of lambda terms must be possible and so their structures cannot be sacrificed in a compilation process. At a more detailed level, the notion of equality between lambda terms must ignore the particular names used for bound variables. For this reason, the representation must support the rapid determination of identity up to $\alpha$-convertibility. Another operation that is important to realize efficiently is $\beta$-reduction. For reasons that we discuss later, two requirements must be satisfied relative to this operation: it should be possible to perform the substitutions generated by $\beta$-contractions lazily and to percolate such substitutions, as well as to perform $\beta$-contractions, inside abstraction contexts. Finally, the higher-order unification computation is central to many metaprogramming tasks and consideration must be given to the treatment of meta variables and to operations that are important in its implementation.

A good starting point for an adequate intensional representation of lambda terms is the de Bruijn notation for lambda terms [3]. This notation eliminates names for bound variables, thus simplifying identity checking modulo renaming. Explicit substitution notations [1,2,4,8,13] that build on the de Bruijn scheme provide the basis for meeting several of the other mentioned requirements. There are differences in the specific characteristics of such notations and choices must also be made in the specific manner in which these are to be deployed in the context of metalanguage implementation. This paper exposes some of the issues that are important in this situation, gleaned from our experience in realizing the language $\lambda$Prolog. We orient the discussion around
the suspension notation of Nadathur and Wilson that, to our knowledge, is the only one to be used in two actual implementation tasks [12,14]. However, our general comments apply to other schemes as well and we also compare the different notations at the end.
### 2 The Suspension Notation
The combination of substitution walks that arise from contracting different \( \beta \)-redexes can have a significant impact on efficiency. Thus, suppose that we wish to instantiate the two quantifiers in the formula represented by \( (all \ (\lambda x \ (all \ (\lambda y \ P)))) \), where \( P \) represents an unspecified formula, with the terms \( t_1 \) and \( t_2 \). Assuming a de Bruijn representation, such an instantiation is realized through two contractions, eventually requiring \( t_2 \) and \( t_1 \) to be substituted for the first and second free variables in \( P \) and the indices of all other free variables to be decremented by two. Each of these substitutions involves a walk over the same structure—the structure of \( P \)—and it would be profitable if they could all be done together. Studies reveal that, by systematically exploiting this idea, structure traversal can be substantially reduced in practice, down to as little as an eighth of the original in some cases [9]. Now, an ability that is critical to combining walks in this manner is that of temporarily suspending substitutions generated by \( \beta \)-contractions. In a situation in which all the redexes are available in a single term, this kind of delaying of substitution can be built into the reduction procedure through ad hoc devices. However, in the case being considered, the two quantifier instantiations can only be considered incrementally and, further, intervening structure needs to be processed before the abstraction giving rise to the second redex is encountered. The structure that leads to sharing is therefore not all available within a single call to a reduction procedure, and an explicit encoding of substitution over \( P \) seems to be necessary for realizing this benefit.
Substitutions are delayed in the implementation of functional programming languages by using environments, and it may appear that a simple reflection of such environments into term structure should suffice for the present purposes. The problem, however, is that when lambda terms are used to represent objects, it may be necessary to examine structure embedded inside abstractions. Consider, for example, the task of determining if the term that results from instantiating the quantifier in a formula of the form \( (all \ R) \) has a shape that is captured by the template \( (all \ (\lambda x \ or((P \ x), Q))) \); we assume that \( R \) represents an unspecified term here and that \( P \) and \( Q \) are instantiatable variables. A positive determination involves percolating a substitution underneath the abstraction corresponding to a quantifier and then checking if the embedded structure is a disjunction. In carrying out this computation it is necessary to consider \( \alpha \)-conversion or an equivalent renumbering in the de Bruijn representation, something whose incorporation into an environment model requires care. Notice also that the actual form of \( R \) may require \( \beta \)-contractions to be performed within the abstraction capturing the quantifier scope in order to reveal its top-level logical structure. This kind of calculation further complicates the structure of environments.
The suspension notation embodies a solution to the problems described above. Formally, it encompasses a collection of expressions called terms, environments and environment terms whose syntax is given by the categories \( \langle T \rangle \), \( \langle E \rangle \) and \( \langle ET \rangle \) defined by the following grammar rules in which \( \langle C \rangle \), \( \langle I \rangle \) and \( \langle N \rangle \) represent constants, positive numbers and natural numbers, respectively:
$$
\begin{align*}
\langle T \rangle &\ ::=\ \langle C \rangle \mid \#\langle I \rangle \mid (\langle T \rangle\ \langle T \rangle) \mid (\lambda\, \langle T \rangle) \mid [[\langle T \rangle, \langle N \rangle, \langle N \rangle, \langle E \rangle]] \\
\langle E \rangle &\ ::=\ nil \mid \langle ET \rangle :: \langle E \rangle \mid \{\langle E \rangle, \langle N \rangle, \langle N \rangle, \langle E \rangle\} \\
\langle ET \rangle &\ ::=\ @\langle N \rangle \mid (\langle T \rangle, \langle N \rangle) \mid [[\langle ET \rangle, \langle N \rangle, \langle N \rangle, \langle E \rangle]]
\end{align*}
$$
The essential addition to de Bruijn terms to produce suspension terms is that of expressions of the form $[[t, ol, nl, e]]$, where $t$ is a term and $e$ is an environment. Such a term, referred to as a suspension, represents the term $t$ with its first $ol$ variables substituted for in a way determined by the environment $e$ and its remaining bound variables renumbered to reflect the fact that $t$ used to appear within $ol$ abstractions but now appears within $nl$ of them. In the simplest form, the elements of an environment are either substitution terms generated by contractions or dummy entries representing abstractions that persist in an outer context. However, renumbering of indices may have to be done during substitution and, to encode this, each such environment element is annotated with a relevant abstraction level referred to as its index. Such suspensions must satisfy certain wellformedness constraints that have a natural basis in our informal understanding of their content: in an expression of the form $[[t, i, j, e]]$, the 'length' of the environment $e$ must be equal to $i$, the indices of the entries in $e$ must be non-increasing and they must be bounded by $j$. The notation also allows for the combination of substitutions: an expression of the form $\{e_1, i, j, e_2\}$ represents the composition of the substitutions contained in $e_1$ and $e_2$, and $[[et, i, j, e_2]]$ corresponds to the environment term $et$ modified by the substitutions in the environment $e_2$. The numbers $i$ and $j$, the lengths of environments and the indices of terms in the environments being composed must satisfy certain constraints that arise naturally out of the restrictions discussed on simple environments. Space limitations prevent a discussion of these aspects here, but a detailed treatment appears in [13].
The usual \( \beta \)-contraction operation is realized in the suspension notation in two phases: the generation and the subsequent percolation of a substitution. This process is described formally by a collection of rewrite rules. These rules are broken up into three categories: the \( \beta_s \) rule that generates suspensions, the reading rules that percolate substitutions and the merging rules that permit intermediate suspensions to be combined. These rule categories are presented in Figures 1, 2 and 3, respectively. The merging rules are actually redundant from the perspective of simulating \( \beta \)-reduction. However, without them it is not possible to combine substitutions and the walks that effect them.
$(\beta_s) \quad ((\lambda t_1)\ t_2) \rightarrow [[t_1, 1, 0, (t_2, 0) :: nil]]$
Fig. 1. The $\beta_s$ rule
(r1) $[[c, ol, nl, e]] \rightarrow c$, provided $c$ is a constant.
(r2) $[[\#i, 0, nl, nil]] \rightarrow \#j$, where $j = i + nl$.
(r3) $[[\#1, ol, nl, @l :: e]] \rightarrow \#j$, where $j = nl - l$.
(r4) $[[\#1, ol, nl, (t, l) :: e]] \rightarrow [[t, 0, nl', nil]]$, where $nl' = nl - l$.
(r5) $[[\#i, ol, nl, et :: e]] \rightarrow [[\#i', ol', nl, e]]$, where $i' = i - 1$ and $ol' = ol - 1$, provided $i > 1$.
(r6) $[[(t_1\ t_2), ol, nl, e]] \rightarrow ([[t_1, ol, nl, e]]\ [[t_2, ol, nl, e]])$.
(r7) $[[(\lambda t), ol, nl, e]] \rightarrow (\lambda [[t, ol', nl', @nl :: e]])$, where $ol' = ol + 1$ and $nl' = nl + 1$.
Fig. 2. The reading rules
(m1) $[[\,[[t, ol_1, nl_1, e_1]], ol_2, nl_2, e_2]] \rightarrow [[t, ol', nl', \{e_1, nl_1, ol_2, e_2\}]]$, where $ol' = ol_1 + (ol_2 - nl_1)$ and $nl' = nl_2 + (nl_1 - ol_2)$.
(m2) $\{nil, nl, 0, nil\} \rightarrow nil$.
(m3) $\{nil, nl, ol, et :: e\} \rightarrow \{nil, nl', ol', e\}$, where $nl' = nl - 1$ and $ol' = ol - 1$, provided $nl, ol \geq 1$.
(m4) $\{nil, 0, ol, e\} \rightarrow e$.
(m5) $\{et :: e_1, nl, ol, e_2\} \rightarrow [[et, nl, ol, e_2]] :: \{e_1, nl, ol, e_2\}$.
(m6) $[[et, nl, 0, nil]] \rightarrow et$.
(m7) $[[@n, nl, ol, @l :: e]] \rightarrow @m$, where $m = l + (nl - ol)$, provided $nl = n + 1$.
(m8) $[[@n, nl, ol, (t, l) :: e]] \rightarrow (t, m)$, where $m = l + (nl - ol)$, provided $nl = n + 1$.
(m9) $[[(t, nl), nl, ol, et :: e]] \rightarrow ([[t, ol, l', et :: e]], m)$, where $l' = \mathrm{ind}(et)$ and $m = l' + (nl - ol)$.
(m10) $[[et, nl, ol, et' :: e]] \rightarrow [[et, nl', ol', e]]$, where $nl' = nl - 1$ and $ol' = ol - 1$, provided $nl \neq \mathrm{ind}(et)$.
Fig. 3. The merging rules
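To make the $\beta_s$ rule and the reading rules concrete, here is a small, self-contained Python sketch (our own illustration; the cited implementations use ML and C, not this code). Nested suspensions are read out innermost-first rather than merged, a strategy that the termination and confluence results below justify.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Var:          # de Bruijn index #i, i >= 1
    index: int

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Lam:
    body: "Term"

@dataclass(frozen=True)
class Ent:          # environment entry: term=None encodes @l, else (t, l)
    term: Optional["Term"]
    index: int

@dataclass(frozen=True)
class Susp:         # [[term, ol, nl, env]]
    term: "Term"
    ol: int
    nl: int
    env: Tuple[Ent, ...]

Term = Union[Const, Var, App, Lam, Susp]

def beta_s(fun: Lam, arg: Term) -> Susp:
    """(beta_s): ((lambda t1) t2) -> [[t1, 1, 0, (t2, 0) :: nil]]."""
    return Susp(fun.body, 1, 0, (Ent(arg, 0),))

def rd(t: Term) -> Term:
    """Read out all suspensions using r1-r7 (inner suspensions first)."""
    if isinstance(t, (Const, Var)):
        return t
    if isinstance(t, App):
        return App(rd(t.fun), rd(t.arg))
    if isinstance(t, Lam):
        return Lam(rd(t.body))
    s, ol, nl, env = t.term, t.ol, t.nl, t.env
    if isinstance(s, Susp):
        s = rd(s)                                 # in lieu of rule m1
    if isinstance(s, Const):                      # r1
        return s
    if isinstance(s, App):                        # r6
        return App(rd(Susp(s.fun, ol, nl, env)), rd(Susp(s.arg, ol, nl, env)))
    if isinstance(s, Lam):                        # r7
        return Lam(rd(Susp(s.body, ol + 1, nl + 1, (Ent(None, nl),) + env)))
    if ol == 0:                                   # r2
        return Var(s.index + nl)
    if s.index > 1:                               # r5
        return rd(Susp(Var(s.index - 1), ol - 1, nl, env[1:]))
    e0 = env[0]
    if e0.term is None:                           # r3: dummy entry @l
        return Var(nl - e0.index)
    return rd(Susp(e0.term, 0, nl - e0.index, ()))  # r4

# ((lambda (lambda #2)) c): substitute c for #2 underneath the inner binder.
print(rd(beta_s(Lam(Lam(Var(2))), Const("c"))))
```

Running the final line prints `Lam(body=Const(name='c'))`, i.e. the de Bruijn term $(\lambda\, c)$, as expected.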
**Definition 2.1** The (one-step) reduction relations on suspension expressions generated by the reading and merging rules on the one hand, and by all the rules on the other, are denoted by $\triangleright_{rm}$ and $\triangleright_{rm\beta}$, respectively. The usual $\beta$-contraction relation on de Bruijn terms is denoted by $\triangleright_{\beta}$. Finally, we denote the reflexive transitive closure of a relation $R$ by $R^*$.
The reading and merging rules are intended to expose the de Bruijn term underlying any given term and we would expect them to always succeed in doing so. The following proposition, proved in [13], shows this to be the case.
**Proposition 2.2** The relation \( \triangleright_{rm} \) is strongly terminating, i.e. all sequences of such reductions are finite.
A term of the form $[[\,[[\,[[t, ol_1, nl_1, e_1]], ol_2, nl_2, e_2]], ol_3, nl_3, e_3]]$ can be 'flattened' into a single suspension by two uses of the rule m1. However, this flattening can be achieved in two different ways—by first composing $e_1$ and $e_2$ and then composing the result with $e_3$, or by first composing $e_2$ and $e_3$ and then composing $e_1$ with the result—and we would like the outcome to be the same in either case. Some explicit substitution calculi guarantee this by including an associativity rule for composing substitutions. In our calculus, this property is a consequence of the other rules:
**Proposition 2.3** Let \( a \) and \( b \) be environments of the form
\[
\left\{ \left\{ e_1, nl_1, ol_2, e_2 \right\}, nl_2 + (nl_1 - ol_2), ol_3, e_3 \right\}
\]
and
\[
\left\{ e_1, nl_1, ol_2 + (ol_3 - nl_2), \left\{ e_2, nl_2, ol_3, e_3 \right\} \right\},
\]
respectively. Then there is an environment \( r \) such that \( a \triangleright^{*}_{rm} r \) and \( b \triangleright^{*}_{rm} r \).
We would like a property stronger than the existence of $\triangleright_{rm}$ normal forms to hold: these forms should be unique for any given expression. In light of Proposition 2.2, it is enough to show that $\triangleright_{rm}$ is locally confluent. Towards this end, we need to consider the nontrivial overlaps in the left-hand sides of our rules and to show that the critical pairs corresponding to these can be rewritten to a common form. The relevant overlaps are between m1 and each of the reading rules, m1 and itself, and m2 and m4. The only complicated case amongst these is when the overlap is between m1 and itself. However, Proposition 2.3 ensures reducibility to a common form in this case. Thus, we have
**Proposition 2.4** The relation \( \triangleright_{rm} \) is locally confluent and, hence, confluent.
We shall depict the \( \triangleright_{rm} \) normal form of an expression \( c \) in the suspension calculus by \(|c|\). The correspondence between the reduction relations on de Bruijn terms and suspension terms can then be stated as follows.
**Proposition 2.5** Let $t$ be a term in the suspension calculus. If $t \triangleright_{rm\beta}^{*} r$ then $|t| \triangleright_{\beta}^{*} |r|$. Conversely, if $|t| \triangleright_{\beta}^{*} s$, then $t \triangleright_{rm\beta}^{*} s$.
From this proposition it follows also that $\triangleright_{rm\beta}$ is confluent.
### 3 Eliminating the Merging Rules
The merging rules provide a versatile mechanism for combining substitutions. However, their power derives from a fine-grained treatment of composition
that is a little cumbersome for actual implementation. For this reason, it is worthwhile to explore the possibility of capturing their common uses in coarser, more efficient, derived rules. We observe two situations below to which this approach can be effectively applied.
The first situation corresponds to the combination of substitutions arising from the contraction of nested $\beta$-redexes. As an illustration, we might consider the reduction of the term $((\lambda\ ((\lambda\ (\lambda\ ((\#1\ \#2)\ \#3)))\ t_2))\ t_3)$, in which $t_2$ and $t_3$ are arbitrary de Bruijn terms. In a leftmost-outermost reduction regime, the first step would be to use the $\beta_s$ rule to produce the suspension
$$[[((\lambda (\lambda ((\#1\ \#2)\ \#3)))\ t_2), 1, 0, (t_3, 0) :: nil]].$$
The reading rules would be used a few times to produce the term
$$((\lambda\, [[(\lambda ((\#1\ \#2)\ \#3)), 2, 1, @0 :: (t_3, 0) :: nil]])\ [[t_2, 1, 0, (t_3, 0) :: nil]]).$$
At this stage, the $\beta_s$ rule would be used again, yielding the term
$$[[\,[[(\lambda ((\#1\ \#2)\ \#3)), 2, 1, @0 :: (t_3, 0) :: nil]], 1, 0, ([[t_2, 1, 0, (t_3, 0) :: nil]], 0) :: nil]].$$
The m1 rule can now be used to compose the two environments, yielding
$$[[(\lambda ((\#1\ \#2)\ \#3)), 2, 0, \{@0 :: (t_3, 0) :: nil,\ 1,\ 1,\ ([[t_2, 1, 0, (t_3, 0) :: nil]], 0) :: nil\}]].$$
Using the other merging rules, this term can be reduced to the form
$$[[(\lambda ((\#1\ \#2)\ \#3)), 2, 0, ([[t_2, 1, 0, (t_3, 0) :: nil]], 0) :: (t_3, 0) :: nil]]$$
whose virtue is that ‘lookups’ of its environment are simple.
The sequence of rewriting steps starting from the second use of the $\beta_s$ rule and ending in the final suspension term can be collapsed into one use of a more ‘powerful’ $\beta_s$ rule:
$$(\beta_s') \quad ((\lambda\, [[t_1, ol + 1, nl + 1, @nl :: e]])\ t_2) \rightarrow [[t_1, ol + 1, nl, (t_2, nl) :: e]]$$
This rule can be shown to be a derived rule of the suspension calculus. The advantage to using it is that the intermediate merging steps can be avoided.
The example just considered actually illuminates a tradeoff between sharing in structure walks realized through merging and sharing in reduction. After the first use of the $\beta_s$ rule, we chose above to propagate substitutions. We could have chosen to rewrite the inner $\beta_s$-redex instead, producing
$$[[\,[[(\lambda ((\#1\ \#2)\ \#3)), 1, 0, (t_2, 0) :: nil]], 1, 0, (t_3, 0) :: nil]].$$
In a graph-based implementation of reduction, following this course ensures that this rewriting step is carried out before substitution propagation breaks any sharing relative to it. Note that to fully realize the benefits of such sharing, it is necessary to perform the two substitutions embedded in the term in separate walks over the structure of $\lambda ((\#1 \#2) \#3)$. There is, thus,
$^3$ There is an unstated proviso on this rule that holds of all terms derivable from de Bruijn ones using our reduction rules: the index of terms in $e$ must be less than that of $@nl$.
a dilemma between two different choices in reduction. However, this dilemma is genuine only when there are real cases of shared redexes. Our experiments reveal very few such situations in practice [9], indicating a preference for an approach that attempts to combine structure traversals.
The second situation in which merging rules are useful arises when indices need to be renumbered in a suspension that is substituted inside an abstraction context. We illustrate this by continuing the reduction of the term $((\lambda\ ((\lambda\ (\lambda\ ((\#1\ \#2)\ \#3)))\ t_2))\ t_3)$. Using the reading rules from where we left off, this term can be transformed into
$$(\lambda\ ((\#1\ [[\,[[t_2, 1, 0, (t_3, 0) :: nil]], 0, 1, nil]])\ [[\#3, 3, 1, @0 :: ([[t_2, 1, 0, (t_3, 0) :: nil]], 0) :: (t_3, 0) :: nil]])).$$
The subterm \([[t_2, 1, 0, (t_3, 0) :: nil], 0, 1, nil]\) here corresponds to \( t_2 \) embedded within two suspensions, with the outer suspension representing a 'bumping up' of the indices for the free variables in the inner suspension, necessitated by its insertion inside an abstraction. Using the merging rules, the indicated subterm can be rewritten into \([t_2, 1, 1, (t_3, 0) :: nil]\), thereby combining the different substitutions into one environment.
This use of the merging rules can also be reflected into a derived rule:
\[
(bump) \quad [[t, ol, nl, e], 0, n', nil] \rightarrow [t, ol, nl + n', e].
\]
In an actual implementation of reduction, this rule can, in fact, be rolled into the application of the reading rule \( r_4 \).
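Reusing the illustrative types from the earlier sketch (again our own names, not the paper's), the bump rule is a one-case rewrite on nested suspensions:

```java
// (bump) [[t, ol, nl, e], 0, n', nil] -> [t, ol, nl + n', e]
// The outer suspension has old embedding level 0 and an empty environment,
// so it only renumbers free variables; the rule folds it into the inner one.
static Term bump(Term term) {
    if (term instanceof Susp(Term inner, int ol2, int nl2, List<Env> env2)
            && ol2 == 0 && env2.isEmpty()
            && inner instanceof Susp(Term t, int ol, int nl, List<Env> e)) {
        return new Susp(t, ol, nl + nl2, e); // n' is the outer nl
    }
    return term; // rule not applicable
}
```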
The disadvantage of the bump rule is that it, once again, prefers sharing in structure traversal to sharing in reduction. The potential loss in reduction sharing here is also something that differentiates between the de Bruijn representation and a name-based representation of bound variables in implementing \( \beta \)-reduction: using the bump rule, the extra renumbering work in the de Bruijn scheme is subsumed into an already necessary traversal of the structure of the embedded term, but with a possible loss in reduction sharing. As before, our observation has been that in practice there are very few real opportunities for sharing in reduction, indicating a preference for the bump rule whenever it is applicable and also little downside in reduction to using the de Bruijn scheme.
**Definition 3.1** We denote the reduction relation defined by the reading and the bump rules by \( \triangleright_{r} \). The relation obtained when the \( \beta_s \) and the \( \beta_s' \) rules are also included is denoted by \( \triangleright_{r\beta} \).
The following proposition shows the coherence of our derived rules.
**Proposition 3.2** The \( \triangleright_{r} \) relation is confluent and strongly terminating. Further, for any term \( t \), if \( t \triangleright_{r\beta} r \) then \( t \triangleright_{rm\beta} r \). Conversely, if \( t \triangleright_{rm\beta} r \), then there are terms \( s \) and \( s' \) such that \( t \triangleright_{r\beta} s \), \( r \triangleright_{r\beta} s' \) and \( s \triangleright_{r\beta} s' \). Finally, \( \triangleright_{r\beta} \) is confluent.

The reduction of a de Bruijn term to (head) normal form may be carried out using solely the rules defining the \( \triangleright_{r\beta} \) relation. The main disadvantage to not using the merging rules is that some opportunities for sharing in structure walks may be missed. It turns out that, with a leftmost-outermost implementation of reduction, there are very few such cases in practice.\footnote{There should, in fact, be no such cases if reduction is the sole operation on terms. All observed cases originate from unification substitutions for the meta variables discussed later.}
4 Instantiatable Variables, Confluence and Unification
The current syntax of suspension expressions does not allow for instantiatable or meta variables. Such variables may be introduced in one of two forms.
In the first form, these variables would be treated just as in the normal lambda calculus. In particular, instantiations for them must respect the notion of scope. Thus, if \( X \) is an instantiatable variable occurring within abstractions binding \( x_1, \ldots, x_n \), then it cannot be replaced by a structure that depends on any of the abstractions. This logical view is actually the one that is needed in pattern recognition applications. The term \( (\lambda x\,(or\ (P\ x)\ Q)) \), for instance, functions as a recognizer for formulas with a universal quantification over a disjunction whose right part is independent of the quantifier precisely because \( Q \) cannot be instantiated to a form that depends on \( x \).
Building this view of instantiatable variables into the suspension notation is easy. At the level of syntax, we simply change the rule for terms to
\[
\langle T \rangle ::= \langle V \rangle \mid \langle C \rangle \mid \#\langle I \rangle \mid (\langle T \rangle\ \langle T \rangle) \mid (\lambda\ \langle T \rangle) \mid [\langle T \rangle, \langle N \rangle, \langle N \rangle, \langle E \rangle]
\]
where \( \langle V \rangle \) represents the category of such variables. To account for the fact that these variables cannot be affected by substitutions generated by \( \beta \)-contractions, we add the following to our reading rules:
\[
(r8) \quad [x, ol, nl, e] \rightarrow x, \text{ provided } x \text{ is an instantiatable variable.}
\]
This rule is similar to the one for reading constants. Thus, it should not be difficult to see that confluence and termination properties extend naturally to the syntax that includes the new variables. Note also that the smaller collection of rewrite rules discussed in Section 3 suffices for reducing terms containing such variables to normal form.
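In the sketch above, accommodating rule (r8) amounts to one extra case; the hedged fragment below extends our illustrative Term type with a metavariable constructor and the corresponding reading step:

```java
// Extension of the earlier sketch: instantiatable (meta) variables become a
// new leaf of Term (its permits clause is assumed to be extended accordingly).
record MetaVar(String name) implements Term {}

// (r8) [x, ol, nl, e] -> x, provided x is an instantiatable variable
static Term r8(Term term) {
    if (term instanceof Susp(MetaVar x, var ol, var nl, var e)) {
        return x; // the environment cannot affect an uninstantiated variable
    }
    return term;
}
```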
The other possibility is to view instantiatable variables as placeholders against which any well-formed term can be grafted. Such 'graftable' variables appear initially to fly in the face of pattern matching applications. However, the necessary constraints for such applications can be built in through suitable preprocessing. Thus, consider the template \( (\lambda x\,(or\ (P\ x)\ Q)) \) that in de Bruijn notation would be written as \( (\lambda\,(or\ (P\ \#1)\ Q)) \). This term may be transformed into \( (\lambda\,(or\ ([P, 0, 1, nil]\ \#1)\ [Q, 0, 1, nil])) \). By so embedding \( P \) and \( Q \) inside suspensions, we insulate them from a dependence on the external abstraction.
This kind of a view can also be incorporated into the suspension notation. The syntax of terms needs to be modified exactly as before. In contrast
to the earlier situation, however, no new rewrite rules need be added. The rationale is that the effect of reduction substitutions on instantiatable variables is unknown until their instantiations are themselves known. At a point where these variables have been instantiated, the rewrite rules pertaining to the other forms for terms suffice for computing the effects of reductions.
Let us denote by \( \triangleright_{\text{rm}}, \triangleright_{\text{r}}, \triangleright_{\text{rm} \beta}, \) and \( \triangleright_{\text{r} \beta} \) the previously seen reduction relations on the extended syntax. While \( \triangleright_{\text{rm}} \) and \( \triangleright_{\text{r}} \) must still be strongly terminating, confluence properties are more problematic. The relation \( \triangleright_{\text{r} \beta} \) is, in fact, not confluent. Thus, consider the term \( ((\lambda\,((\lambda\,X)\ t_1))\ t_2) \) in which \( X \) is an instantiatable variable and \( t_1 \) and \( t_2 \) are terms in normal form. Three distinct terms may be posited as 'normal' forms for this:
\[
\begin{align*}
& [[X, 1, 0, (t_1, 0) :: nil], 1, 0, (t_2, 0) :: nil], \\
& [[X, 2, 1, @0 :: (t_2, 0) :: nil], 1, 0, ([t_1, 1, 0, (t_2, 0) :: nil], 0) :: nil], \text{ and} \\
& [X, 2, 0, ([t_1, 1, 0, (t_2, 0) :: nil], 0) :: (t_2, 0) :: nil].
\end{align*}
\]
Adding the merging rules changes the picture: the first two terms then reduce to the last. The following proposition can, in fact, be shown:
**Proposition 4.1** Assuming a collection of terms that includes graftable variables, the relations \( \triangleright_{\text{rm}} \) and \( \triangleright_{\text{rm} \beta} \) are both confluent.
The key observation in the proof of this proposition is that associativity for composing substitutions as described in Proposition 2.3 continues to hold.
Interest in the graftable interpretation of meta variables arises from the new approach to higher-order unification described in [5] that exploits such variables. The usual procedure [7] for the (typed) lambda calculus is based on reducing any given unification problem into a set of equations of the form
\[
\lambda x_1 \ldots \lambda x_n (X \ t_1 \ldots t_m) = \lambda x_1 \ldots \lambda x_n (@ \ s_1 \ldots s_l)
\]
where \(X\) is an instantiatable variable and \(@\) is a constant or one of the variables \(x_1, \ldots, x_n\). Towards solving such an equation, substitutions of the form
\[
\lambda w_1 \ldots \lambda w_m (@' (H_1 \ w_1 \ldots w_m) \ldots (H_k \ w_1 \ldots w_m)),
\]
where \(@'\) is either \(@\) or one of \(w_1, \ldots, w_m\) and \(H_1, \ldots, H_k\) are new instantiatable variables, are posited for \(X\). Such substitutions try to get the heads of the two terms that are to be unified to match while delaying decisions concerning the arguments. The arguments of the substitution term are, in fact, chosen so as to not preclude any dependencies on the arguments of the original term. For example, if \(@' = @\) and, correspondingly, \(l = k\), then this substitution will reduce the unification problem to one of simultaneously solving the equations
\[
\lambda x_1 \ldots \lambda x_n (H_i \ t_1 \ldots t_m) = \lambda x_1 \ldots \lambda x_n s_i
\]
for \(1 \leq i \leq l\). Note that \(H_i\) is free to ‘use’ the arguments \(t_1, \ldots, t_m\) in any fashion deemed necessary.
The above transformation involves the construction of a complicated term, the contraction of several \(\beta\)-redexes and a subsequent calculation of their substitution effects. Using explicit substitutions and 'graftable' variables, the effort involved in this percolation of dependency information can be considerably reduced. By substituting the term $\lambda w_1 \ldots \lambda w_m\,Y$, where $Y$ is a graftable variable, for $X$, the original equation can be reduced at the outset to
$$\lambda x_1 \ldots \lambda x_n [Y, m, 0, (t_m, 0) :: \ldots :: (t_1, 0) :: nil] = \lambda x_1 \ldots \lambda x_n (@ s_1 \ldots s_l).$$
Notice that the considered substitution for $X$ is meaningful only if $Y$ can later be replaced with something that might contain the variables $w_1, \ldots, w_m$, i.e., $Y$ must be graftable. Now, after this reduction, a term of the form (@ $H_1 \ldots H_l$) can be posited for $Y$, allowing the equation to be transformed into ones of the form
$$\lambda x_1 \ldots \lambda x_n [H_i, m, 0, (t_m, 0) :: \ldots :: (t_1, 0) :: nil] = \lambda x_1 \ldots \lambda x_n s_i$$
for $1 \leq i \leq l$. Significantly, the formation of a complicated term involving applications and the subsequent reductions simply for the purpose of transmitting dependency information can be avoided.
The above discussion actually indicates a tradeoff between different approaches to implementing higher-order unification. The approach based on graftable variables has the mentioned benefits but it also requires the use of a more complete, and complicated, set of environment merging rules. An interesting observation is that the new approach to unification depends mainly on the generation of (head) normal forms that do not contain nested suspensions at the top level. A possibility is that a special control regimen with a reduced set of rewrite rules will ensure that only such forms are produced.
5 Comparison with Other Explicit Substitution Calculi
Three properties are coveted for explicit substitution notations: confluence in a situation where graftable meta variables are included, the ability to compose substitutions, and the preservation of strong normalizability for terms in the underlying lambda calculus. Of these, combinability of substitutions seems to be the most important for metalanguage implementations. Unfortunately, most explicit substitution calculi seem not to include this facility. Particular calculi sacrifice other properties as well. The $\lambda \nu$-calculus preserves strong normalizability [2] but it does not admit meta variables. The $\lambda s_e$-calculus permits meta variables and is confluent even with this addition [8] but does not preserve strong normalizability [6]. The $\lambda_{ws}$-calculus alone both admits meta variables and preserves strong normalizability [4].
The two calculi that do permit the composition of substitutions are the $\lambda \sigma$-calculus [1] and the suspension notation. There are several similarities between the two calculi that we hope to demonstrate via translation functions in a longer paper. We restrict ourselves here to mentioning two differences that might be significant to low-level implementation tasks. First, it appears easier in our calculus to separate out rewrite rules based on function and to thereby identify subsets, like that in Section 3, that are easier to use in practice.

The second difference concerns the way in which the adjustments to the indices of terms in the environment are encoded. In our notation, these are not maintained explicitly but are obtained from the difference between the embedding level of the term that has to be substituted into and an embedding level recorded with the term in the environment. Thus, consider a suspension term of the form $[t_1, 1, nl, (t_2, nl') :: nil]$. This represents a term that is to be obtained by substituting $t_2$ for the first free variable in $t_1$ (and modifying the indices for the other free variables). However, the indices for the free variables in $t_2$ must be 'bumped up' by $(nl - nl')$ before this substitution is made. In the $\lambda\sigma$-calculus, the needed increment to the indices of free variables is maintained explicitly with the term in the environment. Thus, the suspension term shown above would be represented, as it were, as $[t_1, 1, nl, (t_2, (nl - nl')) :: nil]$; actually, the old and new embedding levels are needed in this term only for determining the adjustment to the free variables in $t_1$ with indices greater than the old embedding level, and devices for representing environments encapsulating such an adjustment simplify the actual notation used. The drawback with this approach is that in moving substitutions under abstractions every term in the environment is affected. Thus, from a term like $[(\lambda t_1), 1, nl, (t_2, (nl - nl')) :: nil]$, we must produce one of the form $(\lambda [t_1, 2, nl + 1, @1 :: (t_2, nl - nl' + 1) :: nil])$. In contrast, using our notation, it is only necessary to add a 'dummy' element to the environment and to make a local change to the embedding levels of the overall term.
Both the $\lambda\sigma$-calculus and the suspension notation admit grafted meta variables. The former calculus is known not to preserve strong normalizability [10]. For the suspension notation, this is an open question. We conjecture that it actually does preserve this property.
6 Conclusion
We have presented the suspension notation in this paper with an eye to its use in metalanguage implementations. Certain questions raised in this discussion need a fuller treatment. In Section 4, we have considered the possibility of utilizing our notation augmented with grafted meta variables in realizing higher-order unification. In reality, this procedure needs to be spelled out in detail, and a careful, implementation-level comparison with an approach that does not use such variables needs to be done. The benefits of the different treatments of meta variables are likely to depend on the way in which substitutions are generated and, for this reason, the experimentation should also consider special cases of higher-order unification such as that described in [11]. In another direction, it is of interest to manifest the connections between the suspension notation and the $\lambda\sigma$-calculus more completely, possibly via translations between them. Finally, the question of whether or not the suspension notation preserves strong normalizability needs to be settled. We hope to consider some of these aspects in a sequel to this paper.
A Guided Genetic Algorithm for Automated Crash Reproduction
DOI: 10.1109/ICSE.2017.27
Published in: Proceedings of the 39th International Conference on Software Engineering (ICSE), 2017
Document Version: Accepted author manuscript
Mozhan Soltani
Delft University of Technology
The Netherlands
m.soltani@tudelft.nl
Annibale Panichella
SnT Centre - University of Luxembourg
Luxembourg
annibale.panichella@uni.lu
Arie van Deursen
Delft University of Technology
The Netherlands
Arie.vanDeursen@tudelft.nl
Abstract—To reduce the effort developers have to make for crash debugging, researchers have proposed several solutions for automatic failure reproduction. Recent advances proposed the use of symbolic execution, mutation analysis, and directed model checking as underlying techniques for failure analysis of crash stack traces. However, existing approaches still cannot reproduce many real-world crashes due to such limitations as environment dependencies, path explosion, and time complexity. To address these challenges, we present EvoCrash, a post-failure approach which uses a novel Guided Genetic Algorithm (GGA) to cope with the large search space characterizing real-world software programs. Our empirical study on three open-source systems shows that EvoCrash can replicate 41 (82%) of real-world crashes, 34 (89%) of which are useful reproductions for debugging purposes, outperforming the state-of-the-art in crash replication.
Keywords—Search-Based Software Testing; Genetic Algorithms; Automated Crash Reproduction;
I. INTRODUCTION
Manual crash replication is a labor-intensive task. Developers faced with this task need to reproduce failures reported in issue tracking systems, which all too often contain insufficient data to determine the root cause of a failure.
Hence, to reduce developer effort, many different automated crash replication techniques have been proposed in the literature. Such techniques typically aim at generating tests triggering the target failure. For example, record-replay approaches [1]–[5] monitor software behavior via software/hardware instrumentation to collect the observed objects and method calls when failures occur. Unfortunately, such techniques suffer from well-known practical limitations, such as performance overhead [6], and privacy issues [7].
As opposed to these costly techniques, post-failure approaches [6]–[12] try to replicate crashes by exploiting data that is available after the failure, typically stored in log files or external bug tracking systems. Most of these techniques require specific input data in addition to crash stack traces [6], such as core dumps [8]–[10], [13] or models of the software like input grammars [14], [15] or class invariants [16].
Since such additional information is usually not available to developers, recent advances in the field have focused on crash stack traces as the only source of information for debugging [6], [7], [12]. For example, Chen and Kim developed STAR [6], an approach based on backward symbolic execution. STAR outperforms earlier crash replication techniques, such as Randoop [17] and BugRedux [18]. Xuan et al. [12] presented MuCrash, a tool that updates existing test cases using specific mutation operators, thus creating a new pool of tests to run against the software under test. Nayrolles et al. [7] proposed JCHARMING, based on directed model checking combined with program slicing [7], [19].
Unfortunately, the state-of-the-art tools suffer from several limitations. For example, STAR cannot handle cases with external environment dependencies [6] (e.g., file or network inputs), non-trivial string constraints, or complex logic potentially leading to a path explosion. MuCrash is limited by the ability of existing tests in covering method call sequences of interest, and it may lead to a large number of unnecessary mutated test cases [12]. JCHARMING [7], [19] applies model checking which can be computationally expensive. Moreover, similar to STAR, JCHARMING does not handle crash cases with environmental dependencies.
In our previous preliminary study [20], we suggested re-using existing unit test generation tools, such as EvoSuite [21], for crash replication. To that end, we developed a fitness function to assess the capability of candidate test cases in replicating the target failure. Although this simple solution could help to replicate one crash not handled by STAR and MuCrash, our preliminary study showed that it still leaves other crashes non-reproducible. These negative results are due to the large search space for real-world programs, where the probability of generating test data satisfying the desired failure conditions is low. In fact, the classic genetic operators from existing test frameworks are aimed at maximizing specific coverage criteria [21] instead of exploiting the single execution paths and object states that characterize software failures.
To address this challenge, this paper presents an evolutionary search-based approach, named EvoCrash, for crash reproduction. EvoCrash is built on top of EvoSuite [21], the well-known automatic test suite generation tool for Java. For EvoCrash we developed a novel guided genetic algorithm (GGA). It lets the stack trace guide the search, thus reducing the search space. In particular, GGA uses a novel generative routine to build an initial population of tests exercising at least one of the methods reported in the crash stack frames.
Furthermore, GGA uses new crossover and mutation operators to avoid the generation of futile tests that lack calls to failing methods. To further guide the search process, we developed a novel fitness function that improves the calculation of stack trace distance previously defined in [20], to assess candidate test cases.
The contributions of our paper are:
- A novel guided genetic algorithm (GGA) for crash reproduction that generates and evolves only tests that exercise at least one of the methods involved in the failure;
- EvoCrash, a Java tool implementing GGA that generates JUnit tests that developers can directly use for debugging purposes;
- An empirical study on 50 real-world software crashes involving different versions of three open source projects showing that EvoCrash can replicate 41 cases (82%), 34 (89%) of which are useful for debugging;
- A comparison of EvoCrash with three state-of-the-art approaches based on crash stack traces (STAR [6], Mu-Crash [12] and JCHARMING [7]).
Furthermore, we provide a publicly available replication package1 that includes: (i) an executable jar of EvoCrash, (ii) all bug reports used in our study, and (iii) the test cases generated by our tool.
II. RELATED WORK
Since our approach aims at crash reproduction using test generation, we start by summarizing related work in the areas of automated crash replication and coverage-based unit test generation.
A. Automated Approaches to Crash Replication
Previous approaches in the field of crash replication can be grouped into three main categories: (i) record-replay approaches, (ii) post-failure approaches using various data sources, and (iii) stack-trace based post-failure techniques. The first category includes the earliest works in this field, such as ReCrash [1], ADDA [2], Bugnet [3], and jRapture [5]. The aforementioned techniques rely on program run-time data for automated crash replication. Thus, they record the program execution data in order to use it for identifying the program states and execution path that led to the program failure. However, monitoring program execution may lead to (i) substantial performance overhead due to software/hardware instrumentation [6]–[8], and (ii) severe privacy issues since the collected execution data may contain sensitive information [6].
On the other hand, post-failure approaches [8]–[11], [15] analyze software data (e.g., core dumps) only after crashes occur, thus not requiring any form of instrumentation. Rossler et al. [8] developed an evolutionary search-based approach named RECORE that leverages from core dumps (taken at the time of a failure) to generate input data. RECORE combines the search-based input generation with a coverage-based technique to generate method sequences. Weeratunge et al. [13] used core dumps and directed search for replicating crashes related to concurrent programs in multi-core platforms. Leitner et al. [9], [10] used a failure-state extraction technique to create tests from core dumps (to derive input data) and stack traces (to derive method calls). Kifetew et al. [14], [15] used genetic programming requiring as input (i) a grammar describing the program input, and (ii) a (partial) call sequence. Boyapati et al. [16] developed another technique requiring manually written specifications containing method preconditions, post-conditions, and class invariants. However, all these post-failure approaches need various types of information that are often not available to developers, thus decreasing their feasibility.
To increase the practical usefulness of automated approaches, researchers have focused on crash stack traces as the only source of information for debugging. For instance, ESD [11] uses forward symbolic execution that leverages commonly reported elements in bug reports. BugRedux [18] also uses forward symbolic execution but it can analyze different types of execution data, such as crash stack traces. As highlighted by Chen and Kim [6], both ESD and BugRedux rely on forward symbolic execution, thus inheriting its problems due to path explosion and object creation [22]. To address these two issues, Chen and Kim [6] introduced STAR, a tool that applies backward symbolic execution to compute crash preconditions and generates a test using a method sequence composition approach.
Different from STAR, JCHARMING [7] uses a combination of crash traces and model checking to automatically reproduce bugs that caused field failure. To address the state explosion problem [23] in model checking, JCHARMING applies program slicing to direct the model checking process by reduction of the search space. Instead, MuCrash [12] uses mutation analysis as the underlying technique for crash replication. First, MuCrash selects the test cases that include the classes in the crash stack trace. Next, it applies predefined mutation operators on the tests to produce mutant tests that can reproduce the target crash.
In our earlier study [20], we showed that even coverage-based tools like EvoSuite can replicate some target crashes if relying on a proper fitness function specialized for crash replication. However, our preliminary results also indicated that this simple solution could not replicate some cases for two main reasons: (i) limitations of the developed fitness function, and (ii) the large search space in complex real-world software.
The EvoCrash approach presented in this paper resumes this line of research because it uses evolutionary search to synthesize a crash-reproducing test case. However, it is novel in that it utilizes a smarter fitness function and applies a Guided Genetic Algorithm (GGA) instead of coverage-oriented genetic algorithms. Section III presents full details regarding the novel fitness function and GGA in EvoCrash.
B. Unit Test Generation Tools
A number of techniques and tools have been proposed in the literature to automatically generate tests maximizing specific code coverage criteria [17], [21], [24]–[27]. The main
1http://www.evocrash.org/
difference among them is represented by the core search algorithm used for generating tests. For example, EvoSuite [21] and JTExpert [27] use genetic algorithms to create test suites optimizing branch coverage; Randoop [17] and T3 [24] apply random testing, while DART [25] and Pex [26] are based on dynamic symbolic execution.
As reported in the related literature, such tools can be used to discover bugs affecting software code. Indeed, they can generate tests triggering crashes when trying to exercise the uncovered parts of the code. For example, Fraser and Arcuri [28] successfully used EvoSuite to discover undeclared exceptions and bugs in open-source projects. Recently, Moran et al. [29] used coverage-based tools to discover Android application crashes. However, as also pointed out by Chen and Kim [6], coverage-based tools are not specifically defined for crash replication. In fact, these tools are aimed at covering all methods (and their code elements) in the class under test. Thus, already covered methods are not taken into account for the search even if none of the already generated tests synthesizes the target crash. Therefore, the probability of generating tests satisfying desired crash-triggering object states is particularly low for coverage-based tools [6].
On the other hand, for crash replication, not all methods should be exploited for generating a crash: we are interested in covering only the few lines in those methods involved in the failure, while other methods (or classes) might be useful only for instantiating the necessary objects (e.g., input parameters). Moreover, among all possible method sequences, we are interested only in those that can potentially lead to the target crash stack trace. Therefore, in this paper we developed a tool, named EvoCrash, which is specialized for stack trace based crash replication.
III. THE EVOCRASH APPROACH
According to Harman et al. [30], [31], there are two key ingredients for a successful application of search-based techniques. The first is the formulation of a proper fitness function to guide the search toward reaching the target, which in our case is a way to trigger a crash. The second ingredient consists of applying a proper search algorithm to promote tests closer to mimicking the crash, while penalizing tests with poor fitness values. The next sub-sections detail the fitness function as well as the genetic algorithms we designed in EvoCrash.
A. Crash Stack Trace Processing
An optimal test case for crash reproduction has to crash at the same location as the original crash and produce a stack trace as close to the original one as possible. Therefore, in EvoCrash we first parse the log file given as input in order to extract the crash stack frames of interest. A standard Java stack trace contains (i) the type of the exception thrown, and (ii) the list of stack frames generated at the time of the crash. Each stack frame corresponds to one method involved in the failure; hence, it contains all the information required for its identification: (i) the method name, (ii) the class name, and (iii) the line number where the exception was generated. The last frame is where the exception has been thrown, whereas the root cause could be in any of the frames, or even outside the stack trace.
From a practical point of view, any class or method in the stack trace can be selected as code unit to use as input for existing test case generation tools, such as EvoSuite. However, since our goal is to synthesize a test case generating a stack trace as close to the original one as possible, we always target the class where the exception is thrown (last stack frame in the crash stack trace) as the main class under test (CUT).
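For illustration, a minimal sketch of this parsing step is shown below; it is not EvoCrash's actual parser, and the `Frame` record and regular expression are our own simplifications that ignore corner cases such as native methods or missing line numbers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StackTraceParser {
    // One stack frame: class name, method name and source line number.
    record Frame(String className, String methodName, int line) {}

    // Matches lines such as: "at org.foo.Bar.baz(Bar.java:42)"
    private static final Pattern FRAME =
            Pattern.compile("at\\s+([\\w.$]+)\\.([\\w$<>]+)\\(([^:)]+):(\\d+)\\)");

    static List<Frame> parse(List<String> logLines) {
        List<Frame> frames = new ArrayList<>();
        for (String line : logLines) {
            Matcher m = FRAME.matcher(line);
            if (m.find()) {
                frames.add(new Frame(m.group(1), m.group(2), Integer.parseInt(m.group(4))));
            }
        }
        return frames;
    }
}
```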
B. Fitness Function
As described in our previous study [20], our fitness function is formulated to consider three main conditions that must hold so that a test case would be evaluated as optimal and have zero distance: (i) the line (statement) where the exception is thrown has to be covered, (ii) the target exception has to be thrown, and (iii) the generated stack trace must be as similar to the original one as possible. More formally, we use the following fitness formulation:
**Definition 1.** The fitness function value of a given test t is:
\[
f(t) = 3 \times d_s(t) + 2 \times d_{except}(t) + d_{trace}(t)
\]
where \(d_s(t)\) denotes how far \(t\) is from executing the target statement, i.e., the location of the crash; \(d_{except}(t)\) is a binary value indicating whether the target exception is thrown or not; and \(d_{trace}(t)\) measures the distance between the generated stack trace (if any) and the expected trace.
For the line distance \(d_s(t)\), we use the two well-known heuristics approach level and branch distance to guide the search for branch and statement coverage [20]. The approach level measures the distance (i.e., minimum number of control dependencies) between the path of the code executed by \(t\) and the target statement. The branch distance uses a set of well-established rules [32] to score how close \(t\) is to satisfying the branch condition for the branch on which the target statement is directly control dependent.
If the target exception is thrown (\(d_{except}(t) = 0\)), we proceed by calculating the trace distance \(d_{trace}(t)\); otherwise, the trace distance remains equal to the maximum value it can take, 1.0. To calculate the trace distance \(d_{trace}(t)\), in our preliminary study [20] we used the distance function defined as follows. Let \(S^* = \{e_1^*, \ldots, e_n^*\}\) be the target stack trace to replicate, where \(e_i^* = (C_i^*, m_i^*, l_i^*)\) is the \(i\)-th element in the trace, composed of class name \(C_i^*\), method name \(m_i^*\), and line number \(l_i^*\). Let \(S = \{e_1, \ldots, e_k\}\) be the stack trace (if any) generated when executing the test \(t\). The distance between the expected trace \(S^*\) and the actual trace \(S\) is defined as:
\[
D(S^*, S) = \sum_{i=1}^{\min(k,n)} \varphi(\text{diff}(e_i^*, e_i)) + |n - k|
\]
where \(\text{diff}(e_i^*, e_i)\) measures the distance between the two trace elements \(e_i^*\) and \(e_i\) in the traces \(S^*\) and \(S\) respectively; finally, \(\varphi(x) \in [0,1]\) is the widely used normalizing function \(\varphi(x) = x/(x + 1)\) [32]. However, such a distance definition
has one critical limitation: it strictly requires that the expected trace $S^*$ and the actual trace $S$ share the same prefix, i.e., the first $\min\{k, n\}$ trace elements. For example, assume that the triggered stack trace $S$ and the target trace $S^*$ have one stack trace element $e_{\text{shared}}$ in common (i.e., one element with the same class name, method name, and source code line number) but that it is located at two different positions, e.g., $e_{\text{shared}} = e_2$ in $S$ while $e_{\text{shared}} = e_3^*$ in $S^*$. In this scenario, Equation 2 will compare the element $e_3^*$ in $S^*$ with the element of $S$ at the same position (i.e., with $e_3$) instead of considering the closest element $e_2$ for the comparison.
To overcome this critical limitation, in this paper we use the following new definition of stack trace distance:
**Definition 2.** Let $S^*$ be the expected trace, and let $S$ be the actual stack trace triggered by a given test $t$. The stack trace distance between $S^*$ and $S$ is defined as:
$$D(S^*, S) = \sum_{i=1}^{n} \min \{ \text{diff}(e_i^*, e_j) : e_j \in S \}$$
(3)
where $\text{diff}(e_i^*, e_j)$ measures the distance between the two trace elements $e_i^*$ in $S^*$ and its closest element $e_j$ in $S$.
We say that two trace elements are equal if and only if they share the same trace components. Therefore, we define $\text{diff}(e_i^*, e_j)$ as follows:
$$\text{diff}(e_i^*, e_j) = \begin{cases} 3 & \text{if } C_i^* \neq C_j \\ 2 & \text{if } C_i^* = C_j \text{ and } m_i^* \neq m_j \\ \varphi(|l_i^* - l_j|) & \text{otherwise} \end{cases}$$
(4)
The score $\text{diff}(e_i^*, e_j)$ is equal to zero if and only if the two trace elements $e_i^*$ and $e_j$ share the same class name, method name and line number. Similarly, $D(S^*, S)$ in Equation 3 is zero if and only if the two traces $S^*$ and $S$ are equal, i.e., they share the same trace elements. Starting from the function in Equation 3, we define the trace distance $d_{\text{trace}}(t)$ as the normalized $D(S^*, S)$ function:
$$d_{\text{trace}}(t) = \varphi(D(S^*, S)) = D(S^*, S)/(D(S^*, S) + 1)$$
(5)
Consequently, \(d_{trace}(t)\) is zero if and only if \(S^*\) shares the same trace elements with \(S\). In addition, our fitness function \( f(t) \) assumes values within the interval \([0, 6]\), reaching a zero value if and only if the evaluated test \( t \) replicates the target crash.
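Putting Definitions 1 and 2 together, the following hedged sketch (building on the `Frame` record above; not EvoCrash's actual code, and with `ds` assumed to be computed elsewhere via approach level and branch distance) shows how the distance components combine:

```java
import java.util.List;

public class CrashFitness {
    // phi(x) = x / (x + 1), the normalizing function from [32].
    static double phi(double x) { return x / (x + 1.0); }

    // Equation 4: element-wise distance between an expected and an actual frame.
    static double diff(StackTraceParser.Frame want, StackTraceParser.Frame got) {
        if (!want.className().equals(got.className())) return 3.0;
        if (!want.methodName().equals(got.methodName())) return 2.0;
        return phi(Math.abs(want.line() - got.line()));
    }

    // Equation 3: each expected element is matched against its closest actual one.
    static double traceDistance(List<StackTraceParser.Frame> expected,
                                List<StackTraceParser.Frame> actual) {
        double total = 0.0;
        for (StackTraceParser.Frame e : expected) {
            double best = 3.0; // 3 is the largest value diff can return
            for (StackTraceParser.Frame a : actual) {
                best = Math.min(best, diff(e, a));
            }
            total += best;
        }
        return total;
    }

    // Definition 1: f(t) = 3*ds + 2*dexcept + dtrace.
    static double fitness(double ds, boolean exceptionThrown,
                          List<StackTraceParser.Frame> expected,
                          List<StackTraceParser.Frame> actual) {
        double dexcept = exceptionThrown ? 0.0 : 1.0;
        double dtrace = exceptionThrown ? phi(traceDistance(expected, actual)) : 1.0;
        return 3.0 * ds + 2.0 * dexcept + dtrace;
    }
}
```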
**C. Guided Genetic Algorithm**
In EvoCrash, we use a novel genetic algorithm, named GGA (Guided Genetic Algorithm), suitably defined for the crash replication problem. While traditional search algorithms in coverage-based unit test tools target all methods in the CUT, GGA gives higher priority to those methods involved in the target failure. To accomplish this, GGA uses three novel genetic operators that create and evolve test cases that always exercise at least one method contained in the crash stack trace, increasing the overall probability of triggering the target crash. As shown in Algorithm 1, GGA contains all main steps of a standard genetic algorithm: (i) it starts with creation of an initial population of random tests (line 5); (ii) it evolves such tests over subsequent generations using crossover and mutation (lines 12-20); and (iii) at each generation it selects the fittest tests according to the fitness function (lines 22-24). The main difference is represented by the fact that it uses (i) a novel routine for generating the initial population (line 5); (ii) a new crossover operator (line 15); (iii) a new mutation operator (lines 19-20). Finally, the fittest test obtained at the end of the search is optimized by post-processing (in line 26).
**Initial Population.** The routine used to generate the initial population plays a paramount role [33] since it performs the sampling of the search space. In traditional coverage-based tools (e.g., EvoSuite [21] or JTExpert [27]), such a routine is designed to generate a well-distributed population (set of tests) calling as many methods in the target class as possible [21], which is not the main goal for crash replication.
For this reason, in this paper we use the novel routine highlighted in Algorithm 2 for generating the initial sample for random tests. In particular, our routine gives higher importance to methods contained in crash stack frames. Subsequently, if a target call, selected by the developer, is public or protected, Algorithm 2 guarantees that this call is inserted in each test at least once. Otherwise, if the target call is private, the algorithm guarantees that each test contains at least one call to a public caller method which invokes the target private call. Algorithm 2 generates random tests using the loop in lines 3-18, and requires as input (i) the set of public target method(s)
$M_{crash}$, (ii) the population size $N$, and (iii) the class under test $C$. In each iteration, we create an empty test $t$ (line 4) to fill with a random number of statements (lines 5-18). Then, statements are randomly inserted in $t$ using the iterative routine in lines 8-18: at each iteration we insert a call to one public method taken either from $M_{crash}$ or from the member classes of $C$. In the first iteration, crash methods in $M_{crash}$ (methods of interest) are inserted in $t$ with a low probability $p = 1/\text{size}$ (line 7), where $\text{size}$ is the total number of statements to add to $t$. In the subsequent iterations, this probability is automatically increased whenever no method from $M_{crash}$ has been inserted in $t$ yet (lines 15-17). Therefore, Algorithm 2 ensures that at least one method of the crash is inserted in each initial test$^2$.
The process of inserting a specific method call in a test $t$ requires several additional operations [21]. For example, before inserting a method call $m$ in $t$ it is necessary to instantiate an object of the class containing $m$ (e.g., calling one of the public constructors). Creating a proper method call also requires the generation of proper input parameters, such as other objects or primitive variables. For all these additional operations, Algorithm 2 uses the routine INSERT-METHOD-CALL (line 18). For each method call in $t$, it sets the input parameters values by re-using objects and variables already defined in $t$, setting some input values to null (only for objects used as input parameters), or randomly generating new objects and primitive values.
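As a rough illustration of this insertion loop (tests are simplified to lists of call strings, and all names here are our own, not EvoCrash's API), the rising insertion probability can be sketched as follows:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class InitialPopulation {
    // Builds one random test of the given size, guaranteeing at least one
    // call to a failing method: the insertion probability 1/(size - i)
    // reaches 1 at the last slot if no crash call was inserted earlier.
    static List<String> randomTest(List<String> crashMethods,
                                   List<String> otherMethods,
                                   int size, Random rnd) {
        List<String> test = new ArrayList<>();
        boolean hasCrashCall = false;
        for (int i = 0; i < size; i++) {
            double p = 1.0 / (size - i); // 1/size at the first slot, 1.0 at the last
            if (!hasCrashCall && rnd.nextDouble() <= p) {
                test.add("call " + crashMethods.get(rnd.nextInt(crashMethods.size())));
                hasCrashCall = true;
            } else {
                test.add("call " + otherMethods.get(rnd.nextInt(otherMethods.size())));
            }
        }
        return test;
    }
}
```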
**Guided Crossover.** Even if all tests in the initial population exercise one or more methods contained in the crash stack trace, during the evolution process (i.e., across different generations) tests can lose the inserted target calls. One possible cause for this scenario is the traditional single-point crossover, which generates two offspring by randomly exchanging statements between two parent tests $p_1$ and $p_2$. Given a random cut-point $\mu$, the first offspring $o_1$ inherits the first $\mu$ statements from parent $p_1$, followed by $|p_2| - \mu$ statements from parent $p_2$. Vice versa, the second offspring $o_2$ will contain $\mu$ statements from parent $p_2$ and $|p_1| - \mu$ statements from parent $p_1$. Even if both parents exercise one or more failing methods from the crash stack trace, after crossover is performed, the calls may be moved into one offspring only. Therefore, the traditional single-point crossover can hamper the overall algorithm.

To avoid this scenario, GGA leverages a novel guided single-point crossover operator, whose main steps are highlighted in Algorithm 3. The first steps in this crossover are identical to the standard single-point crossover: (i) it selects a random cut point $\mu$ (line 5), and (ii) it recombines statements from the two parents around the cut-point (lines 7-8 and 12-13 of Algorithm 3). After this recombination, if $o_1$ (or $o_2$) loses the target method calls (a call to one of the methods reported in the crash stack trace), we reverse the changes and re-define $o_1$ (or $o_2$) as a pure copy of its parent $p_1$ ($p_2$ for offspring $o_2$) (conditions in lines 10-11 and 16-17). In this case, the mutation operator will be in charge of applying changes to $o_1$ (or $o_2$).
Moving method calls from one test to another may result in non well-formed tests. For example, an offspring may not contain proper class constructors before calling some methods; or some input parameters (either primitive variables or objects) are not inherited from the original parent. For this reason, Algorithm 3 applies a correction procedure (lines 9 and 15) that inserts all required objects and primitive variables into non well-formed offspring.
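Continuing the simplified list-of-calls representation from the previous sketch (again our own names, not Algorithm 3's actual code), the reverting behavior can be illustrated as:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class GuidedCrossover {
    // Single-point crossover that reverts an offspring to a copy of its
    // parent whenever the recombination loses every failing-method call.
    static List<List<String>> crossover(List<String> p1, List<String> p2,
                                        Set<String> crashCalls, Random rnd) {
        int cut = rnd.nextInt(Math.min(p1.size(), p2.size()));
        List<String> o1 = new ArrayList<>(p1.subList(0, cut));
        o1.addAll(p2.subList(cut, p2.size()));
        List<String> o2 = new ArrayList<>(p2.subList(0, cut));
        o2.addAll(p1.subList(cut, p1.size()));
        if (!containsCrashCall(o1, crashCalls)) o1 = new ArrayList<>(p1); // revert
        if (!containsCrashCall(o2, crashCalls)) o2 = new ArrayList<>(p2); // revert
        return List.of(o1, o2);
    }

    static boolean containsCrashCall(List<String> test, Set<String> crashCalls) {
        return test.stream().anyMatch(stmt ->
                crashCalls.stream().anyMatch(stmt::contains));
    }
}
```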
**Guided Mutation.** After crossover, new tests are usually mutated (with a low probability) by adding, changing and removing some statements. While adding statements will not affect the type of method calls contained in a test, the statement deletion/change procedures may remove relevant calls to methods in the crash stack frame. Therefore, GGA also uses a new guided-mutation operator, described in Algorithm 4.
---
$^2$In the worst case, a failing method will be inserted at position $\text{size}$ in $t$, since the insertion probability will then be $1/(\text{size} - \text{size} + 1) = 1$.
Algorithm 4: GUIDED-MUTATION
Input: Test \( t = \langle s_1, \ldots, s_n \rangle \) to mutate
Set of failing methods \( M_{\text{crash}} \)
Result: Mutated test \( t' \)
1. begin
2. \( \text{apply\_mutation} \leftarrow \text{true} \)
3. while \( \text{apply\_mutation} = \text{true} \) do
4. &nbsp;&nbsp; for \( i = 1 \) to \( n \) do
5. &nbsp;&nbsp;&nbsp;&nbsp; \( \varphi \leftarrow \) random number \( \in [0; 1] \)
6. &nbsp;&nbsp;&nbsp;&nbsp; if \( \varphi \leq 1/n \) then
7. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; delete, change, or insert a statement at position \( i \)
8. &nbsp;&nbsp;&nbsp;&nbsp; end if
9. &nbsp;&nbsp; end for
10. &nbsp;&nbsp; if \( t' \) contains at least one call to a method in \( M_{\text{crash}} \) then
11. &nbsp;&nbsp;&nbsp;&nbsp; \( \text{apply\_mutation} \leftarrow \text{false} \)
12. &nbsp;&nbsp; end if
13. end while
14. end
Post processing. At the end of the search process, GGA returns the fittest test case according to our fitness function. The resulting test \( t_{\text{best}} \) can be directly used by developers as a starting point for crash replication and debugging.
Since method calls are randomly inserted/changed during the search process, the final test \( t_{\text{best}} \) can contain statements not useful to replicate the crash. For this reason, GGA post-processes \( t_{\text{best}} \) to make it more concise and understandable. For this post-processing, we reused the test optimization routines available in EvoSuite [21], namely: test minimization, and values minimization. Test minimization applies a simple greedy algorithm: it iteratively removes all statements that do not affect the final fitness value. Finally, randomly generated input values can be hard to interpret for developers [34]. Therefore, the values minimization from EvoSuite shortens the identified numbers and simplifies the randomly generated strings [35].
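The greedy minimization step can be pictured with the following hedged sketch (again over the simplified list-of-calls tests; `evaluate` stands in for running the test and computing $f(t)$, and is an assumption of ours rather than EvoSuite's actual interface):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

public class TestMinimizer {
    // Greedy minimization: iteratively drop any statement whose removal
    // does not change the fitness value of the test.
    static List<String> minimize(List<String> test,
                                 ToDoubleFunction<List<String>> evaluate) {
        double target = evaluate.applyAsDouble(test);
        List<String> minimized = new ArrayList<>(test);
        for (int i = minimized.size() - 1; i >= 0; i--) {
            String removed = minimized.remove(i);
            if (evaluate.applyAsDouble(minimized) != target) {
                minimized.add(i, removed); // statement was needed; put it back
            }
        }
        return minimized;
    }
}
```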
IV. EMPIRICAL STUDY
This section describes the empirical study we conducted to benchmark the effectiveness of the EvoCrash approach.
A. Definition and Context
The context of this study consists of 50 bugs from three real-world open source projects: Apache Commons Collections\(^3\) (ACC), Apache Ant\(^4\) (ANT), and Apache Log4j\(^5\) (LOG). ACC is a popular Java library with 25,000 lines of code (LOC), which provides utilities to extend the Java Collection Framework. For this library we selected 12 bug reports publicly available on Jira\(^6\), submitted between October 2003 and June 2012 and involving five different ACC versions. ANT is a large Java build tool with more than 100,000 LOC, which supports different built-in tasks, including compiling, running and executing tests for Java applications. For ANT we selected 20 bug reports submitted on Bugzilla\(^7\) between April 2004 and August 2012 that concern 10 different versions and sub-modules. Finally, LOG is a widely used Java library with 20,000 LOC that implements logging utilities for Java applications. For this library we selected 18 bug reports submitted within the time window between June 2001 and October 2009, related to three different LOG versions. The characteristics of the selected bugs, including the type of exception and priority, are summarized in Table I.
### Table I
<table>
<thead>
<tr>
<th>Project</th>
<th>Bug IDs</th>
<th>Versions</th>
<th>Exception</th>
<th>Priority</th>
<th>Ref.</th>
</tr>
</thead>
<tbody>
<tr>
<td>ACC</td>
<td>10798, 11570, 31003</td>
<td>4.0 - 4.4</td>
<td>NullPointer (1), Major (10)</td>
<td>Minor (10)</td>
<td>[6]</td>
</tr>
<tr>
<td>ANT</td>
<td>17026, 18026, 30262</td>
<td>1.0 - 1.4</td>
<td>NullPointer (1), Major (10)</td>
<td>Minor (10)</td>
<td>[7]</td>
</tr>
<tr>
<td>LOG</td>
<td>29, 43, 509, 10528</td>
<td>1.2 - 1.2</td>
<td>InitializerError (1), Medium (11)</td>
<td>Medium (11)</td>
<td>[6]</td>
</tr>
</tbody>
</table>
3https://commons.apache.org/proper/commons-collections/
4http://ant.apache.org
5http://logging.apache.org/log4j/2.x/
6https://issues.apache.org/jira/secure/Dashboard.jspa
7https://bz.apache.org/bugzilla/
The selection covers crashes that involve the most common Java exceptions [38], such as NullPointerException (77%), ArrayIndexOutOfBoundsException (8%), and IllegalStateException and IllegalArgumentException (4%). Furthermore, the severity of these real-world bugs varies between medium (50%), major (36%) and critical (6%), as judged by the original developers.
B. Research Questions
To evaluate the effectiveness of EvoCrash we formulate the following research questions:
- **RQ1:** In which cases can EvoCrash successfully reproduce the targeted crashes, and under what circumstances does it fail to do so? With this preliminary research question we aim at evaluating the capability of our tool to generate test cases (i) that can replicate the target crashes, and (ii) that are useful for debugging.
- **RQ2:** How does EvoCrash perform compared to state-of-the-art reproduction approaches based on stack traces? In this second research question we investigate the advantages of EvoCrash as compared to the best known stack trace approaches previously proposed in the literature.
C. Experimental Procedure
We run EvoCrash on each target crash to try to generate a test case able to reproduce the corresponding stack trace. Given the randomized nature of genetic algorithms, the search for each target bug/crash was repeated 50 times in order to verify that the target crashes are replicated the majority of the time. In our experiment, we configured GGA by using standard parameter values widely used in evolutionary testing [21], [39], [40]:
- **Population size:** for GGA, we initially use a population size of 50 test cases. If the search reaches the timeout (30 minutes), we increment the population size by 25 and run EvoCrash once again, until the population size reaches 300. If EvoCrash cannot create a test case with fitness = 0.0 in 30 minutes with a population size of 300, we classify the crash case as non-reproducible (this restart policy is sketched in code after this list).
- **Crossover:** we use the novel guided single-point crossover with crossover probability set to 0.75 [21].
- **Mutation:** as mutation operator we use our guided uniform mutation, which mutates test cases by randomly adding, deleting, or changing statements. We set the mutation probability equal to 1/n, where n is the length of the test case taken as input [21].
- **Search Timeout:** the search stops when a zero fitness function value is detected or when the timeout of 30 minutes is reached [40].
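The restart policy from the population-size item above can be summarized in a few lines; `runWithPopulation` is a stand-in of ours for launching one 30-minute EvoCrash run and reporting whether fitness 0.0 was reached:

```java
import java.util.function.IntPredicate;

public class ExperimentDriver {
    // Grow the population by 25 after every timed-out run; a crash is
    // classified as non-reproducible once a 300-individual run also fails.
    static boolean tryToReproduce(IntPredicate runWithPopulation) {
        for (int populationSize = 50; populationSize <= 300; populationSize += 25) {
            if (runWithPopulation.test(populationSize)) {
                return true; // fitness 0.0 reached within 30 minutes
            }
        }
        return false; // non-reproducible under this configuration
    }
}
```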
To address RQ1, we apply the two criteria proposed by Chen and Kim [6] for evaluating the effectiveness of crash replication tools: Crash Coverage and Test Case Usefulness. According to the Crash Coverage criterion, a crash is covered when the test generated by EvoCrash results in the generation of the same type of exception at the same crash line as reported in the crash stack trace. Therefore, for this criterion we classified as covered only those crashes for which EvoCrash reached a fitness value equal to 0.0, i.e., when the generated crash stack trace is identical to the target one. In these cases, we also re-executed the generated tests against the CUT to ensure that the crash stack trace was correctly replicated.
For the Test Case Usefulness criterion, a test case generated by EvoCrash is considered useful if it can reveal the actual bug that causes the original crash. Therefore, we manually examined each crash classified as covered (using the coverage criterion) to investigate whether it can reveal the actual bug, following the guidelines in [6]. A test case reveals a bug if the generated crash trace includes the buggy frame (i.e., the stack element in which the buggy method lies [6]) or a frame whose execution covers the buggy component. To assess the usefulness of the tests, we carefully inspected the original developers' fixes to identify the bug fixing locations. Finally, useful tests have to reveal the origin of the corrupted input values (e.g., null values) passed to the buggy methods that trigger the crash [6]. This manual validation was performed by two authors independently, and cases of disagreement were discussed.
To address RQ2, we selected three state-of-the-art techniques, namely: STAR [6], MuCrash [12], and JCHARMING [7]. These three techniques are modern approaches to crash replication for Java programs, and they are based on three different categories of algorithms: symbolic execution [6], mutation analysis [12], and model checking [7].
At the time of this submission, these three tools (either as executable jars or source code) were not available. Therefore, to compare our approach, we rely on their published data. Since the studies use different data sets, we cannot report data points for all subject systems. Thus, we compared EvoCrash with MuCrash for the 12 bugs selected from ACC that were also used by Xuan et al. [12] to evaluate their tool. We compared EvoCrash with JCHARMING for the 8 bug reports that were also used by Nayrolles et al. [7]. Finally, we compared EvoCrash with STAR for the 50 bugs in our sample that are in common with the study by Chen and Kim [6].
V. EXPERIMENTAL RESULTS
This section presents the results of the empirical study we conducted to evaluate the effectiveness of EvoCrash in terms of crash coverage and test case usefulness. Moreover, we provide the first comparison results between the effectiveness of EvoCrash, STAR [6], MuCrash [12], and JCHARMING [7], as the state-of-the-art approaches based on crash stack traces.
**EvoCrash Results (RQ1)** As Table II illustrates, EvoCrash can successfully replicate the majority of the crashes in our dataset. Of the replicated cases, LOG-509 had the lowest rate of replications (39 out of 50), while 39 cases could be replicated 50 times out of 50. EvoCrash reproduces 10 crashes out of 12 (83%) for ACC, 14 out of 20 (70%) for ANT, and 17 out of 18 (94%) for LOG. Overall, it can replicate 41 (82%) of the 50 crashes.
To assess the usefulness of the generated test cases, we used the same criterion that was used for STAR [6].
## TABLE II
**Detailed crash reproduction results**, where Y (yes) indicates that a useful test case could be generated, N (no) indicates that the crash could not be reproduced, NU (not useful) indicates that a test case could be generated but was not useful, and "-" indicates that data regarding the capability of the approach to reproduce the given crash is missing.
<table>
<thead>
<tr>
<th>Project</th>
<th>Bug ID</th>
<th>EvoCrash</th>
<th>STAR</th>
<th>MuCrash</th>
<th>JCHARMING</th>
</tr>
</thead>
<tbody>
<tr>
<td>ACC</td>
<td>4</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>28</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>35</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>48</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>53</td>
<td>Y</td>
<td>Y</td>
<td>N</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>68</td>
<td>N</td>
<td>N</td>
<td>N</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>70</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>77</td>
<td>NU</td>
<td>NU</td>
<td>N</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>104</td>
<td>N</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>331</td>
<td>Y</td>
<td>N</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>377</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>411</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td>ANT</td>
<td>28285</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>33446</td>
<td>NU</td>
<td>NU</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>34722</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>34734</td>
<td>NU</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>36733</td>
<td>NU</td>
<td>NU</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>39458</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>38622</td>
<td>NU</td>
<td>Y</td>
<td>-</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>42179</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>43289</td>
<td>N</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>46489</td>
<td>Y</td>
<td>NU</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>44790</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>46747</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>47306</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>48715</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>49137</td>
<td>Y</td>
<td>NU</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>49755</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>49803</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>50894</td>
<td>Y</td>
<td>NU</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>51035</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>53626</td>
<td>Y</td>
<td>N</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td>LOG</td>
<td>29</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>43</td>
<td>N</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>509</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>10528</td>
<td>Y</td>
<td>N</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>10706</td>
<td>Y</td>
<td>N</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>11570</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>31003</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>40212</td>
<td>Y</td>
<td>NU</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>41186</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td></td>
<td>44032</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>44899</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>45335</td>
<td>Y</td>
<td>NU</td>
<td>-</td>
<td>N</td>
</tr>
<tr>
<td></td>
<td>46144</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>46271</td>
<td>NU</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>46404</td>
<td>Y</td>
<td>N</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>47547</td>
<td>Y</td>
<td>Y</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>47912</td>
<td>Y</td>
<td>NU</td>
<td>Y</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>47957</td>
<td>NU</td>
<td>Y</td>
<td>-</td>
<td>N</td>
</tr>
</tbody>
</table>
Based on this criterion, 34 (83%) of the replications were useful, as they included the buggy frame. The remaining replications were not useful mainly because they depended on data from external files that were not available during replication.
For ACC, there were two cases (ACC-68 and ACC-104) that EvoCrash could not reliably reproduce. For ACC-68, the class under test includes three nested classes, and the crash occurs in the innermost one; currently, EvoSuite does not support the instrumentation of multiple inner classes. For ACC-104, EvoCrash replicated the crash in only 4 of the 50 runs. This low ratio is due to the fact that the calls to the input object and to the target method have to be made in a specific order to trigger the crash.
For ANT, 6 of the 20 cases (30%) are currently not supported by EvoCrash. For these cases, the major hindering factor was the dependency on an external build.xml file, which ANT uses to set up the project configuration but which was not supplied with many of the crash reports. In addition, the use of Java reflection made these ANT cases harder to reproduce, since the specific class and method names involved are not known from the crash stack trace.
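The reflection problem can be seen in a small, self-contained example (hypothetical code, not taken from ANT): any crash raised inside the reflected call records generic Method.invoke frames, while the actual class and method names are runtime data that never appear in the trace.

```java
import java.lang.reflect.Method;

// Hypothetical illustration of why reflective calls hinder trace-based
// reproduction: the class and method being invoked are runtime data.
public class ReflectiveTask {
    public static void main(String[] args) throws Exception {
        String className = args[0];   // e.g., read from a build file at run time
        String methodName = args[1];
        Class<?> target = Class.forName(className);
        Method m = target.getMethod(methodName);
        // If the reflected method crashes, the trace shows Method.invoke
        // frames; the strings in className/methodName are not recoverable
        // from the stack trace alone.
        m.invoke(null);
    }
}
```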
For LOG, 1 of the 18 cases (5%) is not supported by EvoCrash. In this case, the target call is made in a static class initializer, which EvoCrash does not support yet.
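A minimal example (not from the LOG case itself) illustrates the problem: a crash inside a static initializer surfaces as an ExceptionInInitializerError the first time the class is used, so there is no method a generated test could call directly as the target.

```java
// A crash in a static class initializer: the NumberFormatException thrown
// below (the "limit" property is unset, so parseInt receives null) is
// wrapped in an ExceptionInInitializerError when Config is first touched.
public class StaticInitCrash {
    static class Config {
        static final int LIMIT = Integer.parseInt(System.getProperty("limit"));
    }

    public static void main(String[] args) {
        System.out.println(Config.LIMIT); // triggers the class initializer
    }
}
```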
### Comparison to the State of the Art (RQ2)
Table II shows the comparison of EvoCrash with STAR, MuCrash, and JCHARMING. There are 22 cases in which EvoCrash replicates a crash that at least one of the other techniques cannot, and 2 cases that EvoCrash cannot reproduce while another technique can. Below we discuss these cases in more detail.
**EvoCrash vs. STAR.** As Table II shows, for ACC, EvoCrash covers all the cases that STAR covers, except for ACC-104 (discussed above). In addition, EvoCrash covers 3 cases (25%) that were not covered by STAR due to the path explosion problem. For instance, in ACC-331 the defect lies in a private method, least, inside a for loop, within the third if condition, which was too complicated for STAR. The case was complex for EvoCrash too: it was one of the cases for which we had to increase the population size (from 50 to 175).
For ANT, EvoCrash supports 7 cases (35%) that are not covered by STAR. For 3 of these 7 cases, only EvoCrash can generate a useful test case. Listing 1 shows the crash stack trace for one of them (ANT-49137). As reported in the issue tracking system of the project\(^8\), the defect in this case lies in the 4th stack frame. Thus, a useful test case should (i) make a call to the method delete, (ii) trigger a `java.lang.NullPointerException`, and (iii) yield a crash trace that includes the first stack frame, where the exception was thrown. The test generated by EvoCrash, shown in Listing 2, meets these conditions:
```java
public void test0() throws Throwable {
Symlink symlink0 = new Symlink();
symlink0.setLink("");
symlink0.delete();
}
```
Listing 2. The EvoCrash Test for ANT-49137.
\(^8\)https://bz.apache.org/bugzilla/show_bug.cgi?id=49137
```java
public void test0() throws Throwable {
  java.io.File v1 = (java.io.File) null;
  org.apache.tools.ant.util.SymbolicLinkUtils v2 =
      org.apache.tools.ant.util.SymbolicLinkUtils.getSymbolicLinkUtils();
  v2.isSymbolicLink((java.io.File) v1, (java.lang.String) null);
}
```
Listing 8. The EvoCrash Test for LOG-45335.
**EvoCrash vs. MuCrash.** In the case of UnboundedFifoBuffer, the tail index is set to a number larger than the buffer size, and then the method remove is invoked. In addition, the order in which the methods are invoked matters: if the tail index were set after remove is called, the target crash would not be replicated. As shown in Listing 6, EvoCrash synthesized the right method sequence and reproduced ACC-53.
**EvoCrash vs. JCHARMING.** As Table II shows, only a few cases from ANT and LOG are shared with the cases used to evaluate JCHARMING. While 75% of the shared cases are covered by both EvoCrash and JCHARMING, there is a substantial difference in the efficiency of the two approaches: on average, EvoCrash takes less than 2 minutes to cover the target crashes, whereas JCHARMING may take from 10 to 38 minutes to generate tests for the same cases.
Furthermore, 2 of the 7 shared LOG cases (29%) are supported only by EvoCrash. As an example, Listing 7 shows the crash stack trace for LOG-45335, one of the two cases covered only by EvoCrash. To generate a useful test for LOG-45335, as depicted in Listing 8, EvoCrash sets the ht state in NDC to null and then calls the static method remove, which is the buggy-frame method.
VI. DISCUSSION
We identify two possible directions for future work.
**Interactive Search.** It should be noted that, since GGA strives to find the fittest test case, discarding those with fitness ≠ 0.0, the crash coverage and usefulness evaluation was performed on the set of EvoCrash tests with fitness equal to 0.0. However, considering the crash coverage and usefulness criteria adopted from STAR [6], it is possible that EvoCrash discarded tests with fitness between 0.0 and 1.0,
which would actually conform to the aforementioned criteria. Considering the fitness function’s range, fitness values lie between 0.0 and 6.0, where 6.0 denotes a test case that does not reach the target line, and therefore neither invokes the target method nor triggers the target exception. In contrast, a fitness of 0.0 means that the test covers the target line and method, and triggers the target exception. According to the definition of the fitness function (presented in Section III), when the fitness value is between 0.0 and 1.0, the target line and exception are covered, but the stack trace similarity is not yet ideal. In this case, even though the target stack similarity is not achieved, the crash coverage and test usefulness criteria may still be met. As a result, future work could provide interactive mechanisms through which the precision of the fitness function can be adjusted, so that tests with fitness between 0.0 and 1.0 can also be accepted.
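For reference, the shape of this fitness function can be summarized as follows (a compact recap consistent with the ranges described here; \( d_{line} \), \( d_{except} \), and \( d_{trace} \) stand for the normalized line, exception-type, and trace-similarity distances of Section III):

\[ f(t) \;=\; 3 \cdot d_{line}(t) \;+\; 2 \cdot d_{except}(t) \;+\; d_{trace}(t), \qquad d_{line}(t),\, d_{except}(t),\, d_{trace}(t) \in [0, 1] \]

With all three distances at their maximum, \( f(t) = 6.0 \); with \( d_{line}(t) = d_{except}(t) = 0 \) and \( d_{trace}(t) > 0 \), the value falls in the interval \( (0, 1] \) discussed above.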
In addition, dependency on external files was a major factor that prevented EvoCrash from covering more cases. As described earlier, for some of the cases with environmental dependencies we increased the population size, which led to the successful generation of tests for some of them. Thus, enabling bug reporters to supply the required external files, and enabling developers to specify such files or to adjust the population size through interactive mechanisms, is another possible direction for future work.
**Extending Comparisons.** To compare EvoCrash with STAR, MuCrash, and JCHARMING, we had to identify the subset of cases shared among the empirical evaluations of these techniques; we therefore see the need to extend the comparisons between (i) EvoCrash and JCHARMING and (ii) EvoCrash and MuCrash. To improve the comparison with JCHARMING, we would adopt the other projects targeted by JCHARMING and evaluate EvoCrash on the cases identified for them. Considering the substantial performance difference between EvoCrash and JCHARMING, we also wish to compare the efficiency of the tools statistically; to do so, we would rely on the availability of JCHARMING for experimentation. To improve the comparison with MuCrash, if additional evaluation data is published for the tool, or MuCrash becomes publicly available, we would extend the empirical study to increase the validity of the comparison results.
VII. THREATS TO VALIDITY
With respect to external validity, the main threats arise from the focus on Java and open source. The use of Java is needed for our experiments due to the dependency on EvoSuite, yet we expect our approach to behave similarly on other languages such as Ruby or C#.
To maximize reproducibility and to enable comparison with the state of the art we rely on open source Java systems. We see no reason why closed source stack traces would be substantially different. As part of our future work we will engage with one of our industrial partners, mining their log files for frequent stack traces. This will help them create test cases that they can add to their test suite to reproduce and fix errors their software suffers from.
In order to facilitate comparison with earlier approaches we selected bugs and system versions that have been used in earlier studies, and hence are several years old. We anticipate that our approach works equally well on more recent bugs or versions as well, but have not conducted such experiments yet.
A finding of our experiments is that a key limiting factor for any stack-trace based approach is the unavailability of external data that may be needed for the reproduction. Further research is needed to (1) mitigate this limitation; and (2) identify a different data set of crashes focusing on such missing data, in order to further narrow down this problem.
With respect to internal validity, a key threat lies in the evaluation of the crash coverage and the usefulness of the generated test cases. Whenever EvoCrash generated a test with fitness 0.0, we double-checked the generated crash stack trace to ensure that the corresponding test correctly replicated the crash stack frame. Despite these procedures, it is still possible that we made errors in the inspections and evaluations. To mitigate the chance of errors, we peer-reviewed the tests and crashes. In addition, we make the EvoCrash tool and the generated test cases publicly available for further evaluation.
VIII. CONCLUSION
To increase developers’ productivity during debugging, several approaches to automated crash reproduction have been proposed. However, the existing solutions have limitations that adversely affect their capability to cover crash cases from real-world software projects. This paper presents EvoCrash, a search-based approach to crash replication that uses data from crash stack traces. EvoCrash applies a novel Guided Genetic Algorithm (GGA) as well as a smart fitness function to search for a test case that triggers the target crash and reveals the buggy frame in the crash stack trace. Our experimental evaluation shows that EvoCrash addresses major challenges faced by three state-of-the-art approaches, and thereby outperforms them in automated crash reproduction.
Future work may take several directions, including: (i) enhancing the fitness function implemented in EvoCrash; (ii) extending the comparison between EvoCrash and the other techniques, which largely depends on the availability of those tools; and (iii) evaluating EvoCrash on industrial projects.
The implementation of EvoCrash, as well as the experimental data are publicly available.
ACKNOWLEDGMENT
This research was partially funded by the EU Project STAMP ICT-16-10 No. 731529, the Dutch 4TU project “Big Software on the Run”, and the National Research Fund, Luxembourg (FNR/P10/03).
REFERENCES
Specifying and Enforcing Intertask Dependencies
Paul C. Attie, Munindar P. Singh
Carnot Project, MCC
3500 W. Balcones Center Drive
Austin, TX 78759
USA
{attie, msingh}@mcc.com
Amit Sheth
Bellcore
444 Hoes Lane
Piscataway, NJ 08854
USA
amit@ctt.bellcore.com
Marek Rusinkiewicz
Dept of Computer Science
University of Houston
Houston, TX 77204
USA
marek@cs.uh.edu
Abstract
Extensions of the traditional atomic transaction model are needed to support the development of multi-system applications or workflows that access heterogeneous databases and legacy applications. Most extended transactions use conditions involving events or dependencies between transactions. Intertask dependencies can serve as a uniform framework for defining extended transaction models. In this paper, we introduce event attributes needed to determine whether a dependency is enforceable and to properly schedule events in extended transaction models. Using these attributes and a formalization of a dependency into the temporal logic CTL, we can automatically synthesize an automaton that captures the computations that satisfy the given dependency. We show how a set of such automata can be combined into a scheduler that produces global computations satisfying all relevant dependencies. We show how dependencies required to implement relaxed transactions such as Sagas can be enforced and discuss briefly the issues of concurrency control, safety, and recoverability.
1 Introduction
One of the main objectives of the Carnot project at MCC is to provide an environment for the development and execution of applications that access related information stored in multiple existing systems [Ca91]. An important component of this effort is a facility for relaxed task management. A task is any unit of computation that performs some useful function in a system. The tasks that are of particular interest are database transactions. To efficiently develop such multi-system applications accessing existing heterogeneous and closed\(^1\) systems, we must be able to modularly capture the execution constraints of various applications. This can be achieved by modeling them as relaxed transactions consisting of related tasks executed on different systems.
The requirements of the traditional transaction model based on full isolation, atomic commitment, and global serializability may be either too strong, or not sufficient for a particular multi-system application. For example, an application may need to ensure that two tasks commit only in a certain temporal order. An example is a banking application in which deposits made into an account over a certain period may have to be processed before debits are made from the account over the same period. Therefore, we may need to selectively relax the ACID properties [Gra81, HR83] for multi-system transactions to capture precisely the synchrony and coupling requirements based on the true application semantics. The semantic constraints may be specified as intertask dependencies, which are constraints over significant task events, such as commit and abort.
The concomitant reduction in semantic constraints across tasks enables the generation of scripts that can be efficiently executed with a high level of parallelism. This, in turn, may result in a higher availability of data, better response times, and a higher throughput. The modeling of complex telecommunication applications is discussed in [ANRS92], where it is argued that many multi-system applications can be efficiently modeled and executed as relaxed transactions.
To illustrate these concepts, let us consider the following scenario. A travel agency maintains two databases: one containing detailed information about the bookings made by different agents and another
\(^1\) In many such systems, the data can be accessed only through the existing interfaces, even if it is internally stored under the control of a general-purpose DBMS. Such systems are frequently referred to as legacy systems, and the applications that access several of them are called workflows.
containing a summary of the information in the first database with the number of bookings per agent. When the summary changes, a task is run that sets off an alarm if the summary falls below a preset threshold. An obvious integrity constraint is that for each travel agent, the number of rows in the bookings database should be equal to the number of bookings stored for that agent in the summary database.
If it holds initially, this constraint can be assured by executing all the updates to both databases as atomic multidatabase transactions that are globally serializable [BS88]. This, however, may be inefficient or even impossible, if the database interfaces do not provide visible two-phase commit facilities. Instead, we may assume that the interdatabase integrity is maintained by executing separate tasks that obey the appropriate intertask dependencies. These dependencies state that if a delete task on the bookings database commits, then a decrement-summary task should also commit. Furthermore, if a delete task aborts, while its associated decrement-summary task commits, then we must restore consistency by compensating for the spurious decrement. We do this by executing an increment-summary task. Figure 1 shows the tasks involved in this example; \( dB \), \( dS \), \( iS \), and \( u/a \) denote the delete-booking, decrement-summary, increment-summary, and update-alarm tasks, respectively.
Figure 1: Task Graph for the Delete Booking Example
We model each intertask dependency as a *dependency automaton*, which is a finite state automaton whose paths represent the computations that satisfy the dependency. Each such automaton ensures that its corresponding dependency is not violated, by permitting only those events whose execution would not lead to the violation of the dependency. The *scheduler* receives events corresponding to a possible task execution. It queries the applicable dependency automata to determine whether they all allow the event to be executed. If so, the event is executed; otherwise, it is delayed (if delayable) and re-attempted later.
We present a framework in which dependencies can be stated modularly as constraints across tasks. We also present a *scheduler* that enforces all stated dependencies, provided they are jointly enforceable, and assures that a dynamically changing collection of tasks is executed in accordance with the dependencies. It does this by appropriately accepting, rejecting, or delaying significant events.
The rest of the paper is organized as follows. Section 2 provides the technical and methodological background for our work and gives an example of its application. Section 3 describes how we formally specify dependencies, discusses event attributes and their impact on the enforceability of dependencies, and considers how dependencies can be added or removed at run-time. Section 4 gives a formal definition of a dependency automaton, which we use to represent each dependency; it also shows how dependency automata operate and enforce their corresponding dependencies. Section 5 presents our execution model as well as the notion of *viable paths*, which we use as a correctness criterion; it formalizes these definitions and uses them in the definition of a scheduling algorithm.\(^2\) It also shows how a relaxed transaction model such as the Sagas [GS87] can be described (and hence enforced) as a set of dependencies. Section 6 briefly discusses the concurrency control, safety, and recovery issues in the context of flexible transactions [JNRS91]. Some conclusions are presented in Section 7.

\(^2\) This paper is a revised and abbreviated version of the report [ASRS92], available from the authors. The report contains proofs of all theorems.
2 Background
The specification and enforcement of intertask dependencies has recently received much attention [CR90, DHL90, EL92, ELLR90, KL91]. Following [Kl91] and [CR92], we specify intertask dependencies as constraints on the occurrence and temporal order of certain significant events. Klein has proposed the following two primitives [Kl91]:
1. \( e_1 \rightarrow e_2 \): If \( e_1 \) occurs, then \( e_2 \) must also occur. There is no implied ordering on the occurrences of \( e_1 \) and \( e_2 \).
2. \( e_1 < e_2 \): If \( e_1 \) and \( e_2 \) both occur, then \( e_1 \) must precede \( e_2 \).
Well-known examples of dependencies include:
- Commit Dependency [CR92]: Transaction \( A \) is commit-dependent on transaction \( B \) iff, if both transactions commit, then \( A \) commits before \( B \). Let the relevant significant events be denoted \( cm_A \) and \( cm_B \); this can be expressed as \( cm_A < cm_B \).
- Abort Dependency [CR92]: Transaction \( A \) is abort-dependent on transaction \( B \) iff, if \( B \) aborts, then \( A \) must also abort. Let the significant events here be \( ab_A \) and \( ab_B \); this can be written \( ab_B \rightarrow ab_A \).
- Conditional Existence Dependency [KL91]: If event $e_1$ occurs, then if event $e_2$ also occurs, then event $e_3$ must occur. That is, the existence dependency between $e_2$ and $e_3$ comes into force if $e_1$ occurs. This can be written $e_1 \rightarrow (e_2 \rightarrow e_3)$.
Note that we allow dependencies of the form $E_1 \rightarrow E_2$, where $E_1$ and $E_2$ are general expressions. An expression $E$ can be formally treated as an event by identifying it with the first event occurrences that make it definitely true. For example, $e_2 \rightarrow e_3$ is made true as soon as $e_3$ or the complement of $e_2$ occurs.
The above primitives can capture many of the semantic constraints encountered in practice; any useful framework for intertask dependencies should be at least as powerful. Our approach meets this criterion: $\rightarrow$ and $<$ are special cases of our formalism.
The relationships between the significant events of a task can be represented by a state transition diagram, which serves as an abstraction for the actual task by hiding irrelevant details of its internal computations. The execution of an event causes a transition of the task to another state. Figure 2 shows an example task state transition diagram taken from [KL91]. From its initial state (at the bottom of the diagram), the task first executes a start event (st). Once the task has started, it will eventually either abort, as represented by the ab transition, or finish, as represented by the pr transition (for “done”). When a task is done, it can either commit, i.e., make the cm transition, or abort, i.e., make the ab transition.
Using the state transition diagrams and significant events defined above, we can represent the travel agent application described in the previous section as shown in Figure 3. The intertask dependencies are shown as “links” between states that result after the corresponding significant events of the different tasks are performed ($\&$ denotes conjunction).
Figure 3: Tasks and intertask dependencies for the travel agency application
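In the primitives introduced above, the two consistency-restoring requirements of this example can be written as follows (one way to render the prose of Section 1, with \( cm \) and \( ab \) denoting the commit and abort events of the respective tasks):

\[ cm_{dB} \rightarrow cm_{dS}, \qquad (ab_{dB} \wedge cm_{dS}) \rightarrow cm_{iS} \]

That is, a committed delete-booking must be matched by a committed decrement-summary, and a spurious decrement (delete aborted but decrement committed) must be compensated by a committed increment-summary.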
3 Intertask Dependency Declarations
As discussed in Section 2, we specify intertask dependencies as constraints on the occurrence and temporal order of events. The significant events and transitions of a task depend on the characteristics of the local system where it executes. Our theory and implementation apply to tasks with an arbitrary set of task states and significant events. We assume that an event can occur at most once in any possible execution of the system. This is not a restriction in real terms: if a task aborts and must be re-executed, a new id may be generated for it (and for its events), the dependencies can be appropriately modified, and everything can proceed normally.
Let $e$, $e_i$, $e_j$, etc. denote any significant event and $D(e_1, \ldots, e_n)$ denote an unspecified dependency over $e_1, \ldots, e_n$.
3.1 Formal Specification of Dependencies
We adopt the language of Computation Tree Logic (CTL) as the language of our dependencies [Em90]. CTL is a powerful language, well known from distributed computing. A brief description of CTL and the modeling of various dependencies is given in Appendix A. The primitives < and → are useful macros that yield CTL formulae. CTL can uniformly express different dependencies, and, since it is a formal language, it helps reduce ambiguity in communication. It also makes it possible to formally determine the relationships among different dependencies, e.g., whether they are consistent, or whether one entails another.
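As an illustration, suppose \( done(e) \) is a proposition that becomes, and stays, true once event \( e \) has occurred (recall that events occur at most once). One plausible CTL rendering of the two primitives, not necessarily the exact formulae of Appendix A, is:

\[ e_1 \rightarrow e_2 \;\;\equiv\;\; AG\,( done(e_1) \Rightarrow AF\, done(e_2) ) \]
\[ e_1 < e_2 \;\;\equiv\;\; AG\,( (done(e_2) \wedge \neg done(e_1)) \Rightarrow AG\, \neg done(e_1) ) \]

The first formula says that on every path, once \( e_1 \) has occurred, \( e_2 \) inevitably occurs (or has already occurred); the second says that \( e_1 \) never occurs strictly after \( e_2 \).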
We would like our dependencies to be easily specifiable by users or database administrators. For this reason, it is essential that the automata that enforce those dependencies be synthesized automatically from the dependency specifications. CTL formulae can be used to automatically synthesize dependency automata; this process is hidden from the dependency specifier. Thus we retain the flexibility of Klein’s approach, while using a formal, more expressive, and more general representation.
3.2 Enforceable Dependencies
The scheduler enforces a dependency by variously allowing, delaying, rejecting, or forcing events to occur, so that the resulting computation satisfies the given dependency. Some syntactically well-formed dependencies may not be enforceable at run-time. For example, the dependency \( ab(T_1) \rightarrow cm(T_2) \) is not enforceable, because a scheduler can neither prevent \( ab(T_1) \) from occurring nor in general guarantee the occurrence of \( cm(T_2) \). This is because, in general, a scheduler cannot prevent tasks from unilaterally deciding to abort. Thus both \( T_1 \) and \( T_2 \) can abort.
We associate the following attributes with significant events that meet the given conditions:
- **Forcible**: events whose execution can be forced;
- **Rejectable**: events whose execution can be prevented;
- **Delayable**: events whose execution can be delayed.
We assume below that the local systems on which the tasks are executed provide a prepared-to-commit state, so that a task can issue a prepare-to-commit (pr) event. The prepared-to-commit state is *visible* if the scheduler can decide whether the prepared task should commit or abort. Table 1 below shows the attributes of the significant events of transactions commonly found in database applications and DBMSs. Therein, a ✓ indicates that the given attribute always holds, whereas a × indicates that the given attribute may not always hold.
<table>
<thead>
<tr>
<th>Event</th>
<th>Forcible?</th>
<th>Rejectable?</th>
<th>Delayable?</th>
</tr>
</thead>
<tbody>
<tr>
<td>cm</td>
<td>×</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>ab</td>
<td>✓</td>
<td>×</td>
<td>×</td>
</tr>
<tr>
<td>pr</td>
<td>×</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
Table 1: Attribute Table for Significant Events
We can characterize the enforceability of dependency \( D(\epsilon_1, \ldots, \epsilon_n) \) in terms of the attributes of \( \epsilon_1, \ldots, \epsilon_n \). For example, \( \epsilon_1 \rightarrow \epsilon_2 \) is run-time enforceable if rejectable(\( \epsilon_1 \)) and delayable(\( \epsilon_1 \)) hold, since we can then delay \( \epsilon_1 \) until \( \epsilon_2 \) is submitted, and reject \( \epsilon_1 \) if we see that the task that issues \( \epsilon_2 \) has terminated (or timed out; see below) without issuing \( \epsilon_2 \). Alternatively, if \( \epsilon_2 \) is forcible, then we can enforce \( \epsilon_1 \rightarrow \epsilon_2 \) at run-time by forcing the execution of \( \epsilon_2 \) when \( \epsilon_1 \) is accepted for execution. Yet another (although somewhat vacuous) strategy would be to unconditionally reject \( \epsilon_1 \). This strategy is available if rejectable(\( \epsilon_1 \)) holds.
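These alternatives can be summarized as a simple decision procedure (a sketch with hypothetical names, not the paper’s implementation):

```java
// Hypothetical sketch: choose an enforcement strategy for e1 -> e2 from the
// attributes of its events, following the options described above.
enum Strategy { FORCE_E2, DELAY_THEN_REJECT_E1, ALWAYS_REJECT_E1, UNENFORCEABLE }

class ImplicationEnforcement {
    static Strategy choose(boolean e2Forcible,
                           boolean e1Rejectable, boolean e1Delayable) {
        if (e2Forcible)
            return Strategy.FORCE_E2;             // force e2 once e1 is accepted
        if (e1Rejectable && e1Delayable)
            return Strategy.DELAY_THEN_REJECT_E1; // hold e1; reject it if e2's task
                                                  // terminates without issuing e2
        if (e1Rejectable)
            return Strategy.ALWAYS_REJECT_E1;     // vacuous but safe
        return Strategy.UNENFORCEABLE;            // e.g., ab(T1) -> cm(T2)
    }
}
```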
As another example, consider \( \epsilon_1 < \epsilon_2 \), for which there are two possible strategies. The first, which can be applied if delayable(\( \epsilon_2 \)) holds, is to delay \( \epsilon_2 \) until either \( \epsilon_1 \) has been accepted for execution or task 1 has terminated without issuing \( \epsilon_1 \). The second, which can be applied if rejectable(\( \epsilon_1 \)) holds, is to let \( \epsilon_2 \) be executed when it is submitted and thereafter reject \( \epsilon_1 \) if it is submitted.
One way to extend our approach to real-time dependencies is by considering real-time events, such as clock times (e.g., 5:00 p.m.), as regular events that lack the attribute of delayability. Consider \( \epsilon_1 < 5:00 \text{ p.m.} \). This dependency is enforceable only if \( \epsilon_1 \) is rejectable. The scheduler can enforce \( \epsilon_1 < 5:00 \text{ p.m.} \) by accepting \( \epsilon_1 \) if 5:00 p.m. has not already occurred (i.e., if it is before 5:00 p.m.) and by rejecting \( \epsilon_1 \) otherwise.
3.3 Dynamic Addition and Removal of Dependencies
The preceding exposition assumed that all dependencies are given initially, i.e., at compile-time. However, dependencies may be added or deleted dynamically at run-time. The removal of a dependency is achieved simply by removing its corresponding automaton. The addition of a dependency requires that an automaton be synthesized for it and used in further scheduling. A dependency may be added too late to be enforced. Suppose \( D = \epsilon_1 \rightarrow \epsilon_2 \) is added after \( \epsilon_1 \) occurs. If \( \epsilon_2 \) is not forcible and is never submitted, \( D \) cannot be enforced. This is unavoidable in general, since the addition of dependencies cannot be predicted. At best we can report a violation when such a dependency is added.
4 Dependency Automata: Enforcing a Single Dependency
For each dependency \( D \), we create a finite state machine \( A_D \) that is responsible for enforcing \( D \). \( A_D \) captures all possible orders of event on which \( D \) is satisfied. This can be done either manually, or by using an extension of the CTL synthesis technique of [EC82, Em90] that we have developed [ASRS92]. Our procedure requires only the specification of the dependencies, not of the tasks over which those dependencies are defined. That is, the precise transitions for a task’s state transition diagram do not affect the representations of the different dependencies. As a result, our procedure generates an open system. By contrast, traditional temporal logic synthesis methods [EC82, MW84] require a specification of the entire system. Thus their results have to be recomputed whenever the system is modified. The details of the synthesis procedure are omitted for brevity, but can be found in [ASRS92]. In the worst case, the size of \( A_D \) is exponential in the number of events
in $D$. This number is often small (in our experience, 2-4), so the complexity is not a major impediment in practice.
\( A_D \) is a tuple \( (s_0, S, \Sigma, \rho) \), where \( S \) is a set of states, \( s_0 \) is the distinguished initial state, \( \Sigma \) is the alphabet, and \( \rho \subseteq S \times \Sigma \times S \) is the transition relation. We use \( t_i \) to indicate the specific termination event of task \( i \), and \( \varepsilon \) to denote any event, which can be either a significant event (written \( e \)) or a termination event. We discuss the generation and usage of termination events below. The elements of \( \Sigma \) are written \( \sigma \), \( \sigma' \), etc., and can take any of the forms described below (a minimal data-structure sketch follows the list):
- $a(\varepsilon_1, \ldots, \varepsilon_m)$: This indicates that $A_D$ accepts the events $\varepsilon_1$ through $\varepsilon_m$. If this transition is taken by $A_D$, then each $\varepsilon_i$ is accepted and, if $\varepsilon_i$ is a significant event, it is then forwarded to the event monitor for execution.
- $r(\varepsilon_1, \ldots, \varepsilon_m)$: This indicates that $A_D$ rejects the events $\varepsilon_1$ through $\varepsilon_m$, because the execution of any of them would violate the dependency $D$.
- $\sigma_1 \parallel \ldots \parallel \sigma_n$, where the $\sigma_i \in \Sigma$: This indicates the interleaving of the accept operations corresponding to $\sigma_1$ through $\sigma_n$.
- $\sigma_1; \ldots; \sigma_n$, where the $\sigma_i \in \Sigma$: This indicates the accept operations of $\sigma_i$ occur before the accept operations of $\sigma_{i+1}$ (for $1 \leq i \leq (n - 1)$).
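The data-structure sketch promised above (a hypothetical Java rendering of the tuple \( (s_0, S, \Sigma, \rho) \), not the authors’ code; composite \( \parallel \) and \( ; \) labels are omitted for brevity):

```java
import java.util.*;

// Hypothetical rendering of a dependency automaton (s0, S, Sigma, rho).
class DependencyAutomaton {
    // A transition label either accepts or rejects a set of events.
    record Label(boolean accept, Set<String> events) {}
    record Transition(int from, Label label, int to) {}

    final int initialState;              // s0
    final Set<Integer> states;           // S
    final List<Transition> transitions;  // rho, a subset of S x Sigma x S

    DependencyAutomaton(int s0, Set<Integer> states, List<Transition> rho) {
        this.initialState = s0;
        this.states = states;
        this.transitions = rho;
    }

    // Transitions enabled in state s whose label accepts event e.
    List<Transition> enabledAccepting(int s, String e) {
        List<Transition> out = new ArrayList<>();
        for (Transition t : transitions)
            if (t.from() == s && t.label().accept() && t.label().events().contains(e))
                out.add(t);
        return out;
    }
}
```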
**Example Dependency Automata**
We represent $A_D$ as a labeled graph, whose nodes are states, and whose edges are transitions. Each edge is labeled with an element $\sigma$ of $\Sigma$. $\sigma$ denotes the actions, such as accept or reject, that are taken by the scheduler when that transition is executed.
In Figures 4 and 5 below, we give example dependency automata for the dependencies \( e_1 < e_2 \) and \( e_1 \rightarrow e_2 \), respectively. The symbol \( | \) indicates choice: an edge labeled \( \sigma|\sigma' \) may be followed if the scheduler permits either \( \sigma \) or \( \sigma' \).
**The Operation of an Automaton**
We assume for simplicity that each task can have at most one event in a given dependency, i.e., only intertask dependencies are explicitly considered. Thus the input alphabet for $A_D$, where $D$ is of the form $D(e_1, \ldots, e_n)$, is $\{e_1, \ldots, e_n, t_1, \ldots, t_n\}$. That is, the size of the input alphabet for $A_D$ is $2n$.
$A_D$ operates as follows. At any time, it is in some state, say, $s$. Initially, $s = s_0$. Events arrive sequentially. Let $\varepsilon$ be the current event. If $s$ has an outgoing edge labeled $a(\varepsilon)$ and incident on state $s'$, then the given transition is enabled. This means that, as far as its local state is concerned, $A_D$ can change its state to $s'$. However, $A_D$ cannot actually make the transition unless the scheduler permits it (see Section 5).
If the scheduler permits a certain transition, then the automaton can execute it, thereby changing its local state to keep in synchronization with respect to the events executed so far. The behavior of the scheduler is such that it accepts an event only if it can find an event ordering that is consistent with all of the dependency automata that contain that event in their input alphabet. So if it accepts an event, all the relevant automata must be in agreement. Therefore, each of them must execute the given accepting transition. This ensures that acceptance of the event does not violate any of the dependencies in which the event is mentioned. Similarly, the scheduler can reject an event only if all of the relevant automata reject it, i.e., only if it can find an event ordering that is consistent with all of the relevant dependency automata executing a rejecting transition for the event. The same reasoning as for accepting an event applies here, since the rejection of an event can also cause the violation of a dependency in which the event is mentioned. Section 5 discusses the operation of the scheduler in detail.
The following observations concern how a dependency automaton enforces a dependency. A \( t_i \) event indicates the termination, or timing out, of task \( i \). A dependency automaton cannot reject a \( t_i \) event, since it cannot unilaterally prevent such an event. The importance of \( t_i \) events is that their submission tells the automaton that the corresponding significant event of task \( i \) will no longer occur.
Figure 4: Dependency automaton for \( e_1 < e_2 \)

Figure 5: Dependency automaton for \( e_1 \rightarrow e_2 \)
**Dealing with Failures using Timeouts**
We have so far interpreted the \( t_i \) events as indicating the termination of task \( i \). Ordinarily, tasks terminate by committing or aborting. However, system problems, such as disk crashes and communication failures, may cause indefinite waits. For example, the automaton for \( e_1 < e_2 \), shown in Figure 4, delays accepting \( e_2 \) until \( t_1 \) or \( e_1 \) is submitted. Thus, this automaton could hang forever if neither \( t_1 \) nor \( e_1 \) is forthcoming.
One policy is to have the automaton accept \( e_2 \) when it arrives and reject \( e_1 \) if it arrives later. In general, this policy speeds up \( e_2 \)'s task at the cost of aborting \( e_1 \)'s task and, possibly, delaying or aborting the global task. In cases where both policies, i.e., indefinitely delaying an event and eagerly rejecting an event, are unacceptable, a policy based on timeouts may be preferred. This would require tasks to wait, but would allow timeouts to be generated when expected events are not received within a reasonable time. This is an improvement in practical terms, but does not require any significant change to our approach. We support timeouts by modifying the interpretation of the \( t_i \) events above, associating them with either the normal termination of a task or a timeout on the corresponding event \( e_i \).
We assume that $e_i$ is not submitted after $t_i$ has been submitted. This is easy enough to implement.
5 The Scheduler: Enforcing Multiple Dependencies
A system must enforce several dependencies at the same time. A naive approach would build the product of the individual automata (the \( A_D \)'s) that each enforce a single dependency. However, if there are \( m \) individual automata, each roughly of size \( N \), then the product automaton has size of the order of \( N^m \). This is intractable for all but the smallest \( m \). We avoid this "state explosion problem" [CG87] by coordinating the relevant individual automata at run-time, rather than building a static (and exponentially large) product at compile-time, using techniques similar to those of [AE89]. Although the worst-case time complexity is still exponential, we have reason to believe that in many interesting cases, e.g., certain workflows in telecommunications applications [ANRS92], the time complexity is polynomial. Also, the space complexity of our technique is polynomial, versus the exponential complexity of building the product automaton.
5.1 The Execution Model
Figure 6 shows the execution model. Events are submitted to the scheduler as tasks execute. We introduce the correctness criterion of viable pathsets, which is used to check whether all dependencies can be satisfied if a given event is executed. Computing a viable pathset requires looking at all relevant dependency automata. If an event can be accepted based on the viable pathset criterion, it is given to the event dispatcher for execution. If an event cannot be accepted immediately, then it still may be possible to execute it after other events occur, provided that the event is delayable. In that case, the event is put in the pending set and a decision taken on it later. If the scheduler ever permits the execution of an $r(e)$ transition by some automata, then $e$ is rejected, and a reject($e$) message is sent to the task that submitted $e$ to the scheduler.
5.2 Pathsets
We now discuss pathsets, present an algorithm to compute them, and discuss event execution in more detail. When an event \( \varepsilon \) is submitted, the scheduler searches for a pathset, i.e., a set of paths with one path from each relevant dependency automaton. The desired pathset must
1. accept \( \varepsilon \);
2. begin in the current global state of the scheduler;
3. be order-consistent;
4. be $a$-closed and $r$-closed; and
5. be executable.
A pathset accepts \( \varepsilon \) iff all of its member paths that mention \( \varepsilon \) accept it, and no path accepts the termination event associated with \( \varepsilon \). Order-consistency means that different paths in the set must agree on the order of execution of each pair of events. The requirements of a-closure and r-closure mean that, for any event that is accepted or rejected, paths from each automaton referring to that event must be included and must agree on whether to accept or reject it. Executable means that all rejected events must have been submitted, and all accepted events must have been submitted or be forcible. A pathset that meets criteria 2–5 is called viable. After some technical definitions, we give further intuition and present an algorithm to compute pathsets.
Definition 1 (Global State).
A global state \( s \) is a tuple \( (s_{D_1}, \ldots, s_{D_n}) \), where \( s_{D_i} \) is the local state of \( A_{D_i} \), and \( D_1, \ldots, D_n \) are all the dependencies in the system.
The global state is simply the aggregation of the local states of every individual dependency automaton.
Definition 2 (Path).
A path \( \pi_D \) in \( A_D \) is a sequence \( s^1 \xrightarrow{\sigma^1} s^2 \xrightarrow{\sigma^2} \cdots \) such that \( (\forall j \geq 1 : (s^j, \sigma^j, s^{j+1}) \in \rho_D) \), where \( \rho_D \) is the transition relation of \( A_D \).
A global computation is a sequence of events as executed by the event dispatcher. Recall that \( A_D \) is meant to encode all the computations that satisfy dependency \( D \). Thus, each path of \( A_D \) represents computations that satisfy \( D \). Furthermore, \( A_D \) is maximal in the sense that every possible computation whose prefixes satisfy \( D \) is represented by some path in \( A_D \). By definition, a global computation must consist solely of events accepted by the scheduler. Our scheduler has the property that, for each dependency \( D \), the projection of any global computation onto the events in \( D \) is represented by a path in \( A_D \). This means that our scheduler enforces each dependency.
Definition 3 (Pathset).
A pathset is a set \( \Pi \) of paths such that:
1. Each element of \( \Pi \) is a path in some \( A_D \).
2. Each \( A_D \) contributes at most one path to \( \Pi \).
As mentioned in Section 5.1, when an event \( \varepsilon \) is submitted to the scheduler, the scheduler attempts to execute \( \varepsilon \) by finding a viable pathset \( \Pi \) that accepts \( \varepsilon \). If such a pathset is found, then all events accepted by the pathset are executed in an order consistent with that imposed by the pathset. This results in the global state of the scheduler being updated appropriately. If such a pathset is not found, then event \( \varepsilon \) is placed in the pending set. Another attempt at finding a suitable pathset is made when other events affecting the acceptability of \( \varepsilon \) are submitted. Event \( \varepsilon \) remains in the pending set until a viable pathset is executed that either accepts or rejects it. In either case, the task that submitted \( \varepsilon \) is informed of the decision.
5.3 The Pathset Search Algorithm
In Figure 7, we present a (recursive) procedure search_Π that searches for viable pathsets. The procedure is initially called as search_Π(∅). The event to be executed, \( \varepsilon \), and other necessary data structures are assumed to be global for simplicity (they are passed as parameters in the actual implementation). The search procedure attempts to construct a viable pathset by selecting paths (from each relevant automaton) that are order-consistent with \( \Pi \) and executable. If these paths contain \( a(\varepsilon) \) or \( r(\varepsilon) \) events that occur in automata outside the set of automata being considered, those automata are also considered, to ensure a-closure and r-closure of the eventual solution.
The function get_candidate_paths(\( A, \Pi \)) returns a set of executable paths from automaton \( A \) that are order-consistent with all paths in \( \Pi \). Some of the returned paths may be extensions of paths already in \( \Pi \). We now establish some correctness properties of the pathset search algorithm. Most proofs are not included here for brevity, but appear in [ASRS92].
Lemma 1 For any event \( \varepsilon \) and global state \( s \), if search_Π(∅) terminates with result \( \Pi \neq \emptyset \), then \( \Pi \) is viable (w.r.t. global state \( s \)) and accepts \( \varepsilon \).
Proof sketch. We show that each of the clauses of the definition of viable pathsets is satisfied. The search for a pathset always begins in the current global state. New paths that are added to the candidate pathset (\( \Pi_c \)) are executable and order-consistent with \( \Pi \), by definition of the get_candidate_paths function. The search terminates when \( \Pi \) is either empty or both a-closed and r-closed.

```
search_Π(Π)
  if r-closed(Π) and a-closed(Π) and Π accepts ε then
    return Π
  else
    let A be an automaton needed to close off Π
    Π* := get_candidate_paths(A, Π)
    for each π in Π*
      Π' := search_Π(Π ∪ {π})
      if Π' ≠ ∅ then
        /* Π' is viable; end all recursive calls */
        return Π'
    endfor
    /* every π in Π* failed, so return ∅ */
    return ∅
```

Figure 7: Pathset Search Algorithm
Lemma 2 search_Π(∅) always terminates.
Proof sketch.
The essential idea is that because the number of automata is finite and each automaton has finitely many paths, only finitely many candidate pathsets need to be considered. Thus the algorithm terminates.
5.4 The Scheduler
The scheduler is a nonterminating loop which, on each iteration, attempts to execute an event \( \varepsilon \) that has just been submitted or is in the pending set (Figure 6). It does this by invoking search_Π(∅). If this invocation returns a nonempty \( \Pi \), then \( \Pi \) is immediately executed; otherwise, \( \varepsilon \) is placed in the pending set. \( \Pi \) is executed by (a) accepting the events that \( \Pi \) accepts, in a partial order consistent with \( \Pi \), and (b) rejecting all events rejected by \( \Pi \).
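A condensed sketch of this loop (hypothetical names; the viable-pathset machinery of Sections 5.2 and 5.3 is stubbed out, and a pathset is represented only by the set of events it accepts):

```java
import java.util.*;

// Hypothetical sketch of the scheduler loop described above.
class Scheduler {
    private final Set<String> pending = new LinkedHashSet<>(); // delayed events

    // Called for every newly submitted event, and re-tried for pending ones.
    void onSubmit(String event) {
        Set<String> accepted = searchViablePathset(event); // search_Π(∅) for this event
        if (!accepted.isEmpty()) {
            execute(accepted);           // accept events in a pathset-consistent order
            pending.removeAll(accepted);
            // Other pending events may have become schedulable now.
            for (String e : new ArrayList<>(pending)) onSubmit(e);
        } else {
            pending.add(event);          // decide later, when related events arrive
        }
    }

    // Stubs standing in for the machinery of Sections 5.2-5.3.
    private Set<String> searchViablePathset(String e) { return Collections.emptySet(); }
    private void execute(Set<String> events) { }
}
```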
Definition 4 (Path Projection).
The projection \( \eta|_D \) of a global computation \( \eta \) onto a dependency automaton \( A_D \) is the path obtained from \( \eta \) by removing all transitions \( \varepsilon \) such that \( \varepsilon \notin \Sigma_D \).
Lemma 3 Let \( \eta \) be a global computation generated by the scheduler. Then, for every dependency \( D \), \( \eta|_D \) is a path in \( A_D \).
Proof sketch. By construction of the scheduler.
The paths in \( \Pi_* \) returned by get_candidate_paths are examined in arbitrary order. The quality of the generated pathset could be improved if the paths in \( \Pi_* \) were examined according to some appropriate criterion, such as minimal length or maximal acceptance. We are currently experimenting with such criteria.
5.5 Example of Scheduler Operation
We now give an example of how relaxed transactions expressed with < and → can be scheduled using our algorithm. For simplicity, let the only dependencies in force be \( e_1 < e_2 \) and \( e_1 \rightarrow e_2 \), where both \( e_1 \) and \( e_2 \) are rejectable and delayable. Let \( A_< \) and \( A_\rightarrow \) be the corresponding automata, as shown in Figures 4 and 5. Assume that \( e_1 \) is submitted first. We find \( a(\{e_1\}) \) in \( A_< \). However, since no executable path in \( A_\rightarrow \) begins with \( a(\{e_1\}) \), the empty pathset is returned and \( e_1 \) is added to the pending set. When \( e_2 \) is submitted, two executable paths can be found in \( A_\rightarrow \): \( a(\{e_2\}); a(\{e_1\}) \) and \( a(\{e_2\}) \parallel a(\{e_1\}) \). The a-closure requirement now forces the scheduler to search \( A_< \) for a path that accepts \( e_1 \) and \( e_2 \). The only such path is \( a(\{e_1\}); a(\{e_2\}) \). Since \( a(\{e_1\}); a(\{e_2\}) \) and \( a(\{e_2\}); a(\{e_1\}) \) are not mutually order-consistent, the only viable pathset is \( \{a(\{e_1\}); a(\{e_2\}),\; a(\{e_2\}) \parallel a(\{e_1\})\} \). This is finally returned. The partial order consistent with it is: \( e_1 \) and then \( e_2 \).
Table 2 shows how the axioms for the Saga transaction model [GS87], which were formulated in [CR92] using the ACTA formalism, can be expressed using the < and → primitives. A Saga, \( \Sigma \), is a sequence of subtransactions \( T_i \), \( i = 1, \ldots, n \). The term 'post' denotes the postcondition of the given event. The Saga commits iff all subtransactions are successfully executed in the specified order; otherwise, if one of the subtransactions aborts, the Saga aborts and the compensating transactions \( CT_i \) are executed in reverse order. Since the specifications use only the < and → primitives, our scheduler can be used to execute relaxed transactions with Saga semantics.
6 Executing Multidatabase Transactions
Three issues in executing multidatabase transactions are: concurrency control, safety, and recoverability.
6.1 Concurrency Control
Our scheduler is a part of a multidatabase environment in which local database systems (LDBSs) cooperate in the execution of global transactions. Each LDBS will, in general, contain a concurrency control module, which enforces local concurrency control (typically ensuring local serializability). We may assume that a task executing at each of the local systems has a serialization event that determines its position in the local serialization order. For example, if the local system uses two-phase locking (2PL), the serialization order of a local transaction is determined by its lock point, i.e., the point at which the last lock of the transaction is granted.
A problem arises if local concurrency control modules impose an inconsistent ordering on serialization events of tasks belonging to a given multidatabase application.
We resolve this problem by transferring the responsibility for global concurrency control to the scheduler. This is achieved by restating the concurrency control obligations as a set of dependencies, which are then treated like other dependencies. Unlike other scheduling dependencies, however, concurrency control dependencies arise at run-time, when a serialization precedence between tasks in different applications is established at some site. Once these dependencies are added, though, there is no difference in how they are treated. Thus we have a uniform mechanism for both dependency enforcement and concurrency control.
The main difficulty in this approach is that the serialization events are neither reported by the local concurrency controllers, nor can they be deduced from the temporal order of other significant events controlled by the global scheduler (start, commit, terminate). It is possible for a local concurrency controller to completely execute task $T_i$ before task $T_j$ has even begun, yet serialize them in such a way that $T_j$ precedes $T_i$. This problem can be overcome by using the idea of tickets introduced in [GRS91]. As in [GRS91], we may add a ticket read and a ticket write operation to each task of a global application. These ticket read/write operations can be regarded as significant events, so their execution can be controlled by declaring dependencies that refer to them. The required concurrency control is then obtained simply by declaring an appropriate set of ticket access dependencies.
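A minimal sketch of the ticket technique follows, assuming one ticket counter per site; the schema and function names are ours, not from [GRS91]. Each global task reads the ticket and writes it back incremented inside its own transaction, so every pair of global tasks at the site conflicts on the ticket, and their relative local serialization order becomes observable:

```python
import sqlite3

def take_ticket(conn):
    """Read-and-increment the site's ticket inside the task's
    transaction; the returned value reveals the task's position in
    the local serialization order."""
    cur = conn.cursor()
    cur.execute("SELECT value FROM ticket")               # ticket read
    t = cur.fetchone()[0]
    cur.execute("UPDATE ticket SET value = ?", (t + 1,))  # ticket write
    return t

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (value INTEGER)")
conn.execute("INSERT INTO ticket VALUES (0)")
with conn:  # one transaction per task (simplified to one task here)
    print("task serialized at position", take_ticket(conn))
```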
### 6.2 Flexible Transaction Safety
A flexible transaction [ELL90] is defined as a set of subtransactions and their scheduling preconditions, along with a set of conditions over their final states. These conditions specify the acceptable termination states of the flexible transaction; it completes successfully if it terminates in such a state.
Consider the following example, adapted from [JN91]. We have a travel agent flexible transaction, consisting of reserve-flight ($F$) and reserve-car ($C$) subtransactions. If we fail to secure a car reservation, we wish to cancel the plane reservation. This cancellation is achieved by a subtransaction $F^{-}$, which is a compensating transaction for $F$. The set of acceptable termination states for the overall transaction is given in Table 3, where $in$, $cm$, and $ab$ indicate that the subtransaction is in its initial state, is committed, and is aborted, respectively. The set of acceptable states is a constraint on the execution of the flexible transaction; this constraint can equivalently be expressed as a set of dependencies over the subtransactions' significant events. A membership check against the table is sketched after it.
| $F$ | $F^{-}$ | $C$ |
|-----|---------|-----|
| in  | cm      | ab  |
| ab  | in      | cm  |
| cm  | cm      | ab  |
Table 3: Acceptable States of a Flexible Transaction
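A scheduler can consult this table directly when deciding whether a proposed termination is safe. A minimal membership check (names are ours) might look as follows:

```python
# Acceptable termination states from Table 3, keyed by (F, F-, C).
ACCEPTABLE = {
    ("in", "cm", "ab"),
    ("ab", "in", "cm"),
    ("cm", "cm", "ab"),
}

def is_safe_termination(f, f_comp, c):
    """True iff the flexible transaction may terminate with the
    given subtransaction states (each one of: in / cm / ab)."""
    return (f, f_comp, c) in ACCEPTABLE

print(is_safe_termination("cm", "cm", "ab"))   # True: flight compensated
print(is_safe_termination("cm", "in", "ab"))   # False: not in Table 3
```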
### 6.3 Recoverability
We will not deal extensively with the issue of recovery from failure in this paper. Suffice it to say that the following data must be checkpointed in order to enable recovery of the scheduler from a failure:
1. The current state of every dependency automaton.
2. Any (partially executed) pathset (see Section 5), plus the current state along every path in the pathset.
3. The set of pending events.
The above data is subject to concurrent updates that must be executed atomically with respect to the checkpointing mechanism. For example, when an event $\varepsilon$ is executed, the current state of every dependency automaton $A_D$ where $\varepsilon$ occurs in $D$ must be updated. We do not wish a checkpoint to reflect only some of these updates. It should either reflect none of them (corresponding to a state before $\varepsilon$ is executed), or reflect all of them (corresponding to a state after $\varepsilon$ is executed).
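For instance, event processing and checkpointing can be serialized by a single mutex, so that a checkpoint sees either none or all of the updates caused by an event. A minimal sketch along the lines of items 1-3 above (the state layout and names are ours; durable-write details are elided):

```python
import json, threading

class SchedulerState:
    def __init__(self):
        self.lock = threading.Lock()
        self.automaton_states = {}   # dependency -> current automaton state
        self.pathsets = []           # partially executed pathsets
        self.pending = set()         # pending events

    def apply_event(self, event, affected_deps):
        # All per-automaton updates for one event happen under the lock.
        with self.lock:
            for dep, new_state in affected_deps.items():
                self.automaton_states[dep] = new_state
            self.pending.discard(event)

    def checkpoint(self, path):
        # The same lock guarantees an all-or-nothing snapshot.
        with self.lock:
            snap = {"automata": self.automaton_states,
                    "pathsets": self.pathsets,
                    "pending": sorted(self.pending)}
        with open(path, "w") as f:
            json.dump(snap, f)
```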
In addition, the communication mechanism between the scheduler and the tasks must be persistent, so that no messages are lost while the scheduler is down (i.e., after a failure and before recovery from that failure).
Mailboxes or persistent pipes may be used to provide this functionality.
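A persistent mailbox can be approximated by an append-only log that is forced to disk before a send is acknowledged; on recovery, the scheduler replays the logged messages. A minimal sketch (the file format and names are ours):

```python
import os

class PersistentMailbox:
    def __init__(self, path):
        self.path = path

    def send(self, msg: str):
        # Append and force to disk so the message survives a crash.
        with open(self.path, "a") as f:
            f.write(msg + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self):
        # On recovery, re-deliver every logged message.
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f]
```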
7 Conclusions and Future Work
We addressed the problem of specifying and enforcing intertask dependencies. Our framework allows dependencies to be stated modularly and succinctly as constraints across tasks. The actual set of significant events is not predetermined, but can vary with the application. Our framework can be extended to accommodate the issues of concurrency control, flexible transaction safety, recoverability, and the enforcement of other dependencies that are introduced dynamically at run-time.
We showed how a dependency can be expressed as an automaton that captures all the computations satisfying that dependency. We presented a scheduling algorithm that enforces multiple dependencies at the same time, using the automata corresponding to each dependency, and we showed that every global computation generated by the scheduler satisfies all of the dependencies. We also showed how relaxed transaction models such as the Saga model can be captured in our framework. The desiderata for a task scheduler for multidatabase transaction processing include correctness (no dependencies are violated), safety (a transaction terminates only in an acceptable state), recoverability, and the quality and optimality of the generated schedules. We have established the correctness, safety, and recoverability of the scheduler; we are currently studying the quality of the schedules generated and the optimality of their generation.
An implementation of this work has been completed as part of the distribution services of the Carnot project [Ca91] at MCC. Our implementation is in the concurrent actor language Rosette, whose asynchrony and other features make for a natural realization of our execution model. Carnot enables the development of open applications that use information stored under the control of existing closed systems. The specification and run-time enforcement of data and intertask dependencies is an important component of this effort.
Acknowledgements We are indebted to Greg Meredith and Christine Tomlinson for discussions, and to Allen Emerson for advice on CTL. We also benefited from conversations with Phil Cannata and Darrell Woelk. Discussions of this paper at ETH-Zürich and comments by H. Ye were helpful. Sridhar Ganti provided the Sagas example.
A Computation Tree Logic (CTL)

The syntax of CTL formulae is defined inductively as follows:
1. Each of $p$, $f \land g$, and $\neg f$ is a formula, where $p$ is an atomic proposition and $f$ and $g$ are formulae ($\land$ and $\neg$ denote conjunction and negation, respectively).
2. $\text{EX}_i f$ is a formula that intuitively means that there is an immediate successor state, reachable by executing one step of process $P_i$, in which formula $f$ holds.
3. $A[f\,U g]$ is a formula that intuitively means that for every computation path, there is some state along the path where $g$ holds, and $f$ holds at every state along the path until that state.
4. $E[f\,U g]$ is a formula that intuitively means that for some computation path, there is some state along the path where $g$ holds, and $f$ holds at every state along the path until that state.
Formally, we give the semantics of CTL formulae with respect to a structure $M = (S, A_1, \ldots, A_k, L)$ that consists of:
- $S$ - a countable set of states
- $A_i \subseteq S \times S$, a binary relation on $S$ giving the possible transitions by process $i$, and
- $L$ - a labeling of each state with the set of atomic propositions true in the state.
Let $A = A_1 \cup \cdots \cup A_k$. We require that $A$ be total, i.e., that $\forall x \in S, \exists y : (x, y) \in A$. A fullpath is an infinite sequence of states $(s_0, s_1, s_2, \ldots)$ such that $\forall i\,(s_i, s_{i+1}) \in A$. To any structure $M$ and state $s_0 \in S$ of $M$, there corresponds a computation tree (whose nodes are labeled with occurrences of states) with root $s_0$, such that $s \rightarrow t$ is an arc in the tree iff $(s, t) \in A_i$ for some $i$.
We use the usual notation to indicate truth in a structure: $M, s_0 \models f$ means that $f$ is true at state $s_0$ in structure $M$. When the structure $M$ is understood, we write $s_0 \models f$. We define $\models$ inductively:
$$s_0 \models p \quad \text{iff } p \in L(s_0)$$
$$s_0 \models \neg f \quad \text{iff } s_0 \not\models f$$
$$s_0 \models f \land g \quad \text{iff } s_0 \models f \text{ and } s_0 \models g$$
$$s_0 \models \text{EX}_i f \quad \text{iff for some state } t, (s_0, t) \in A_i \text{ and } t \models f.$$
$$s_0 \models A[f\,U g] \quad \text{iff for all fullpaths } (s_0, s_1, \ldots),\ \exists i\,[i \geq 0 \land s_i \models g \land \forall j\,(0 \leq j < i \Rightarrow s_j \models f)]$$
$$s_0 \models E[f\,U g] \quad \text{iff for some fullpath } (s_0, s_1, \ldots),\ \exists i\,[i \geq 0 \land s_i \models g \land \forall j\,(0 \leq j < i \Rightarrow s_j \models f)].$$
We write $\models f$ to indicate that $f$ is valid, i.e., true at all states in all structures.
We introduce the abbreviations $f \lor g$ for $\neg(\neg f \land \neg g)$, $f \Rightarrow g$ for $\neg f \lor g$, and $f \equiv g$ for $(f \Rightarrow g) \land (g \Rightarrow f)$, denoting logical disjunction, implication, and equivalence, respectively. We also introduce a number of additional modalities as abbreviations: $AF f$ for $A[\mathit{true}\,U f]$, $EF f$ for $E[\mathit{true}\,U f]$, $AG f$ for $\neg EF \neg f$, $EG f$ for $\neg AF \neg f$, $A[f\,W g]$ for $\neg E[\neg g\,U(\neg f \land \neg g)]$, $E[f\,W g]$ for $\neg A[\neg g\,U(\neg f \land \neg g)]$, and $AX_i f$ for $\neg EX_i \neg f$. Particularly useful modalities are $AF f$, which means that for every path, there exists a state on the path where $f$ holds, and $AG f$, which means that $f$ holds at every state along every path.
A formula of the form $A[f\,U g]$ or $E[f\,U g]$ is an eventuality formula. An eventuality corresponds to a liveness property in that it makes a promise that something does happen; this promise must be fulfilled. The eventuality $A[f\,U g]$ (respectively, $E[f\,U g]$) is fulfilled for $s$ in $M$ provided that for every (respectively, for some) path starting at $s$, there exists a finite prefix of the path in $M$ whose last state satisfies $g$ and all of whose other states satisfy $f$. Since $AF g$ and $EF g$ are special cases of $A[f\,U g]$ and $E[f\,U g]$, respectively, they are also eventualities. In contrast, $A[f\,W g]$ and $E[f\,W g]$ (and their special cases $AG f$ and $EG f$) are invariance formulae. An invariance corresponds to a safety property, since it asserts that a certain condition holds throughout.
CTL is a propositional branching-time temporal logic. That is, it includes propositional logic and temporal operators. A CTL temporal operator is composed of a path-quantifier (either $A$, meaning for all possible computations, or $E$, meaning for some possible computation), followed by a linear temporal operator (one of $X$, $F$, $G$, or $U$). $Xp$ means that $p$ holds at the next point along the given computation; $Fp$ means that $p$ holds at some point along the given computation; $Gp$ means that $p$ holds at all points along the given computation; and $pUq$ means that $q$ holds at some point along the given computation and $p$ holds from the current point until that point.
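The semantic clauses above translate directly into a fixpoint computation over a finite structure. The following sketch is a minimal explicit-state evaluator for the $\neg$, $\land$, $EX$, $EU$, and $AU$ constructs, written for a single transition relation (i.e., $k = 1$); it illustrates the semantics only and is not the scheduler's enforcement algorithm:

```python
def sat(M, f):
    """Set of states of structure M = (S, R, L) satisfying CTL formula f.
    Formulae are nested tuples, e.g. ('AU', ('true',), ('ap', 'e2'))
    encodes A[true U e2], i.e., AF e2."""
    S, R, L = M
    op = f[0]
    if op == "true":
        return set(S)
    if op == "ap":
        return {s for s in S if f[1] in L[s]}
    if op == "not":
        return set(S) - sat(M, f[1])
    if op == "and":
        return sat(M, f[1]) & sat(M, f[2])
    if op == "EX":
        target = sat(M, f[1])
        return {s for s in S if R[s] & target}
    if op in ("EU", "AU"):
        F, Z = sat(M, f[1]), sat(M, f[2])   # least fixpoint from the g-states
        while True:
            if op == "EU":
                grow = {s for s in F if R[s] & Z}        # some successor in Z
            else:
                grow = {s for s in F if R[s] and R[s] <= Z}  # all successors in Z
            if grow <= Z:
                return Z
            Z |= grow
    raise ValueError(f"unknown operator {op}")

# A three-state structure: 0 -> 1 -> 2, with 2 looping on itself.
S = {0, 1, 2}
R = {0: {1}, 1: {2}, 2: {2}}
L = {0: set(), 1: {"e1"}, 2: {"e2"}}
print(sat((S, R, L), ("AU", ("true",), ("ap", "e2"))))  # AF e2: {0, 1, 2}
```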
### A.1 Expressing Dependencies in CTL
Atomic propositions naturally model the states of a given system; each proposition corresponds to a significant event and holds in the state immediately following the occurrence of that event.
Now we show how certain dependencies that were motivated and defined by other researchers can be expressed uniformly in CTL.
- **Order Dependency [Kl91]:** If both events $e_1$ and $e_2$ occur, then $e_1$ precedes $e_2$. This was expressed as $e_1 < e_2$ in the above discussion. In CTL, it becomes:
\[ AG[e_2 \Rightarrow AG\neg e_1] \]
That is, if $e_2$ occurs, then $e_1$ cannot occur subsequently.
- **Existence Dependency [Kl91]:** If event $e_1$ occurs sometimes, then event $e_2$ also occurs sometimes. This was expressed as $e_1 \rightarrow e_2$ in the above discussion. In CTL, it becomes:
\[ \neg E[\neg e_2 U(e_1 \land EG\neg e_2)] \]
That is, there is no computation such that $e_2$ does not occur until a state $s$ is reached where $s$ satisfies $(e_1 \land EG\neg e_2)$, i.e., $e_1$ is executed in state $s$, and subsequently, $e_2$ never occurs.
The following instances of the above dependencies have also appeared in the literature.
- **Commit Dependency [CR92]:** Transaction $A$ is commit-dependent on transaction $B$ iff, whenever both transactions commit, $A$ commits before $B$. Let the relevant significant events be denoted $cm_A$ and $cm_B$; the dependency is then $cm_A < cm_B$, rendered in CTL as:
\[ AG[cm_B \Rightarrow AG\neg cm_A] \]
- **Abort Dependency [CR92]:** Transaction $A$ is abort-dependent on transaction $B$ iff, whenever $B$ aborts, $A$ also aborts. Let the significant events here be $ab_A$ and $ab_B$; this can be written $ab_B \rightarrow ab_A$, and is rendered in CTL just like $e_1 \rightarrow e_2$ above:
\[ \neg E[\neg ab_A U (ab_B \land EG\neg ab_A)] \]
- **Conditional Existence Dependency [Kl91]:** If event $e_1$ occurs, then if event $e_2$ also occurs, then event $e_3$ must occur. That is, the existence dependency between $e_2$ and $e_3$ comes into force if $e_1$ occurs. This can be written $e_1 \rightarrow (e_2 \rightarrow e_3)$. Translating it to CTL involves two applications of the translation of $e_1 \rightarrow e_2$ given above, one nested inside the other. The first application, to $e_2 \rightarrow e_3$, yields the following "mixed" formula:
\[ e_1 \rightarrow \neg E[\neg e_3 U (e_2 \land EG\neg e_3)] \]
The second application, which substitutes $\neg E[\neg e_3 U (e_2 \land EG\neg e_3)]$ for $e_2$ in the CTL translation of $e_1 \rightarrow e_2$ given above, gives us
\[ \neg E[\neg\neg E[\neg e_3\,U(e_2 \land EG\neg e_3)]\;U\;(e_1 \land EG\,\neg\neg E[\neg e_3\,U(e_2 \land EG\neg e_3)])] \]
Eliminating the double negations finally yields the following formula:
\[ \neg E[\,E[\neg e_3\,U(e_2 \land EG\neg e_3)]\;U\;(e_1 \land EG\,E[\neg e_3\,U(e_2 \land EG\neg e_3)])\,] \]
### A.2 Expressing Real-time Dependencies in CTL
We use the variant of CTL called RTCTL$^{\geq}$ (Real-Time Computation Tree Logic) [EMSS93]. This is the same as CTL except that a temporal operator may carry a time bound; for example, $EF^{\geq t} f$ means that $f$ will hold after $t$ or more time units along some computation.
- **Real-time Order Dependency:** If both events $e_1$ and $e_2$ occur, then $e_1$ precedes $e_2$, and $e_2$ occurs within $t$ time units of $e_1$.
\[ AG[(e_2 \Rightarrow AG\neg e_1) \land (e_1 \Rightarrow \neg EF^{\geq t} e_2)] \]
- **Real-time Existence Dependency:** If event $e_1$ occurs sometimes, then event $e_2$ also occurs sometimes. Furthermore, $e_2$ occurs no later than $t$ time units after $e_1$.
\[ \neg E[\neg e_2\,U(e_1 \land EG\neg e_2)] \land \neg EF[e_1 \land EF^{\geq t} e_2] \]